Test Report: Docker_Windows 10722

                    
Commit: c7a4943b7f0d76d9c8b3f286387c80f827f0f80d

Failed tests (30/176)

Order  Failed test  Duration (s)
23 TestOffline 1090.14
33 TestCertOptions 1147.17
34 TestDockerFlags 1989.98
35 TestForceSystemdFlag 1189.23
36 TestForceSystemdEnv 2020.19
40 TestErrorSpam 1702.16
59 TestFunctional/serial/MinikubeKubectlCmdDirectly 19.7
79 TestFunctional/parallel/DockerEnv 42.85
134 TestMultiNode/serial/RestartMultiNode 380.41
166 TestSkaffold 201.88
169 TestRunningBinaryUpgrade 3334.18
171 TestKubernetesUpgrade 3548.65
172 TestMissingContainerUpgrade 3566.06
174 TestPause/serial/Start 946.41
194 TestStartStop/group/old-k8s-version/serial/FirstStart 1025.48
196 TestStartStop/group/no-preload/serial/FirstStart 1546.28
198 TestStartStop/group/embed-certs/serial/FirstStart 1106.33
200 TestStartStop/group/default-k8s-different-port/serial/FirstStart 1473.17
202 TestStartStop/group/newest-cni/serial/FirstStart 1998.21
203 TestStartStop/group/old-k8s-version/serial/DeployApp 13.61
206 TestStartStop/group/old-k8s-version/serial/SecondStart 970.01
209 TestNetworkPlugins/group/auto/Start 1111.57
211 TestNetworkPlugins/group/false/Start 1122.97
213 TestStartStop/group/embed-certs/serial/SecondStart 735.29
214 TestStartStop/group/no-preload/serial/DeployApp 23.58
215 TestNetworkPlugins/group/cilium/Start 1162.47
217 TestNetworkPlugins/group/calico/Start 1153.81
219 TestStartStop/group/no-preload/serial/SecondStart 265.09
220 TestStartStop/group/default-k8s-different-port/serial/DeployApp 808.61
221 TestNetworkPlugins/group/custom-weave/Start 983.86
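The failure table above is plain text; for triage it can be parsed mechanically. A minimal sketch (not part of the original report; it assumes rows keep the `order name duration` layout shown above):

```python
import re

# Matches rows like "23 TestOffline 1090.14": run order, test name, seconds.
ROW = re.compile(r"^(\d+)\s+(\S+)\s+(\d+(?:\.\d+)?)$")

def parse_failures(lines):
    """Return (order, test_name, duration_s) tuples for each table row."""
    out = []
    for line in lines:
        m = ROW.match(line.strip())
        if m:
            out.append((int(m.group(1)), m.group(2), float(m.group(3))))
    return out

rows = parse_failures([
    "23 TestOffline 1090.14",
    "171 TestKubernetesUpgrade 3548.65",
    "219 TestStartStop/group/no-preload/serial/SecondStart 265.09",
])
# Sort slowest-first to prioritize the most expensive failures.
slowest = sorted(rows, key=lambda r: -r[2])
print(slowest[0][1])  # TestKubernetesUpgrade
```

Sorting by duration quickly surfaces the ~1-hour upgrade tests that dominate this run.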
x
+
TestOffline (1090.14s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-20210310201637-6496 --alsologtostderr -v=1 --memory=2000 --wait=true --driver=docker
aab_offline_test.go:54: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p offline-docker-20210310201637-6496 --alsologtostderr -v=1 --memory=2000 --wait=true --driver=docker: exit status 1 (15m0.0439767s)
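Durations in this log use Go's duration format (e.g. `15m0.0439767s` in the exit status above, `1.2297997s` in later `cli_runner` lines). A small helper for converting them to seconds when analyzing these logs (a sketch added for convenience, not from the report itself):

```python
import re

_UNITS = {"h": 3600.0, "m": 60.0, "s": 1.0, "ms": 0.001}

def go_duration_seconds(text):
    """Convert a Go duration string such as '15m0.0439767s' to seconds.

    'ms' is tried before 'm' in the alternation so that milliseconds
    are not misread as minutes followed by a stray 's'.
    """
    total = 0.0
    for value, unit in re.findall(r"(\d+(?:\.\d+)?)(ms|h|m|s)", text):
        total += float(value) * _UNITS[unit]
    return total

print(round(go_duration_seconds("15m0.0439767s"), 3))  # 900.044
```

So the `start` command above consumed the full 15-minute test budget before the harness killed it.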

-- stdout --
	* [offline-docker-20210310201637-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - MINIKUBE_LOCATION=10722
	* Using the docker driver based on user configuration
	* Starting control plane node offline-docker-20210310201637-6496 in cluster offline-docker-20210310201637-6496
	* Creating docker container (CPUs=2, Memory=2000MB) ...
	* Found network options:
	  - HTTP_PROXY=172.16.1.1:1
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	  - http_proxy=172.16.1.1:1
	* Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	  - env HTTP_PROXY=172.16.1.1:1
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v4

-- /stdout --
** stderr ** 
	I0310 20:16:37.979652    1312 out.go:239] Setting OutFile to fd 2044 ...
	I0310 20:16:37.981649    1312 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 20:16:37.981649    1312 out.go:252] Setting ErrFile to fd 3020...
	I0310 20:16:37.981649    1312 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 20:16:38.000660    1312 out.go:246] Setting JSON to false
	I0310 20:16:38.004672    1312 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":32863,"bootTime":1615374535,"procs":112,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0310 20:16:38.005643    1312 start.go:116] gopshost.Virtualization returned error: not implemented yet
	I0310 20:16:38.013662    1312 out.go:129] * [offline-docker-20210310201637-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0310 20:16:38.018729    1312 out.go:129]   - MINIKUBE_LOCATION=10722
	I0310 20:16:38.029636    1312 driver.go:323] Setting default libvirt URI to qemu:///system
	I0310 20:16:38.763652    1312 docker.go:119] docker version: linux-20.10.2
	I0310 20:16:38.793143    1312 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 20:16:40.023096    1312 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.2297997s)
	I0310 20:16:40.024345    1312 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:50 SystemTime:2021-03-10 20:16:39.4767332 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 20:16:40.029195    1312 out.go:129] * Using the docker driver based on user configuration
	I0310 20:16:40.029195    1312 start.go:276] selected driver: docker
	I0310 20:16:40.029840    1312 start.go:718] validating driver "docker" against <nil>
	I0310 20:16:40.030130    1312 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0310 20:16:41.184643    1312 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 20:16:42.313693    1312 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.1290532s)
	I0310 20:16:42.315011    1312 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:46 SystemTime:2021-03-10 20:16:41.8059518 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 20:16:42.315859    1312 start_flags.go:253] no existing cluster config was found, will generate one from the flags 
	I0310 20:16:42.316579    1312 start_flags.go:717] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0310 20:16:42.316816    1312 cni.go:74] Creating CNI manager for ""
	I0310 20:16:42.316816    1312 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	I0310 20:16:42.316816    1312 start_flags.go:398] config:
	{Name:offline-docker-20210310201637-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:offline-docker-20210310201637-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 20:16:42.325825    1312 out.go:129] * Starting control plane node offline-docker-20210310201637-6496 in cluster offline-docker-20210310201637-6496
	I0310 20:16:43.197985    1312 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	I0310 20:16:43.198325    1312 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	I0310 20:16:43.198561    1312 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	I0310 20:16:43.199434    1312 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	I0310 20:16:43.199434    1312 cache.go:54] Caching tarball of preloaded images
	I0310 20:16:43.199434    1312 preload.go:131] Found C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0310 20:16:43.199434    1312 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on docker
	I0310 20:16:43.199434    1312 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\offline-docker-20210310201637-6496\config.json ...
	I0310 20:16:43.199434    1312 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\offline-docker-20210310201637-6496\config.json: {Name:mk3ff36c38fa88893b446ccd288c51767e5c4d53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:16:43.224088    1312 cache.go:185] Successfully downloaded all kic artifacts
	I0310 20:16:43.224785    1312 start.go:313] acquiring machines lock for offline-docker-20210310201637-6496: {Name:mk3a42806157f95dddd9c6a37875907c82b13ff0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:16:43.224785    1312 start.go:317] acquired machines lock for "offline-docker-20210310201637-6496" in 0s
	I0310 20:16:43.224785    1312 start.go:89] Provisioning new machine with config: &{Name:offline-docker-20210310201637-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:offline-docker-20210310201637-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
	I0310 20:16:43.224785    1312 start.go:126] createHost starting for "" (driver="docker")
	I0310 20:16:43.230536    1312 out.go:150] * Creating docker container (CPUs=2, Memory=2000MB) ...
	I0310 20:16:43.233404    1312 start.go:160] libmachine.API.Create for "offline-docker-20210310201637-6496" (driver="docker")
	I0310 20:16:43.233404    1312 client.go:168] LocalClient.Create starting
	I0310 20:16:43.234161    1312 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\ca.pem
	I0310 20:16:43.234789    1312 main.go:121] libmachine: Decoding PEM data...
	I0310 20:16:43.235346    1312 main.go:121] libmachine: Parsing certificate...
	I0310 20:16:43.235769    1312 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\cert.pem
	I0310 20:16:43.236233    1312 main.go:121] libmachine: Decoding PEM data...
	I0310 20:16:43.236233    1312 main.go:121] libmachine: Parsing certificate...
	I0310 20:16:43.263018    1312 cli_runner.go:115] Run: docker network inspect offline-docker-20210310201637-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0310 20:16:44.109681    1312 cli_runner.go:162] docker network inspect offline-docker-20210310201637-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0310 20:16:44.135599    1312 network_create.go:240] running [docker network inspect offline-docker-20210310201637-6496] to gather additional debugging logs...
	I0310 20:16:44.135599    1312 cli_runner.go:115] Run: docker network inspect offline-docker-20210310201637-6496
	W0310 20:16:45.049371    1312 cli_runner.go:162] docker network inspect offline-docker-20210310201637-6496 returned with exit code 1
	I0310 20:16:45.049371    1312 network_create.go:243] error running [docker network inspect offline-docker-20210310201637-6496]: docker network inspect offline-docker-20210310201637-6496: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: offline-docker-20210310201637-6496
	I0310 20:16:45.049810    1312 network_create.go:245] output of [docker network inspect offline-docker-20210310201637-6496]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: offline-docker-20210310201637-6496
	
	** /stderr **
	I0310 20:16:45.067746    1312 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0310 20:16:45.869809    1312 network.go:193] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0310 20:16:45.880167    1312 network_create.go:91] attempt to create network 192.168.49.0/24 with subnet: offline-docker-20210310201637-6496 and gateway 192.168.49.1 and MTU of 1500 ...
	I0310 20:16:45.889681    1312 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20210310201637-6496
	W0310 20:16:46.609867    1312 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20210310201637-6496 returned with exit code 1
	W0310 20:16:46.610927    1312 out.go:191] ! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts
	! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts
	I0310 20:16:46.650439    1312 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0310 20:16:47.648125    1312 cli_runner.go:115] Run: docker volume create offline-docker-20210310201637-6496 --label name.minikube.sigs.k8s.io=offline-docker-20210310201637-6496 --label created_by.minikube.sigs.k8s.io=true
	I0310 20:16:48.435859    1312 oci.go:102] Successfully created a docker volume offline-docker-20210310201637-6496
	I0310 20:16:48.448864    1312 cli_runner.go:115] Run: docker run --rm --name offline-docker-20210310201637-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-20210310201637-6496 --entrypoint /usr/bin/test -v offline-docker-20210310201637-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib
	I0310 20:16:53.112042    1312 cli_runner.go:168] Completed: docker run --rm --name offline-docker-20210310201637-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-20210310201637-6496 --entrypoint /usr/bin/test -v offline-docker-20210310201637-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib: (4.6631933s)
	I0310 20:16:53.112042    1312 oci.go:106] Successfully prepared a docker volume offline-docker-20210310201637-6496
	I0310 20:16:53.113753    1312 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	I0310 20:16:53.113995    1312 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	I0310 20:16:53.113995    1312 kic.go:175] Starting extracting preloaded images to volume ...
	I0310 20:16:53.127998    1312 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-20210310201637-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir
	I0310 20:16:53.132282    1312 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	W0310 20:16:53.931531    1312 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-20210310201637-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I0310 20:16:53.932452    1312 kic.go:182] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-20210310201637-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: [garbled serialized System.Exception payload]
	
	The notification platform is unavailable.
	
	   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)
	   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.WPF\PromptShareDirectory.cs:line 53
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__8.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 95
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 55
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.HttpApi\Controllers\FilesharingController.cs:line 21
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()
	[remainder of the serialized exception is binary debris; recoverable fields: CreateToastNotifier in Windows.UI.Notifications.ToastNotificationManager (Windows.UI, ContentType=WindowsRuntime); RestrictedDescription: "The notification platform is unavailable."]
	See 'docker run --help'.
	I0310 20:16:54.480781    1312 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.3483056s)
	I0310 20:16:54.481160    1312 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:59 SystemTime:2021-03-10 20:16:53.792592 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[]
ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 20:16:54.500560    1312 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0310 20:16:55.684889    1312 cli_runner.go:168] Completed: docker info --format "'{{json .SecurityOptions}}'": (1.1843332s)
	I0310 20:16:55.701628    1312 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname offline-docker-20210310201637-6496 --name offline-docker-20210310201637-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-20210310201637-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=offline-docker-20210310201637-6496 --volume offline-docker-20210310201637-6496:/var --security-opt apparmor=unconfined --memory=2000mb --memory-swap=2000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
	I0310 20:17:01.445733    1312 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname offline-docker-20210310201637-6496 --name offline-docker-20210310201637-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-20210310201637-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=offline-docker-20210310201637-6496 --volume offline-docker-20210310201637-6496:/var --security-opt apparmor=unconfined --memory=2000mb --memory-swap=2000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e: (5.7438739s)
	I0310 20:17:01.455651    1312 cli_runner.go:115] Run: docker container inspect offline-docker-20210310201637-6496 --format={{.State.Running}}
	I0310 20:17:02.092120    1312 cli_runner.go:115] Run: docker container inspect offline-docker-20210310201637-6496 --format={{.State.Status}}
	I0310 20:17:02.748146    1312 cli_runner.go:115] Run: docker exec offline-docker-20210310201637-6496 stat /var/lib/dpkg/alternatives/iptables
	I0310 20:17:04.480006    1312 cli_runner.go:168] Completed: docker exec offline-docker-20210310201637-6496 stat /var/lib/dpkg/alternatives/iptables: (1.7315509s)
	I0310 20:17:04.480006    1312 oci.go:278] the created container "offline-docker-20210310201637-6496" has a running status.
	I0310 20:17:04.480300    1312 kic.go:206] Creating ssh key for kic: C:\Users\jenkins\.minikube\machines\offline-docker-20210310201637-6496\id_rsa...
	I0310 20:17:05.085664    1312 kic_runner.go:188] docker (temp): C:\Users\jenkins\.minikube\machines\offline-docker-20210310201637-6496\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0310 20:17:07.643727    1312 cli_runner.go:115] Run: docker container inspect offline-docker-20210310201637-6496 --format={{.State.Status}}
	I0310 20:17:08.366660    1312 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0310 20:17:08.366660    1312 kic_runner.go:115] Args: [docker exec --privileged offline-docker-20210310201637-6496 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0310 20:17:09.944739    1312 kic_runner.go:124] Done: [docker exec --privileged offline-docker-20210310201637-6496 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.5780839s)
	I0310 20:17:09.951885    1312 kic.go:240] ensuring only current user has permissions to key file located at : C:\Users\jenkins\.minikube\machines\offline-docker-20210310201637-6496\id_rsa...
	I0310 20:17:10.833254    1312 cli_runner.go:115] Run: docker container inspect offline-docker-20210310201637-6496 --format={{.State.Status}}
	I0310 20:17:11.537049    1312 machine.go:88] provisioning docker machine ...
	I0310 20:17:11.537483    1312 ubuntu.go:169] provisioning hostname "offline-docker-20210310201637-6496"
	I0310 20:17:11.557880    1312 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20210310201637-6496
	I0310 20:17:12.213957    1312 main.go:121] libmachine: Using SSH client type: native
	I0310 20:17:12.231123    1312 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55106 <nil> <nil>}
	I0310 20:17:12.231123    1312 main.go:121] libmachine: About to run SSH command:
	sudo hostname offline-docker-20210310201637-6496 && echo "offline-docker-20210310201637-6496" | sudo tee /etc/hostname
	I0310 20:17:14.381380    1312 main.go:121] libmachine: SSH cmd err, output: <nil>: offline-docker-20210310201637-6496
	
	I0310 20:17:14.389382    1312 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20210310201637-6496
	I0310 20:17:15.088099    1312 main.go:121] libmachine: Using SSH client type: native
	I0310 20:17:15.088668    1312 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55106 <nil> <nil>}
	I0310 20:17:15.088923    1312 main.go:121] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\soffline-docker-20210310201637-6496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 offline-docker-20210310201637-6496/g' /etc/hosts;
				else 
					echo '127.0.1.1 offline-docker-20210310201637-6496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0310 20:17:16.253105    1312 main.go:121] libmachine: SSH cmd err, output: <nil>: 
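The hostname step just completed is idempotent: the SSH command only touches `/etc/hosts` when no entry for the new hostname exists, rewriting an existing `127.0.1.1` line in place or appending one otherwise. A minimal local sketch of the same logic, run against a throwaway file (the hostname and file contents below are made up for illustration, not taken from this run):

```shell
# Stand-in hostname and a temp file standing in for /etc/hosts.
HOSTNAME_NEW="offline-docker-test"
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

# Only modify the file if no line already names the new hostname.
if ! grep -q "[[:space:]]$HOSTNAME_NEW\$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    # Rewrite the existing 127.0.1.1 entry in place.
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $HOSTNAME_NEW/" "$HOSTS"
  else
    # No 127.0.1.1 line yet: append a fresh entry instead.
    echo "127.0.1.1 $HOSTNAME_NEW" >> "$HOSTS"
  fi
fi
cat "$HOSTS"
```

Running the snippet twice leaves the file unchanged on the second pass, which is why the provisioner can safely re-run it on restarts.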
	I0310 20:17:16.253352    1312 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	I0310 20:17:16.253477    1312 ubuntu.go:177] setting up certificates
	I0310 20:17:16.253477    1312 provision.go:83] configureAuth start
	I0310 20:17:16.261038    1312 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" offline-docker-20210310201637-6496
	I0310 20:17:16.901982    1312 provision.go:137] copyHostCerts
	I0310 20:17:16.902975    1312 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	I0310 20:17:16.903083    1312 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	I0310 20:17:16.904017    1312 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	I0310 20:17:16.917476    1312 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	I0310 20:17:16.917476    1312 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	I0310 20:17:16.917476    1312 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	I0310 20:17:16.921446    1312 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	I0310 20:17:16.921446    1312 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	I0310 20:17:16.922454    1312 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	I0310 20:17:16.924520    1312 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.offline-docker-20210310201637-6496 san=[172.17.0.4 127.0.0.1 localhost 127.0.0.1 minikube offline-docker-20210310201637-6496]
	I0310 20:17:17.144812    1312 provision.go:165] copyRemoteCerts
	I0310 20:17:17.154727    1312 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0310 20:17:17.166637    1312 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20210310201637-6496
	I0310 20:17:17.857090    1312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55106 SSHKeyPath:C:\Users\jenkins\.minikube\machines\offline-docker-20210310201637-6496\id_rsa Username:docker}
	I0310 20:17:18.363206    1312 ssh_runner.go:189] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.2074695s)
	I0310 20:17:18.363810    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0310 20:17:18.763551    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1249 bytes)
	I0310 20:17:19.256757    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0310 20:17:19.645880    1312 provision.go:86] duration metric: configureAuth took 3.3924135s
	I0310 20:17:19.646147    1312 ubuntu.go:193] setting minikube options for container-runtime
	I0310 20:17:19.664922    1312 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20210310201637-6496
	I0310 20:17:20.300320    1312 main.go:121] libmachine: Using SSH client type: native
	I0310 20:17:20.300613    1312 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55106 <nil> <nil>}
	I0310 20:17:20.300852    1312 main.go:121] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0310 20:17:21.302916    1312 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0310 20:17:21.303470    1312 ubuntu.go:71] root file system type: overlay
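The rootfs probe above uses GNU `df`'s `--output=fstype` column and trims the header row with `tail`; minikube branches on the result (here, `overlay`) when templating the docker unit. The same probe can be run locally (the printed value depends on the host, so none is asserted here):

```shell
# Print only the filesystem-type column for /, then drop the header line.
FSTYPE=$(df --output=fstype / | tail -n 1)
echo "rootfs type: ${FSTYPE}"
```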
	I0310 20:17:21.304100    1312 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	I0310 20:17:21.316570    1312 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20210310201637-6496
	I0310 20:17:21.972423    1312 main.go:121] libmachine: Using SSH client type: native
	I0310 20:17:21.973201    1312 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55106 <nil> <nil>}
	I0310 20:17:21.973425    1312 main.go:121] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="HTTP_PROXY=172.16.1.1:1"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0310 20:17:22.771346    1312 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=HTTP_PROXY=172.16.1.1:1
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0310 20:17:22.786092    1312 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20210310201637-6496
	I0310 20:17:23.434286    1312 main.go:121] libmachine: Using SSH client type: native
	I0310 20:17:23.434286    1312 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55106 <nil> <nil>}
	I0310 20:17:23.434286    1312 main.go:121] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0310 20:17:31.883300    1312 main.go:121] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-01-29 14:31:32.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-03-10 20:17:22.747110000 +0000
	@@ -1,30 +1,33 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+Environment=HTTP_PROXY=172.16.1.1:1
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +35,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0310 20:17:31.883598    1312 machine.go:91] provisioned docker machine in 20.3464495s
	I0310 20:17:31.883829    1312 client.go:171] LocalClient.Create took 48.650028s
	I0310 20:17:31.883829    1312 start.go:168] duration metric: libmachine.API.Create for "offline-docker-20210310201637-6496" took 48.6505825s
	I0310 20:17:31.883829    1312 start.go:267] post-start starting for "offline-docker-20210310201637-6496" (driver="docker")
	I0310 20:17:31.884140    1312 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0310 20:17:31.900677    1312 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0310 20:17:31.908682    1312 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20210310201637-6496
	I0310 20:17:32.607257    1312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55106 SSHKeyPath:C:\Users\jenkins\.minikube\machines\offline-docker-20210310201637-6496\id_rsa Username:docker}
	I0310 20:17:32.953149    1312 ssh_runner.go:189] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.0524747s)
	I0310 20:17:32.972616    1312 ssh_runner.go:149] Run: cat /etc/os-release
	I0310 20:17:33.060808    1312 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0310 20:17:33.061040    1312 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0310 20:17:33.061040    1312 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0310 20:17:33.061040    1312 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0310 20:17:33.061335    1312 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	I0310 20:17:33.061996    1312 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	I0310 20:17:33.065748    1312 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	I0310 20:17:33.067452    1312 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	I0310 20:17:33.083312    1312 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	I0310 20:17:33.132059    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	I0310 20:17:33.386121    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	I0310 20:17:33.615661    1312 start.go:270] post-start completed in 1.7315266s
	I0310 20:17:33.647417    1312 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" offline-docker-20210310201637-6496
	I0310 20:17:34.346079    1312 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\offline-docker-20210310201637-6496\config.json ...
	I0310 20:17:34.398557    1312 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0310 20:17:34.410633    1312 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20210310201637-6496
	I0310 20:17:35.219092    1312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55106 SSHKeyPath:C:\Users\jenkins\.minikube\machines\offline-docker-20210310201637-6496\id_rsa Username:docker}
	I0310 20:17:35.628583    1312 ssh_runner.go:189] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.2293768s)
	I0310 20:17:35.628812    1312 start.go:129] duration metric: createHost completed in 52.4041952s
	I0310 20:17:35.628812    1312 start.go:80] releasing machines lock for "offline-docker-20210310201637-6496", held for 52.4041952s
	I0310 20:17:35.645198    1312 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" offline-docker-20210310201637-6496
	I0310 20:17:36.385011    1312 out.go:129] * Found network options:
	I0310 20:17:36.385011    1312 out.go:129]   - HTTP_PROXY=172.16.1.1:1
	W0310 20:17:36.385011    1312 out.go:191] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (172.17.0.4).
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (172.17.0.4).
	I0310 20:17:36.403530    1312 out.go:129] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I0310 20:17:36.413379    1312 out.go:129]   - http_proxy=172.16.1.1:1
	I0310 20:17:36.417016    1312 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0310 20:17:36.430115    1312 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20210310201637-6496
	I0310 20:17:36.433004    1312 ssh_runner.go:149] Run: systemctl --version
	I0310 20:17:36.444352    1312 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20210310201637-6496
	I0310 20:17:37.218877    1312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55106 SSHKeyPath:C:\Users\jenkins\.minikube\machines\offline-docker-20210310201637-6496\id_rsa Username:docker}
	I0310 20:17:37.260386    1312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55106 SSHKeyPath:C:\Users\jenkins\.minikube\machines\offline-docker-20210310201637-6496\id_rsa Username:docker}
	I0310 20:17:37.604224    1312 ssh_runner.go:189] Completed: systemctl --version: (1.1712231s)
	I0310 20:17:37.618944    1312 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0310 20:17:38.054659    1312 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 20:17:38.058003    1312 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.6405565s)
	I0310 20:17:38.235867    1312 cruntime.go:206] skipping containerd shutdown because we are bound to it
	I0310 20:17:38.248616    1312 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0310 20:17:38.409996    1312 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0310 20:17:38.580214    1312 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 20:17:38.735975    1312 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0310 20:17:39.573385    1312 ssh_runner.go:149] Run: sudo systemctl start docker
	I0310 20:17:39.827820    1312 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0310 20:17:40.676402    1312 out.go:150] * Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	I0310 20:17:40.676402    1312 out.go:129]   - env HTTP_PROXY=172.16.1.1:1
	I0310 20:17:40.692645    1312 cli_runner.go:115] Run: docker exec -t offline-docker-20210310201637-6496 dig +short host.docker.internal
	I0310 20:17:42.044287    1312 cli_runner.go:168] Completed: docker exec -t offline-docker-20210310201637-6496 dig +short host.docker.internal: (1.3516457s)
	I0310 20:17:42.044836    1312 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	I0310 20:17:42.053133    1312 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0310 20:17:42.123714    1312 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
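The `host.minikube.internal` update at 20:17:42 filters any stale entry out of `/etc/hosts`, appends the fresh mapping to a temp file, and installs it with `sudo cp` — plain redirection would not work because the redirect happens in the unprivileged shell, not under sudo. A sketch of the same filter-and-append step on a throwaway file (the stale `192.168.65.9` address is invented for the example):

```shell
# Throwaway stand-in for /etc/hosts containing a stale entry.
HOSTS=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.65.9\thost.minikube.internal\n' > "$HOSTS"

# Drop any line ending in host.minikube.internal, append the current
# mapping, then copy the temp file over the original (the step that
# needs sudo on a real host).
TMP=$(mktemp)
{ grep -v 'host.minikube.internal$' "$HOSTS"; \
  printf '192.168.65.2\thost.minikube.internal\n'; } > "$TMP"
cp "$TMP" "$HOSTS"
cat "$HOSTS"
```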
	I0310 20:17:42.275694    1312 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" offline-docker-20210310201637-6496
	I0310 20:17:42.960418    1312 localpath.go:92] copying C:\Users\jenkins\.minikube\client.crt -> C:\Users\jenkins\.minikube\profiles\offline-docker-20210310201637-6496\client.crt
	I0310 20:17:42.963686    1312 localpath.go:117] copying C:\Users\jenkins\.minikube\client.key -> C:\Users\jenkins\.minikube\profiles\offline-docker-20210310201637-6496\client.key
	I0310 20:17:42.963686    1312 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	I0310 20:17:42.963686    1312 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	I0310 20:17:42.977945    1312 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 20:17:43.738006    1312 docker.go:423] Got preloaded images: 
	I0310 20:17:43.738891    1312 docker.go:429] k8s.gcr.io/kube-proxy:v1.20.2 wasn't preloaded
	I0310 20:17:43.760867    1312 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0310 20:17:43.900294    1312 ssh_runner.go:149] Run: which lz4
	I0310 20:17:43.972011    1312 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0310 20:17:44.009974    1312 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0310 20:17:44.009974    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (515083977 bytes)
	I0310 20:24:07.366855    1312 docker.go:388] Took 383.415629 seconds to copy over tarball
	I0310 20:24:07.380002    1312 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0310 20:24:47.393123    1312 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (40.012827s)
	I0310 20:24:47.393123    1312 ssh_runner.go:100] rm: /preloaded.tar.lz4
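The preload transfer just finished is the dominant cost in this test: 515083977 bytes took roughly 383 seconds to scp into the container, only about 1.3 MB/s. A quick integer-arithmetic check of that rate, using the numbers reported in the log above:

```shell
# Bytes and seconds as reported by ssh_runner/docker.go in this log.
BYTES=515083977
SECS=383
RATE=$((BYTES / SECS))
# Roughly 1.3 MB/s — far below what a local copy into a Docker Desktop
# container should sustain, which is why the copy alone ate ~6.5 minutes.
echo "${RATE} bytes/s"
```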
	I0310 20:24:48.243077    1312 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0310 20:24:48.289875    1312 ssh_runner.go:316] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3125 bytes)
	I0310 20:24:48.434092    1312 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0310 20:24:48.768650    1312 ssh_runner.go:149] Run: sudo systemctl restart docker
	I0310 20:24:58.000100    1312 ssh_runner.go:189] Completed: sudo systemctl restart docker: (9.2315076s)
	I0310 20:24:58.010030    1312 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 20:24:58.834111    1312 docker.go:423] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.20.2
	k8s.gcr.io/kube-controller-manager:v1.20.2
	k8s.gcr.io/kube-apiserver:v1.20.2
	k8s.gcr.io/kube-scheduler:v1.20.2
	kubernetesui/dashboard:v2.1.0
	gcr.io/k8s-minikube/storage-provisioner:v4
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0310 20:24:58.834256    1312 cache_images.go:73] Images are preloaded, skipping loading
	I0310 20:24:58.844239    1312 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0310 20:24:59.977551    1312 ssh_runner.go:189] Completed: docker info --format {{.CgroupDriver}}: (1.1333196s)
	I0310 20:24:59.978347    1312 cni.go:74] Creating CNI manager for ""
	I0310 20:24:59.978347    1312 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	I0310 20:24:59.978347    1312 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0310 20:24:59.978347    1312 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.4 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:offline-docker-20210310201637-6496 NodeName:offline-docker-20210310201637-6496 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0310 20:24:59.978943    1312 kubeadm.go:154] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.0.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "offline-docker-20210310201637-6496"
	  kubeletExtraArgs:
	    node-ip: 172.17.0.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.0.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	
	I0310 20:24:59.979420    1312 kubeadm.go:919] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=offline-docker-20210310201637-6496 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.2 ClusterName:offline-docker-20210310201637-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0310 20:24:59.992559    1312 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
	I0310 20:25:00.097260    1312 binaries.go:44] Found k8s binaries, skipping transfer
	I0310 20:25:00.109448    1312 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0310 20:25:00.187076    1312 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (358 bytes)
	I0310 20:25:00.479426    1312 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0310 20:25:00.795207    1312 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1860 bytes)
	I0310 20:25:01.048609    1312 ssh_runner.go:149] Run: grep 172.17.0.4	control-plane.minikube.internal$ /etc/hosts
	I0310 20:25:01.082686    1312 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "172.17.0.4	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
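	The `/etc/hosts` command above uses a filter-then-append pattern so the `control-plane.minikube.internal` entry stays unique across restarts: strip any existing entry with `grep -v`, re-append the current IP, and copy the temp file over `/etc/hosts`. A minimal self-contained sketch of the same pattern against a throwaway file (the IP and hostname are taken from the log; this deliberately does not touch the real `/etc/hosts`):

```shell
# Idempotent hosts-entry update, mirroring { grep -v ...; echo ...; } > tmp; cp
TAB=$(printf '\t')
HOSTS=$(mktemp)
# Seed the file with a stale entry pointing at an old node IP.
printf '127.0.0.1\tlocalhost\n172.17.0.99\tcontrol-plane.minikube.internal\n' > "$HOSTS"
# Drop the stale entry, append the fresh one, then swap the file in atomically.
{ grep -v "${TAB}control-plane.minikube.internal\$" "$HOSTS"; \
  printf '172.17.0.4\tcontrol-plane.minikube.internal\n'; } > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"
grep -c 'control-plane.minikube.internal' "$HOSTS"   # exactly one entry survives
```

	Running the block twice leaves the file unchanged, which is why minikube can issue the command unconditionally on every start.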
	I0310 20:25:01.168380    1312 certs.go:52] Setting up C:\Users\jenkins\.minikube\profiles\offline-docker-20210310201637-6496 for IP: 172.17.0.4
	I0310 20:25:01.170006    1312 certs.go:171] skipping minikubeCA CA generation: C:\Users\jenkins\.minikube\ca.key
	I0310 20:25:01.170006    1312 certs.go:171] skipping proxyClientCA CA generation: C:\Users\jenkins\.minikube\proxy-client-ca.key
	I0310 20:25:01.171121    1312 certs.go:275] skipping minikube-user signed cert generation: C:\Users\jenkins\.minikube\profiles\offline-docker-20210310201637-6496\client.key
	I0310 20:25:01.171121    1312 certs.go:279] generating minikube signed cert: C:\Users\jenkins\.minikube\profiles\offline-docker-20210310201637-6496\apiserver.key.fb01c024
	I0310 20:25:01.171121    1312 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\offline-docker-20210310201637-6496\apiserver.crt.fb01c024 with IP's: [172.17.0.4 10.96.0.1 127.0.0.1 10.0.0.1]
	I0310 20:25:01.407871    1312 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\offline-docker-20210310201637-6496\apiserver.crt.fb01c024 ...
	I0310 20:25:01.407871    1312 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\offline-docker-20210310201637-6496\apiserver.crt.fb01c024: {Name:mk982a30db78e9d3bbcc61d42033a967c401e66b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:25:01.421603    1312 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\offline-docker-20210310201637-6496\apiserver.key.fb01c024 ...
	I0310 20:25:01.421603    1312 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\offline-docker-20210310201637-6496\apiserver.key.fb01c024: {Name:mk5db4b42d0168df69ea0ba8dc71b464ea1e1c05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:25:01.441854    1312 certs.go:290] copying C:\Users\jenkins\.minikube\profiles\offline-docker-20210310201637-6496\apiserver.crt.fb01c024 -> C:\Users\jenkins\.minikube\profiles\offline-docker-20210310201637-6496\apiserver.crt
	I0310 20:25:01.445818    1312 certs.go:294] copying C:\Users\jenkins\.minikube\profiles\offline-docker-20210310201637-6496\apiserver.key.fb01c024 -> C:\Users\jenkins\.minikube\profiles\offline-docker-20210310201637-6496\apiserver.key
	I0310 20:25:01.449858    1312 certs.go:279] generating aggregator signed cert: C:\Users\jenkins\.minikube\profiles\offline-docker-20210310201637-6496\proxy-client.key
	I0310 20:25:01.449858    1312 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\offline-docker-20210310201637-6496\proxy-client.crt with IP's: []
	I0310 20:25:02.111558    1312 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\offline-docker-20210310201637-6496\proxy-client.crt ...
	I0310 20:25:02.111558    1312 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\offline-docker-20210310201637-6496\proxy-client.crt: {Name:mkf911ebba9c12e0ed781e6c2a553e3985822930 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:25:02.128651    1312 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\offline-docker-20210310201637-6496\proxy-client.key ...
	I0310 20:25:02.128651    1312 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\offline-docker-20210310201637-6496\proxy-client.key: {Name:mke4ce8947dd5da51ce097c0631f88d2cf0a87b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
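	minikube generates the apiserver and proxy-client key pairs in Go (`crypto.go` above), with the apiserver cert carrying SANs for the node IP, the service VIP, and loopback. As an illustrative analogue only (not how minikube itself does it), the same SAN set from the log can be produced with `openssl` (assumes OpenSSL 1.1.1+ for `-addext`):

```shell
DIR=$(mktemp -d)
# Self-signed cert with the same SAN IP list the log shows for apiserver.crt:
# node IP 172.17.0.4, service VIP 10.96.0.1, loopback, and 10.0.0.1.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$DIR/apiserver.key" -out "$DIR/apiserver.crt" \
  -subj "/CN=minikube" \
  -addext "subjectAltName=IP:172.17.0.4,IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1" \
  2>/dev/null
# Read the SANs back out of the generated certificate.
openssl x509 -noout -ext subjectAltName -in "$DIR/apiserver.crt"
```

	The `10.96.0.1` entry matters because in-cluster clients reach the apiserver through the first address of the service CIDR (`10.96.0.0/12` in the kubeadm config above).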
	I0310 20:25:02.152583    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992.pem (1338 bytes)
	W0310 20:25:02.152583    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.152583    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140.pem (1338 bytes)
	W0310 20:25:02.152583    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.152583    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156.pem (1338 bytes)
	W0310 20:25:02.153583    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.153583    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056.pem (1338 bytes)
	W0310 20:25:02.153583    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.153583    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476.pem (1338 bytes)
	W0310 20:25:02.153583    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.153583    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728.pem (1338 bytes)
	W0310 20:25:02.154581    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.154581    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984.pem (1338 bytes)
	W0310 20:25:02.154581    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.154581    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232.pem (1338 bytes)
	W0310 20:25:02.154581    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.154581    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512.pem (1338 bytes)
	W0310 20:25:02.155583    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.155583    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056.pem (1338 bytes)
	W0310 20:25:02.155583    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.155583    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516.pem (1338 bytes)
	W0310 20:25:02.155583    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.156617    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352.pem (1338 bytes)
	W0310 20:25:02.156617    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.156617    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920.pem (1338 bytes)
	W0310 20:25:02.156617    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.156617    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052.pem (1338 bytes)
	W0310 20:25:02.156617    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.157576    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452.pem (1338 bytes)
	W0310 20:25:02.157576    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.157576    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588.pem (1338 bytes)
	W0310 20:25:02.157576    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.157576    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944.pem (1338 bytes)
	W0310 20:25:02.157576    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.157576    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040.pem (1338 bytes)
	W0310 20:25:02.158588    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.158588    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172.pem (1338 bytes)
	W0310 20:25:02.158588    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.158588    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372.pem (1338 bytes)
	W0310 20:25:02.158588    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.158588    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396.pem (1338 bytes)
	W0310 20:25:02.159636    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.159636    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700.pem (1338 bytes)
	W0310 20:25:02.159636    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.159636    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736.pem (1338 bytes)
	W0310 20:25:02.159636    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.159636    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368.pem (1338 bytes)
	W0310 20:25:02.160608    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.160608    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492.pem (1338 bytes)
	W0310 20:25:02.160608    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.160608    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496.pem (1338 bytes)
	W0310 20:25:02.160608    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.160608    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552.pem (1338 bytes)
	W0310 20:25:02.161648    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.161891    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692.pem (1338 bytes)
	W0310 20:25:02.162245    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.162454    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856.pem (1338 bytes)
	W0310 20:25:02.162454    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.162454    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024.pem (1338 bytes)
	W0310 20:25:02.162833    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.163052    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160.pem (1338 bytes)
	W0310 20:25:02.163693    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.163924    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432.pem (1338 bytes)
	W0310 20:25:02.164133    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.164298    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440.pem (1338 bytes)
	W0310 20:25:02.164474    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.164474    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452.pem (1338 bytes)
	W0310 20:25:02.164888    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.165138    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800.pem (1338 bytes)
	W0310 20:25:02.165607    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.166033    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464.pem (1338 bytes)
	W0310 20:25:02.166438    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.166438    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748.pem (1338 bytes)
	W0310 20:25:02.167088    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.167295    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088.pem (1338 bytes)
	W0310 20:25:02.167695    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.167901    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520.pem (1338 bytes)
	W0310 20:25:02.168199    1312 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.168394    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca-key.pem (1679 bytes)
	I0310 20:25:02.168600    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca.pem (1078 bytes)
	I0310 20:25:02.168824    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\cert.pem (1123 bytes)
	I0310 20:25:02.168824    1312 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\key.pem (1679 bytes)
	I0310 20:25:02.181925    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\offline-docker-20210310201637-6496\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0310 20:25:02.450649    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\offline-docker-20210310201637-6496\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0310 20:25:02.878616    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\offline-docker-20210310201637-6496\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0310 20:25:03.145530    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\offline-docker-20210310201637-6496\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0310 20:25:03.339916    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0310 20:25:03.648947    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0310 20:25:03.924702    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0310 20:25:04.179278    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0310 20:25:04.526432    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5736.pem --> /usr/share/ca-certificates/5736.pem (1338 bytes)
	I0310 20:25:04.751038    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9520.pem --> /usr/share/ca-certificates/9520.pem (1338 bytes)
	I0310 20:25:05.004880    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\12056.pem --> /usr/share/ca-certificates/12056.pem (1338 bytes)
	I0310 20:25:05.272241    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7440.pem --> /usr/share/ca-certificates/7440.pem (1338 bytes)
	I0310 20:25:05.559175    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5700.pem --> /usr/share/ca-certificates/5700.pem (1338 bytes)
	I0310 20:25:05.960207    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1728.pem --> /usr/share/ca-certificates/1728.pem (1338 bytes)
	I0310 20:25:06.250272    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6368.pem --> /usr/share/ca-certificates/6368.pem (1338 bytes)
	I0310 20:25:06.418919    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5172.pem --> /usr/share/ca-certificates/5172.pem (1338 bytes)
	I0310 20:25:06.640779    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0310 20:25:06.861199    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1476.pem --> /usr/share/ca-certificates/1476.pem (1338 bytes)
	I0310 20:25:07.085875    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\232.pem --> /usr/share/ca-certificates/232.pem (1338 bytes)
	I0310 20:25:07.315195    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3056.pem --> /usr/share/ca-certificates/3056.pem (1338 bytes)
	I0310 20:25:07.684183    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7432.pem --> /usr/share/ca-certificates/7432.pem (1338 bytes)
	I0310 20:25:08.098006    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4944.pem --> /usr/share/ca-certificates/4944.pem (1338 bytes)
	I0310 20:25:08.390868    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6856.pem --> /usr/share/ca-certificates/6856.pem (1338 bytes)
	I0310 20:25:08.614566    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1140.pem --> /usr/share/ca-certificates/1140.pem (1338 bytes)
	I0310 20:25:08.810884    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	I0310 20:25:09.120288    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8748.pem --> /usr/share/ca-certificates/8748.pem (1338 bytes)
	I0310 20:25:09.305069    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9088.pem --> /usr/share/ca-certificates/9088.pem (1338 bytes)
	I0310 20:25:09.592011    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\352.pem --> /usr/share/ca-certificates/352.pem (1338 bytes)
	I0310 20:25:09.829385    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4052.pem --> /usr/share/ca-certificates/4052.pem (1338 bytes)
	I0310 20:25:10.009399    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4588.pem --> /usr/share/ca-certificates/4588.pem (1338 bytes)
	I0310 20:25:10.203097    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5040.pem --> /usr/share/ca-certificates/5040.pem (1338 bytes)
	I0310 20:25:10.519410    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\800.pem --> /usr/share/ca-certificates/800.pem (1338 bytes)
	I0310 20:25:10.821527    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7024.pem --> /usr/share/ca-certificates/7024.pem (1338 bytes)
	I0310 20:25:11.084319    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5396.pem --> /usr/share/ca-certificates/5396.pem (1338 bytes)
	I0310 20:25:11.449102    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3516.pem --> /usr/share/ca-certificates/3516.pem (1338 bytes)
	I0310 20:25:11.719971    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8464.pem --> /usr/share/ca-certificates/8464.pem (1338 bytes)
	I0310 20:25:11.920936    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5372.pem --> /usr/share/ca-certificates/5372.pem (1338 bytes)
	I0310 20:25:12.178463    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6492.pem --> /usr/share/ca-certificates/6492.pem (1338 bytes)
	I0310 20:25:12.400896    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1984.pem --> /usr/share/ca-certificates/1984.pem (1338 bytes)
	I0310 20:25:12.629080    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\10992.pem --> /usr/share/ca-certificates/10992.pem (1338 bytes)
	I0310 20:25:12.990437    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6692.pem --> /usr/share/ca-certificates/6692.pem (1338 bytes)
	I0310 20:25:13.196042    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1156.pem --> /usr/share/ca-certificates/1156.pem (1338 bytes)
	I0310 20:25:13.534889    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7160.pem --> /usr/share/ca-certificates/7160.pem (1338 bytes)
	I0310 20:25:13.848627    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\2512.pem --> /usr/share/ca-certificates/2512.pem (1338 bytes)
	I0310 20:25:14.105207    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3920.pem --> /usr/share/ca-certificates/3920.pem (1338 bytes)
	I0310 20:25:14.384312    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6496.pem --> /usr/share/ca-certificates/6496.pem (1338 bytes)
	I0310 20:25:14.690437    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4452.pem --> /usr/share/ca-certificates/4452.pem (1338 bytes)
	I0310 20:25:14.970828    1312 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6552.pem --> /usr/share/ca-certificates/6552.pem (1338 bytes)
	I0310 20:25:15.121972    1312 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0310 20:25:15.353083    1312 ssh_runner.go:149] Run: openssl version
	I0310 20:25:15.394881    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8748.pem && ln -fs /usr/share/ca-certificates/8748.pem /etc/ssl/certs/8748.pem"
	I0310 20:25:15.714098    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8748.pem
	I0310 20:25:15.761681    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 19:09 /usr/share/ca-certificates/8748.pem
	I0310 20:25:15.783665    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8748.pem
	I0310 20:25:15.875409    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8748.pem /etc/ssl/certs/51391683.0"
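	The `openssl x509 -hash` plus symlink sequence above (repeated once per `.pem` below) is how OpenSSL's CA lookup directory works: tools resolve trust anchors in `/etc/ssl/certs` via `<subject-hash>.0` links rather than file names. A self-contained sketch of one iteration in a temp directory, using a throwaway certificate in place of the real `8748.pem` (assumes `openssl` is on PATH):

```shell
DIR=$(mktemp -d)
# Throwaway cert standing in for one of the /usr/share/ca-certificates/*.pem files.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$DIR/ca.key" -out "$DIR/8748.pem" -subj "/CN=demo" 2>/dev/null
# Compute the subject hash and create the <hash>.0 link OpenSSL expects.
HASH=$(openssl x509 -hash -noout -in "$DIR/8748.pem")
ln -fs "$DIR/8748.pem" "$DIR/$HASH.0"
ls -l "$DIR/$HASH.0"
```

	The `test -L ... || ln -fs ...` guard in the log serves the same purpose as re-running `c_rehash`: it only creates the hash link when one is not already present.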
	I0310 20:25:15.964505    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9088.pem && ln -fs /usr/share/ca-certificates/9088.pem /etc/ssl/certs/9088.pem"
	I0310 20:25:16.139662    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9088.pem
	I0310 20:25:16.192614    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 00:22 /usr/share/ca-certificates/9088.pem
	I0310 20:25:16.203301    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9088.pem
	I0310 20:25:16.301937    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9088.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:16.531412    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6856.pem && ln -fs /usr/share/ca-certificates/6856.pem /etc/ssl/certs/6856.pem"
	I0310 20:25:16.788814    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6856.pem
	I0310 20:25:16.827098    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 00:21 /usr/share/ca-certificates/6856.pem
	I0310 20:25:16.850870    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6856.pem
	I0310 20:25:16.974505    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6856.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:17.088770    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1140.pem && ln -fs /usr/share/ca-certificates/1140.pem /etc/ssl/certs/1140.pem"
	I0310 20:25:17.220903    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1140.pem
	I0310 20:25:17.352737    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 02:25 /usr/share/ca-certificates/1140.pem
	I0310 20:25:17.378792    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1140.pem
	I0310 20:25:17.473306    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1140.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:17.600092    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7452.pem && ln -fs /usr/share/ca-certificates/7452.pem /etc/ssl/certs/7452.pem"
	I0310 20:25:17.794556    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7452.pem
	I0310 20:25:17.830826    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 20 00:41 /usr/share/ca-certificates/7452.pem
	I0310 20:25:17.841711    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7452.pem
	I0310 20:25:17.924303    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7452.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:18.052724    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5396.pem && ln -fs /usr/share/ca-certificates/5396.pem /etc/ssl/certs/5396.pem"
	I0310 20:25:18.203346    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5396.pem
	I0310 20:25:18.247558    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  8 23:38 /usr/share/ca-certificates/5396.pem
	I0310 20:25:18.256549    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5396.pem
	I0310 20:25:18.340077    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5396.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:18.512269    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3516.pem && ln -fs /usr/share/ca-certificates/3516.pem /etc/ssl/certs/3516.pem"
	I0310 20:25:18.630008    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3516.pem
	I0310 20:25:18.695071    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 19:10 /usr/share/ca-certificates/3516.pem
	I0310 20:25:18.716988    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3516.pem
	I0310 20:25:18.900854    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3516.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:18.991518    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/352.pem && ln -fs /usr/share/ca-certificates/352.pem /etc/ssl/certs/352.pem"
	I0310 20:25:19.190164    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/352.pem
	I0310 20:25:19.271526    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 12 14:51 /usr/share/ca-certificates/352.pem
	I0310 20:25:19.293546    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/352.pem
	I0310 20:25:19.377384    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/352.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:19.452207    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4052.pem && ln -fs /usr/share/ca-certificates/4052.pem /etc/ssl/certs/4052.pem"
	I0310 20:25:19.604970    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4052.pem
	I0310 20:25:19.634576    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 18:40 /usr/share/ca-certificates/4052.pem
	I0310 20:25:19.649030    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4052.pem
	I0310 20:25:19.763413    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4052.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:19.889684    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4588.pem && ln -fs /usr/share/ca-certificates/4588.pem /etc/ssl/certs/4588.pem"
	I0310 20:25:20.057189    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4588.pem
	I0310 20:25:20.078574    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  3 21:41 /usr/share/ca-certificates/4588.pem
	I0310 20:25:20.091671    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4588.pem
	I0310 20:25:20.163743    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4588.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:20.304647    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5040.pem && ln -fs /usr/share/ca-certificates/5040.pem /etc/ssl/certs/5040.pem"
	I0310 20:25:20.466994    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5040.pem
	I0310 20:25:20.504821    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 08:36 /usr/share/ca-certificates/5040.pem
	I0310 20:25:20.517860    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5040.pem
	I0310 20:25:20.607473    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5040.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:20.699751    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/800.pem && ln -fs /usr/share/ca-certificates/800.pem /etc/ssl/certs/800.pem"
	I0310 20:25:20.844817    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/800.pem
	I0310 20:25:20.892877    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 24 01:48 /usr/share/ca-certificates/800.pem
	I0310 20:25:20.899427    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/800.pem
	I0310 20:25:21.020195    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/800.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:21.127384    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7024.pem && ln -fs /usr/share/ca-certificates/7024.pem /etc/ssl/certs/7024.pem"
	I0310 20:25:21.322856    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7024.pem
	I0310 20:25:21.355760    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 23:11 /usr/share/ca-certificates/7024.pem
	I0310 20:25:21.439876    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7024.pem
	I0310 20:25:21.494057    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7024.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:21.589199    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1984.pem && ln -fs /usr/share/ca-certificates/1984.pem /etc/ssl/certs/1984.pem"
	I0310 20:25:21.718540    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1984.pem
	I0310 20:25:21.751610    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 21:55 /usr/share/ca-certificates/1984.pem
	I0310 20:25:21.764957    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1984.pem
	I0310 20:25:21.996689    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1984.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:22.171411    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10992.pem && ln -fs /usr/share/ca-certificates/10992.pem /etc/ssl/certs/10992.pem"
	I0310 20:25:22.320948    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/10992.pem
	I0310 20:25:22.363115    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 21:44 /usr/share/ca-certificates/10992.pem
	I0310 20:25:22.374124    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10992.pem
	I0310 20:25:22.472609    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10992.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:22.659515    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8464.pem && ln -fs /usr/share/ca-certificates/8464.pem /etc/ssl/certs/8464.pem"
	I0310 20:25:22.779855    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8464.pem
	I0310 20:25:22.837867    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:32 /usr/share/ca-certificates/8464.pem
	I0310 20:25:22.861948    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8464.pem
	I0310 20:25:22.983922    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8464.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:23.092689    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5372.pem && ln -fs /usr/share/ca-certificates/5372.pem /etc/ssl/certs/5372.pem"
	I0310 20:25:23.248042    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5372.pem
	I0310 20:25:23.321831    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 23 00:40 /usr/share/ca-certificates/5372.pem
	I0310 20:25:23.354241    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5372.pem
	I0310 20:25:23.450567    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5372.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:23.576611    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6492.pem && ln -fs /usr/share/ca-certificates/6492.pem /etc/ssl/certs/6492.pem"
	I0310 20:25:23.683559    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6492.pem
	I0310 20:25:23.729238    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 01:11 /usr/share/ca-certificates/6492.pem
	I0310 20:25:23.735247    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6492.pem
	I0310 20:25:23.859271    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6492.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:23.940776    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6692.pem && ln -fs /usr/share/ca-certificates/6692.pem /etc/ssl/certs/6692.pem"
	I0310 20:25:24.083154    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6692.pem
	I0310 20:25:24.123285    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 20:42 /usr/share/ca-certificates/6692.pem
	I0310 20:25:24.134253    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6692.pem
	I0310 20:25:24.215001    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6692.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:24.312933    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1156.pem && ln -fs /usr/share/ca-certificates/1156.pem /etc/ssl/certs/1156.pem"
	I0310 20:25:24.464644    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1156.pem
	I0310 20:25:24.505574    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 00:26 /usr/share/ca-certificates/1156.pem
	I0310 20:25:24.519294    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1156.pem
	I0310 20:25:24.609673    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1156.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:24.766686    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7160.pem && ln -fs /usr/share/ca-certificates/7160.pem /etc/ssl/certs/7160.pem"
	I0310 20:25:24.902777    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7160.pem
	I0310 20:25:24.971148    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 12 04:51 /usr/share/ca-certificates/7160.pem
	I0310 20:25:24.984853    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7160.pem
	I0310 20:25:25.077445    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7160.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:25.205163    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4452.pem && ln -fs /usr/share/ca-certificates/4452.pem /etc/ssl/certs/4452.pem"
	I0310 20:25:25.331886    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4452.pem
	I0310 20:25:25.380839    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 22 23:48 /usr/share/ca-certificates/4452.pem
	I0310 20:25:25.393829    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4452.pem
	I0310 20:25:25.461597    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4452.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:25.579129    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6552.pem && ln -fs /usr/share/ca-certificates/6552.pem /etc/ssl/certs/6552.pem"
	I0310 20:25:25.705016    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6552.pem
	I0310 20:25:25.770982    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 19 22:08 /usr/share/ca-certificates/6552.pem
	I0310 20:25:25.785864    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6552.pem
	I0310 20:25:25.920520    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6552.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:26.126735    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2512.pem && ln -fs /usr/share/ca-certificates/2512.pem /etc/ssl/certs/2512.pem"
	I0310 20:25:26.241612    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2512.pem
	I0310 20:25:26.291009    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:32 /usr/share/ca-certificates/2512.pem
	I0310 20:25:26.303204    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2512.pem
	I0310 20:25:26.417917    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2512.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:26.521265    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3920.pem && ln -fs /usr/share/ca-certificates/3920.pem /etc/ssl/certs/3920.pem"
	I0310 20:25:26.694850    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3920.pem
	I0310 20:25:26.751716    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 22:06 /usr/share/ca-certificates/3920.pem
	I0310 20:25:26.771673    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3920.pem
	I0310 20:25:26.855507    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3920.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:27.011646    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6496.pem && ln -fs /usr/share/ca-certificates/6496.pem /etc/ssl/certs/6496.pem"
	I0310 20:25:27.171238    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6496.pem
	I0310 20:25:27.221126    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 19:16 /usr/share/ca-certificates/6496.pem
	I0310 20:25:27.243201    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6496.pem
	I0310 20:25:27.334742    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6496.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:27.447902    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5700.pem && ln -fs /usr/share/ca-certificates/5700.pem /etc/ssl/certs/5700.pem"
	I0310 20:25:27.513023    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5700.pem
	I0310 20:25:27.557961    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  1 19:58 /usr/share/ca-certificates/5700.pem
	I0310 20:25:27.568596    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5700.pem
	I0310 20:25:27.781576    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5700.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:27.896821    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1728.pem && ln -fs /usr/share/ca-certificates/1728.pem /etc/ssl/certs/1728.pem"
	I0310 20:25:27.992317    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1728.pem
	I0310 20:25:28.034053    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 20:57 /usr/share/ca-certificates/1728.pem
	I0310 20:25:28.046122    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1728.pem
	I0310 20:25:28.137415    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:28.308820    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5736.pem && ln -fs /usr/share/ca-certificates/5736.pem /etc/ssl/certs/5736.pem"
	I0310 20:25:28.403736    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5736.pem
	I0310 20:25:28.521104    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 25 23:18 /usr/share/ca-certificates/5736.pem
	I0310 20:25:28.537906    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5736.pem
	I0310 20:25:28.720494    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5736.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:28.843624    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9520.pem && ln -fs /usr/share/ca-certificates/9520.pem /etc/ssl/certs/9520.pem"
	I0310 20:25:28.946597    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9520.pem
	I0310 20:25:28.988389    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 14:54 /usr/share/ca-certificates/9520.pem
	I0310 20:25:29.002320    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9520.pem
	I0310 20:25:29.088857    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9520.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:29.242337    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12056.pem && ln -fs /usr/share/ca-certificates/12056.pem /etc/ssl/certs/12056.pem"
	I0310 20:25:29.521648    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/12056.pem
	I0310 20:25:29.556602    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  6 07:21 /usr/share/ca-certificates/12056.pem
	I0310 20:25:29.568684    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12056.pem
	I0310 20:25:29.737438    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12056.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:29.956977    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7440.pem && ln -fs /usr/share/ca-certificates/7440.pem /etc/ssl/certs/7440.pem"
	I0310 20:25:30.144580    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7440.pem
	I0310 20:25:30.196963    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 13 14:39 /usr/share/ca-certificates/7440.pem
	I0310 20:25:30.215123    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7440.pem
	I0310 20:25:30.310992    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7440.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:30.491676    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7432.pem && ln -fs /usr/share/ca-certificates/7432.pem /etc/ssl/certs/7432.pem"
	I0310 20:25:30.743444    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7432.pem
	I0310 20:25:30.832428    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 17:58 /usr/share/ca-certificates/7432.pem
	I0310 20:25:30.843809    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7432.pem
	I0310 20:25:30.921973    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7432.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:31.049563    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4944.pem && ln -fs /usr/share/ca-certificates/4944.pem /etc/ssl/certs/4944.pem"
	I0310 20:25:31.170961    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4944.pem
	I0310 20:25:31.236968    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  9 23:40 /usr/share/ca-certificates/4944.pem
	I0310 20:25:31.253772    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4944.pem
	I0310 20:25:31.313164    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4944.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:31.466569    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6368.pem && ln -fs /usr/share/ca-certificates/6368.pem /etc/ssl/certs/6368.pem"
	I0310 20:25:31.639215    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6368.pem
	I0310 20:25:31.689698    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 19:11 /usr/share/ca-certificates/6368.pem
	I0310 20:25:31.711886    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6368.pem
	I0310 20:25:31.776014    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6368.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:31.907471    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5172.pem && ln -fs /usr/share/ca-certificates/5172.pem /etc/ssl/certs/5172.pem"
	I0310 20:25:32.041801    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5172.pem
	I0310 20:25:32.073147    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 26 21:25 /usr/share/ca-certificates/5172.pem
	I0310 20:25:32.093492    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5172.pem
	I0310 20:25:32.160255    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5172.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:32.245695    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0310 20:25:32.394898    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0310 20:25:32.502840    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Jan  5 23:33 /usr/share/ca-certificates/minikubeCA.pem
	I0310 20:25:32.512756    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0310 20:25:32.610171    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0310 20:25:32.714969    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1476.pem && ln -fs /usr/share/ca-certificates/1476.pem /etc/ssl/certs/1476.pem"
	I0310 20:25:32.908426    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1476.pem
	I0310 20:25:33.008792    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:50 /usr/share/ca-certificates/1476.pem
	I0310 20:25:33.018761    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1476.pem
	I0310 20:25:33.217367    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1476.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:33.289600    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/232.pem && ln -fs /usr/share/ca-certificates/232.pem /etc/ssl/certs/232.pem"
	I0310 20:25:33.391035    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/232.pem
	I0310 20:25:33.423599    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 28 02:13 /usr/share/ca-certificates/232.pem
	I0310 20:25:33.440196    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/232.pem
	I0310 20:25:33.511700    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/232.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:33.639626    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3056.pem && ln -fs /usr/share/ca-certificates/3056.pem /etc/ssl/certs/3056.pem"
	I0310 20:25:33.713824    1312 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3056.pem
	I0310 20:25:33.798467    1312 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:11 /usr/share/ca-certificates/3056.pem
	I0310 20:25:33.814514    1312 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3056.pem
	I0310 20:25:33.862418    1312 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3056.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:33.954347    1312 kubeadm.go:385] StartCluster: {Name:offline-docker-20210310201637-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:offline-docker-20210310201637-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.4 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 20:25:33.967789    1312 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0310 20:25:34.771672    1312 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0310 20:25:34.878930    1312 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0310 20:25:34.931947    1312 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	I0310 20:25:34.954243    1312 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0310 20:25:35.015480    1312 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0310 20:25:35.015480    1312 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0310 20:25:43.477645    1312 out.go:150]   - Generating certificates and keys ...
	I0310 20:26:19.697555    1312 out.go:150]   - Booting up control plane ...
	I0310 20:29:18.215171    1312 out.go:150]   - Configuring RBAC rules ...
	I0310 20:29:41.277957    1312 cni.go:74] Creating CNI manager for ""
	I0310 20:29:41.277957    1312 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	I0310 20:29:41.278380    1312 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0310 20:29:41.288982    1312 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0310 20:29:41.329743    1312 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.18.1 minikube.k8s.io/commit=4d52d607da107e3e4541e40b81376520ee87d4c2 minikube.k8s.io/name=offline-docker-20210310201637-6496 minikube.k8s.io/updated_at=2021_03_10T20_29_41_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0310 20:29:42.516797    1312 ssh_runner.go:189] Completed: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": (1.238421s)
	I0310 20:29:42.518054    1312 ops.go:34] apiserver oom_adj: -16
	I0310 20:29:50.669614    1312 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (9.3806589s)
	I0310 20:29:50.691528    1312 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0310 20:29:59.583955    1312 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.18.1 minikube.k8s.io/commit=4d52d607da107e3e4541e40b81376520ee87d4c2 minikube.k8s.io/name=offline-docker-20210310201637-6496 minikube.k8s.io/updated_at=2021_03_10T20_29_41_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (18.2542644s)
	I0310 20:30:05.267207    1312 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (14.5757188s)
	I0310 20:30:05.785067    1312 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0310 20:30:13.993475    1312 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (8.2084305s)
	I0310 20:30:14.289082    1312 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0310 20:30:24.857948    1312 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (10.5687422s)
	I0310 20:30:25.295460    1312 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0310 20:30:37.845264    1312 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (12.5498363s)
	I0310 20:30:38.287773    1312 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0310 20:31:01.773684    1312 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (23.4859697s)
	I0310 20:31:02.287974    1312 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0310 20:31:22.234336    1312 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (19.9453923s)
	I0310 20:31:22.234336    1312 kubeadm.go:995] duration metric: took 1m40.9562188s to wait for elevateKubeSystemPrivileges.
	I0310 20:31:22.234587    1312 kubeadm.go:387] StartCluster complete in 5m48.2814955s
	I0310 20:31:22.234587    1312 settings.go:142] acquiring lock: {Name:mk153ab5d002fd4991700e22f3eda9a43ee295f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:31:22.235272    1312 settings.go:150] Updating kubeconfig:  C:\Users\jenkins/.kube/config
	I0310 20:31:22.237124    1312 lock.go:36] WriteFile acquiring C:\Users\jenkins/.kube/config: {Name:mk815881434fcb75cb1a8bc7f2c24cf067660bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:31:22.263616    1312 kapi.go:59] client config for offline-docker-20210310201637-6496: &rest.Config{Host:"https://127.0.0.1:55089", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins\\.minikube\\profiles\\offline-docker-20210310201637-6496\\client.crt", KeyFile:"C:\\Users\\jenkins\\.minikube\\profiles\\offline-docker-20210310201637-6496\\client.key", CAFile:"C:\\Users\\jenkins\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2611020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)}
	I0310 20:31:22.761563    1312 kapi.go:233] deployment "coredns" in namespace "kube-system" and context "offline-docker-20210310201637-6496" rescaled to 1
	I0310 20:31:22.761686    1312 start.go:203] Will wait 6m0s for node up to 
	I0310 20:31:22.762354    1312 addons.go:381] enableAddons start: toEnable=map[], additional=[]
	I0310 20:31:22.763269    1312 addons.go:58] Setting storage-provisioner=true in profile "offline-docker-20210310201637-6496"
	I0310 20:31:22.763545    1312 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210224014800-800 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800
	I0310 20:31:22.763545    1312 addons.go:134] Setting addon storage-provisioner=true in "offline-docker-20210310201637-6496"
	W0310 20:31:22.763900    1312 addons.go:143] addon storage-provisioner should already be in state true
	I0310 20:31:22.764456    1312 addons.go:58] Setting default-storageclass=true in profile "offline-docker-20210310201637-6496"
	I0310 20:31:22.764456    1312 addons.go:284] enableOrDisableStorageClasses default-storageclass=true on "offline-docker-20210310201637-6496"
	I0310 20:31:22.765268    1312 host.go:66] Checking if "offline-docker-20210310201637-6496" exists ...
	I0310 20:31:22.765562    1312 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210105233232-2512 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512
	I0310 20:31:22.765898    1312 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120231122-7024 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024
	I0310 20:31:22.765898    1312 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219220622-3920 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920
	I0310 20:31:22.765898    1312 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304002630-1156 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156
	I0310 20:31:22.766243    1312 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210303214129-4588 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588
	I0310 20:31:22.766834    1312 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106215525-1984 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984
	I0310 20:31:22.767076    1312 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107002220-9088 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088
	I0310 20:31:22.767229    1312 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106011107-6492 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492
	I0310 20:31:22.768117    1312 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210306072141-12056 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056
	I0310 20:31:22.788488    1312 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210114204234-6692 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692
	I0310 20:31:22.789599    1312 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210112045103-7160 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160
	I0310 20:31:22.822206    1312 out.go:129] * Verifying Kubernetes components...
	I0310 20:31:22.765562    1312 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210309234032-4944 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944
	I0310 20:31:22.765562    1312 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304184021-4052 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052
	I0310 20:31:22.791786    1312 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115023213-8464 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464
	I0310 20:31:22.791786    1312 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210128021318-232 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232
	I0310 20:31:22.791786    1312 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210213143925-7440 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440
	I0310 20:31:22.791786    1312 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210308233820-5396 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396
	I0310 20:31:22.791786    1312 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106002159-6856 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856
	I0310 20:31:22.791786    1312 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210123004019-5372 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372
	I0310 20:31:22.792348    1312 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210126212539-5172 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172
	I0310 20:31:22.792348    1312 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210220004129-7452 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452
	I0310 20:31:22.792580    1312 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107190945-8748 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748
	I0310 20:31:22.792580    1312 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115191024-3516 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516
	I0310 20:31:22.792580    1312 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120175851-7432 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432
	I0310 20:31:22.793695    1312 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210212145109-352 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352
	I0310 20:31:22.794623    1312 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310083645-5040 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040
	I0310 20:31:22.794623    1312 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310191609-6496 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496
	I0310 20:31:22.794623    1312 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210301195830-5700 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700
	I0310 20:31:22.795312    1312 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120214442-10992 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992
	I0310 20:31:22.795312    1312 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219145454-9520 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520
	I0310 20:31:22.810533    1312 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210119220838-6552 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552
	I0310 20:31:22.810533    1312 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120022529-1140 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140
	I0310 20:31:22.810533    1312 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210225231842-5736 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736
	I0310 20:31:23.005484    1312 cache.go:93] acquiring lock: {Name:mk5d79a216b121a22277fa476959e69d0268a006 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:23.005975    1312 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 exists
	I0310 20:31:23.006345    1312 cache.go:82] cache image "minikube-local-cache-test:functional-20210224014800-800" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210224014800-800" took 242.4451ms
	I0310 20:31:23.006647    1312 cache.go:66] save to tar file minikube-local-cache-test:functional-20210224014800-800 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 succeeded
	I0310 20:31:23.148595    1312 cache.go:93] acquiring lock: {Name:mk84b2a6095b735cf889c519b5874f080b2e195a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:23.149659    1312 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 exists
	I0310 20:31:23.149659    1312 cache.go:82] cache image "minikube-local-cache-test:functional-20210219220622-3920" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210219220622-3920" took 383.4175ms
	I0310 20:31:23.150324    1312 cache.go:66] save to tar file minikube-local-cache-test:functional-20210219220622-3920 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 succeeded
	I0310 20:31:23.329229    1312 cli_runner.go:115] Run: docker container inspect offline-docker-20210310201637-6496 --format={{.State.Status}}
	I0310 20:31:23.334305    1312 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0310 20:31:23.346614    1312 cli_runner.go:115] Run: docker container inspect offline-docker-20210310201637-6496 --format={{.State.Status}}
	I0310 20:31:23.566454    1312 cache.go:93] acquiring lock: {Name:mka2d29141752ca0c15ce625b99d3e259a454634 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:23.568165    1312 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 exists
	I0310 20:31:23.568165    1312 cache.go:82] cache image "minikube-local-cache-test:functional-20210105233232-2512" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210105233232-2512" took 802.2688ms
	I0310 20:31:23.568165    1312 cache.go:66] save to tar file minikube-local-cache-test:functional-20210105233232-2512 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 succeeded
	I0310 20:31:23.766513    1312 cache.go:93] acquiring lock: {Name:mkd8dd26dee4471c50a16459e3e56a843fbe7183 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:23.768499    1312 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 exists
	I0310 20:31:23.778869    1312 cache.go:82] cache image "minikube-local-cache-test:functional-20210120231122-7024" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120231122-7024" took 1.0129731s
	I0310 20:31:23.778869    1312 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120231122-7024 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 succeeded
	I0310 20:31:23.795248    1312 cache.go:93] acquiring lock: {Name:mk3b31b5d9c66e58bae5a84d594af5a71c06fef6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:23.795484    1312 cache.go:93] acquiring lock: {Name:mkb552f0ca2d9ea9965feba56885295e4020632a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:23.795994    1312 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 exists
	I0310 20:31:23.796975    1312 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 exists
	I0310 20:31:23.797496    1312 cache.go:82] cache image "minikube-local-cache-test:functional-20210114204234-6692" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210114204234-6692" took 1.0086267s
	I0310 20:31:23.797496    1312 cache.go:66] save to tar file minikube-local-cache-test:functional-20210114204234-6692 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 succeeded
	I0310 20:31:23.799753    1312 cache.go:82] cache image "minikube-local-cache-test:functional-20210106011107-6492" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106011107-6492" took 1.0322805s
	I0310 20:31:23.799753    1312 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106011107-6492 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 succeeded
	I0310 20:31:23.917789    1312 cache.go:93] acquiring lock: {Name:mkb0cb73f942a657cd3f168830d30cb3598567a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:23.921482    1312 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 exists
	I0310 20:31:23.926952    1312 cache.go:82] cache image "minikube-local-cache-test:functional-20210306072141-12056" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210306072141-12056" took 1.1586013s
	I0310 20:31:23.926952    1312 cache.go:66] save to tar file minikube-local-cache-test:functional-20210306072141-12056 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 succeeded
	I0310 20:31:23.969173    1312 cache.go:93] acquiring lock: {Name:mkf74fc1bdd437dc31195924ffc024252ed6282c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:23.969500    1312 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 exists
	I0310 20:31:23.969500    1312 cache.go:82] cache image "minikube-local-cache-test:functional-20210304002630-1156" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210304002630-1156" took 1.2032602s
	I0310 20:31:23.969500    1312 cache.go:66] save to tar file minikube-local-cache-test:functional-20210304002630-1156 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 succeeded
	I0310 20:31:23.988529    1312 cache.go:93] acquiring lock: {Name:mk6e311fb193a5d30b249afa7255673dd7fc56b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:23.988529    1312 cache.go:93] acquiring lock: {Name:mk6cdb668632330066d74bea74662e26e6c7633f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:23.990496    1312 cache.go:93] acquiring lock: {Name:mk6a939d4adc5b1a82c643cd3a34748a52c3e47d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:23.990496    1312 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 exists
	I0310 20:31:23.990496    1312 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 exists
	I0310 20:31:23.990496    1312 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 exists
	I0310 20:31:23.990496    1312 cache.go:82] cache image "minikube-local-cache-test:functional-20210107002220-9088" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210107002220-9088" took 1.2232693s
	I0310 20:31:23.990496    1312 cache.go:66] save to tar file minikube-local-cache-test:functional-20210107002220-9088 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 succeeded
	I0310 20:31:23.990496    1312 cache.go:82] cache image "minikube-local-cache-test:functional-20210106215525-1984" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106215525-1984" took 1.2236645s
	I0310 20:31:23.990496    1312 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106215525-1984 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 succeeded
	I0310 20:31:23.995377    1312 cache.go:82] cache image "minikube-local-cache-test:functional-20210112045103-7160" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210112045103-7160" took 1.2055544s
	I0310 20:31:23.995513    1312 cache.go:93] acquiring lock: {Name:mk5aaf725ee95074b60d5acdb56999da11d0d967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:23.995513    1312 cache.go:66] save to tar file minikube-local-cache-test:functional-20210112045103-7160 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 succeeded
	I0310 20:31:23.995513    1312 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 exists
	I0310 20:31:23.996681    1312 cache.go:82] cache image "minikube-local-cache-test:functional-20210213143925-7440" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210213143925-7440" took 1.1744777s
	I0310 20:31:23.996681    1312 cache.go:66] save to tar file minikube-local-cache-test:functional-20210213143925-7440 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 succeeded
	I0310 20:31:24.006866    1312 cache.go:93] acquiring lock: {Name:mk5de4935501776b790bd29801e913c817cce9cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:24.013502    1312 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 exists
	I0310 20:31:24.018494    1312 cache.go:82] cache image "minikube-local-cache-test:functional-20210123004019-5372" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210123004019-5372" took 1.1944631s
	I0310 20:31:24.018494    1312 cache.go:66] save to tar file minikube-local-cache-test:functional-20210123004019-5372 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 succeeded
	I0310 20:31:24.120744    1312 cache.go:93] acquiring lock: {Name:mk9829358ec5b615719a34ef2b4c8c5314131bbf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:24.120744    1312 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 exists
	I0310 20:31:24.121753    1312 cache.go:82] cache image "minikube-local-cache-test:functional-20210309234032-4944" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210309234032-4944" took 1.2983708s
	I0310 20:31:24.121753    1312 cache.go:66] save to tar file minikube-local-cache-test:functional-20210309234032-4944 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 succeeded
	I0310 20:31:24.142787    1312 cache.go:93] acquiring lock: {Name:mk74beba772a17b6c0792b37e1f3c84b8ae19a48 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:24.143094    1312 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 exists
	I0310 20:31:24.143782    1312 cache.go:82] cache image "minikube-local-cache-test:functional-20210119220838-6552" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210119220838-6552" took 1.3170658s
	I0310 20:31:24.144732    1312 cache.go:66] save to tar file minikube-local-cache-test:functional-20210119220838-6552 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 succeeded
	I0310 20:31:24.148965    1312 cache.go:93] acquiring lock: {Name:mkc9a1c11079e53fedb3439203deb8305be63b2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:24.149563    1312 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 exists
	I0310 20:31:24.150189    1312 cache.go:82] cache image "minikube-local-cache-test:functional-20210303214129-4588" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210303214129-4588" took 1.3839497s
	I0310 20:31:24.150325    1312 cache.go:66] save to tar file minikube-local-cache-test:functional-20210303214129-4588 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 succeeded
	I0310 20:31:24.153419    1312 cache.go:93] acquiring lock: {Name:mk5795abf13cc8b7192a417aee0e32dee2b0467c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:24.154160    1312 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 exists
	I0310 20:31:24.161146    1312 cache.go:82] cache image "minikube-local-cache-test:functional-20210126212539-5172" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210126212539-5172" took 1.3377642s
	I0310 20:31:24.161146    1312 cache.go:66] save to tar file minikube-local-cache-test:functional-20210126212539-5172 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 succeeded
	I0310 20:31:24.174153    1312 cache.go:93] acquiring lock: {Name:mk1b277a131d0149dc1f34c6a5df09591c284c3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:24.174746    1312 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 exists
	I0310 20:31:24.175344    1312 cache.go:82] cache image "minikube-local-cache-test:functional-20210128021318-232" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210128021318-232" took 1.3523001s
	I0310 20:31:24.175344    1312 cache.go:66] save to tar file minikube-local-cache-test:functional-20210128021318-232 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 succeeded
	I0310 20:31:24.217568    1312 cache.go:93] acquiring lock: {Name:mkfbc537176e4a7054a8ff78a35c4c45ad4889d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:24.217919    1312 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 exists
	I0310 20:31:24.218278    1312 cache.go:82] cache image "minikube-local-cache-test:functional-20210310191609-6496" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210310191609-6496" took 1.3842915s
	I0310 20:31:24.218474    1312 cache.go:66] save to tar file minikube-local-cache-test:functional-20210310191609-6496 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 succeeded
	I0310 20:31:24.224382    1312 cache.go:93] acquiring lock: {Name:mkd8c6f272dd5cb91af2d272705820baa75c5410 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:24.224613    1312 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 exists
	I0310 20:31:24.225330    1312 cache.go:82] cache image "minikube-local-cache-test:functional-20210120214442-10992" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120214442-10992" took 1.3965551s
	I0310 20:31:24.225330    1312 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120214442-10992 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 succeeded
	I0310 20:31:24.233373    1312 cache.go:93] acquiring lock: {Name:mkf96894dc732adcd1c856f98a56d65b2646f03e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:24.234001    1312 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 exists
	I0310 20:31:24.234575    1312 cache.go:82] cache image "minikube-local-cache-test:functional-20210115191024-3516" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210115191024-3516" took 1.4111928s
	I0310 20:31:24.234575    1312 cache.go:66] save to tar file minikube-local-cache-test:functional-20210115191024-3516 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 succeeded
	I0310 20:31:24.239738    1312 cache.go:93] acquiring lock: {Name:mk634154e9c95d6e5b156154f097cbabdedf9f3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:24.240336    1312 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 exists
	I0310 20:31:24.241477    1312 cache.go:82] cache image "minikube-local-cache-test:functional-20210301195830-5700" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210301195830-5700" took 1.4072883s
	I0310 20:31:24.241477    1312 cache.go:66] save to tar file minikube-local-cache-test:functional-20210301195830-5700 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 succeeded
	I0310 20:31:24.250252    1312 cache.go:93] acquiring lock: {Name:mk0c64ba734a0cdbeae55b08bb0b1b6723a680c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:24.250932    1312 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 exists
	I0310 20:31:24.251427    1312 cache.go:82] cache image "minikube-local-cache-test:functional-20210310083645-5040" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210310083645-5040" took 1.4222481s
	I0310 20:31:24.251589    1312 cache.go:66] save to tar file minikube-local-cache-test:functional-20210310083645-5040 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 succeeded
	I0310 20:31:24.256385    1312 cache.go:93] acquiring lock: {Name:mk3f9eb5a6922e3da2b5e642fe1460b5c7a33453 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:24.256385    1312 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 exists
	I0310 20:31:24.257561    1312 cache.go:82] cache image "minikube-local-cache-test:functional-20210107190945-8748" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210107190945-8748" took 1.4328343s
	I0310 20:31:24.257914    1312 cache.go:66] save to tar file minikube-local-cache-test:functional-20210107190945-8748 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 succeeded
	I0310 20:31:24.269295    1312 cache.go:93] acquiring lock: {Name:mk67b81c694fa10d152b7bddece57d430edf9ebf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:24.269543    1312 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 exists
	I0310 20:31:24.270100    1312 cache.go:82] cache image "minikube-local-cache-test:functional-20210308233820-5396" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210308233820-5396" took 1.4467183s
	I0310 20:31:24.270100    1312 cache.go:66] save to tar file minikube-local-cache-test:functional-20210308233820-5396 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 succeeded
	I0310 20:31:24.271807    1312 cache.go:93] acquiring lock: {Name:mkad0f7b57f74c6c730129cb06800211b2e1dbab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:24.276922    1312 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 exists
	I0310 20:31:24.277552    1312 cache.go:82] cache image "minikube-local-cache-test:functional-20210120022529-1140" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120022529-1140" took 1.4498607s
	I0310 20:31:24.277552    1312 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120022529-1140 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 succeeded
	I0310 20:31:24.283538    1312 cache.go:93] acquiring lock: {Name:mkfe8ccab311cf6d2666a7508a8e979857b9770b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:24.284163    1312 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 exists
	I0310 20:31:24.285009    1312 cache.go:82] cache image "minikube-local-cache-test:functional-20210219145454-9520" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210219145454-9520" took 1.458776s
	I0310 20:31:24.285623    1312 cache.go:66] save to tar file minikube-local-cache-test:functional-20210219145454-9520 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 succeeded
	I0310 20:31:24.312023    1312 cache.go:93] acquiring lock: {Name:mk30e0addf8d941e729fce2e9e6e58f4831fa9bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:24.312946    1312 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 exists
	I0310 20:31:24.313474    1312 cache.go:82] cache image "minikube-local-cache-test:functional-20210115023213-8464" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210115023213-8464" took 1.4904309s
	I0310 20:31:24.313474    1312 cache.go:66] save to tar file minikube-local-cache-test:functional-20210115023213-8464 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 succeeded
	I0310 20:31:24.318497    1312 cache.go:93] acquiring lock: {Name:mkab31196e3bf71b9c1e6a1e38e57ec6fb030bbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:24.319180    1312 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 exists
	I0310 20:31:24.319549    1312 cache.go:82] cache image "minikube-local-cache-test:functional-20210220004129-7452" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210220004129-7452" took 1.4958274s
	I0310 20:31:24.319549    1312 cache.go:66] save to tar file minikube-local-cache-test:functional-20210220004129-7452 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 succeeded
	I0310 20:31:24.321024    1312 cache.go:93] acquiring lock: {Name:mkbc5485bf0e792523a58cf470a7622695547966 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:24.321024    1312 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 exists
	I0310 20:31:24.321024    1312 cache.go:82] cache image "minikube-local-cache-test:functional-20210304184021-4052" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210304184021-4052" took 1.4982067s
	I0310 20:31:24.322134    1312 cache.go:66] save to tar file minikube-local-cache-test:functional-20210304184021-4052 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 succeeded
	I0310 20:31:24.323121    1312 cache.go:93] acquiring lock: {Name:mkcc9db267470950a8bd1fd66660e4d7ce7fb11a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:24.323377    1312 cache.go:93] acquiring lock: {Name:mk413751f23d1919a2f2162501025c6af3a2ad81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:24.323702    1312 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 exists
	I0310 20:31:24.324223    1312 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 exists
	I0310 20:31:24.324223    1312 cache.go:82] cache image "minikube-local-cache-test:functional-20210106002159-6856" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106002159-6856" took 1.5008409s
	I0310 20:31:24.324223    1312 cache.go:82] cache image "minikube-local-cache-test:functional-20210120175851-7432" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120175851-7432" took 1.499213s
	I0310 20:31:24.324547    1312 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120175851-7432 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 succeeded
	I0310 20:31:24.324547    1312 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106002159-6856 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 succeeded
	I0310 20:31:24.327732    1312 cache.go:93] acquiring lock: {Name:mkf6f90f079186654799fde8101b48612aa6f339 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:24.328015    1312 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 exists
	I0310 20:31:24.328504    1312 cache.go:93] acquiring lock: {Name:mk17b3617b8bc7c68f0fe3347037485ee44000e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:24.328956    1312 cache.go:82] cache image "minikube-local-cache-test:functional-20210212145109-352" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210212145109-352" took 1.5019284s
	I0310 20:31:24.328956    1312 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 exists
	I0310 20:31:24.328956    1312 cache.go:66] save to tar file minikube-local-cache-test:functional-20210212145109-352 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 succeeded
	I0310 20:31:24.329706    1312 cache.go:82] cache image "minikube-local-cache-test:functional-20210225231842-5736" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210225231842-5736" took 1.5046967s
	I0310 20:31:24.329926    1312 cache.go:66] save to tar file minikube-local-cache-test:functional-20210225231842-5736 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 succeeded
	I0310 20:31:24.329926    1312 cache.go:73] Successfully saved all images to host disk.
	I0310 20:31:24.362526    1312 cli_runner.go:115] Run: docker container inspect offline-docker-20210310201637-6496 --format={{.State.Status}}
	I0310 20:31:24.657320    1312 cli_runner.go:168] Completed: docker container inspect offline-docker-20210310201637-6496 --format={{.State.Status}}: (1.3277677s)
	I0310 20:31:24.665623    1312 out.go:129]   - Using image gcr.io/k8s-minikube/storage-provisioner:v4
	I0310 20:31:24.666045    1312 addons.go:253] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0310 20:31:24.666264    1312 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0310 20:31:24.677412    1312 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20210310201637-6496
	I0310 20:31:24.704074    1312 cli_runner.go:168] Completed: docker container inspect offline-docker-20210310201637-6496 --format={{.State.Status}}: (1.3567345s)
	I0310 20:31:24.716463    1312 kapi.go:59] client config for offline-docker-20210310201637-6496: &rest.Config{Host:"https://127.0.0.1:55089", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins\\.minikube\\profiles\\offline-docker-20210310201637-6496\\client.crt", KeyFile:"C:\\Users\\jenkins\\.minikube\\profiles\\offline-docker-20210310201637-6496\\client.key", CAFile:"C:\\Users\\jenkins\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2611020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)}
	I0310 20:31:25.024682    1312 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 20:31:25.032663    1312 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20210310201637-6496
	I0310 20:31:25.372207    1312 addons.go:134] Setting addon default-storageclass=true in "offline-docker-20210310201637-6496"
	W0310 20:31:25.372991    1312 addons.go:143] addon default-storageclass should already be in state true
	I0310 20:31:25.373367    1312 host.go:66] Checking if "offline-docker-20210310201637-6496" exists ...
	I0310 20:31:25.395605    1312 cli_runner.go:115] Run: docker container inspect offline-docker-20210310201637-6496 --format={{.State.Status}}
	I0310 20:31:25.406996    1312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55106 SSHKeyPath:C:\Users\jenkins\.minikube\machines\offline-docker-20210310201637-6496\id_rsa Username:docker}
	I0310 20:31:25.672744    1312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55106 SSHKeyPath:C:\Users\jenkins\.minikube\machines\offline-docker-20210310201637-6496\id_rsa Username:docker}
	I0310 20:31:25.811193    1312 ssh_runner.go:189] Completed: sudo systemctl is-active --quiet service kubelet: (2.4768932s)
	I0310 20:31:25.822869    1312 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" offline-docker-20210310201637-6496
	I0310 20:31:26.028555    1312 addons.go:253] installing /etc/kubernetes/addons/storageclass.yaml
	I0310 20:31:26.028555    1312 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0310 20:31:26.043380    1312 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20210310201637-6496
	I0310 20:31:26.518120    1312 kapi.go:59] client config for offline-docker-20210310201637-6496: &rest.Config{Host:"https://127.0.0.1:55089", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins\\.minikube\\profiles\\offline-docker-20210310201637-6496\\client.crt", KeyFile:"C:\\Users\\jenkins\\.minikube\\profiles\\offline-docker-20210310201637-6496\\client.key", CAFile:"C:\\Users\\jenkins\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2611020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)}
	I0310 20:31:26.542584    1312 pod_ready.go:36] extra waiting for kube-system core pods [kube-dns etcd kube-apiserver kube-controller-manager kube-proxy kube-scheduler] to be Ready ...
	I0310 20:31:26.543915    1312 pod_ready.go:59] waiting 6m0s for pod with "kube-dns" label in "kube-system" namespace to be Ready ...
	I0310 20:31:26.642825    1312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55106 SSHKeyPath:C:\Users\jenkins\.minikube\machines\offline-docker-20210310201637-6496\id_rsa Username:docker}
	I0310 20:31:27.918403    1312 pod_ready.go:102] pod "coredns-74ff55c5b-mx7ng" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 20:31:09 +0000 GMT Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 20:31:29.399057    1312 pod_ready.go:102] pod "coredns-74ff55c5b-mx7ng" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 20:31:09 +0000 GMT Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 20:31:30.515486    1312 pod_ready.go:102] pod "coredns-74ff55c5b-mx7ng" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 20:31:09 +0000 GMT Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 20:31:31.671923    1312 pod_ready.go:102] pod "coredns-74ff55c5b-mx7ng" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 20:31:09 +0000 GMT Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 20:31:34.014720    1312 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0310 20:31:36.756227    1312 pod_ready.go:102] pod "coredns-74ff55c5b-mx7ng" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 20:31:09 +0000 GMT Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}

** /stderr **

=== CONT  TestOffline
aab_offline_test.go:57: out/minikube-windows-amd64.exe start -p offline-docker-20210310201637-6496 --alsologtostderr -v=1 --memory=2000 --wait=true --driver=docker failed: exit status 1

=== CONT  TestOffline
panic.go:617: *** TestOffline FAILED at 2021-03-10 20:31:38.5276283 +0000 GMT m=+5238.228429201
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:226: ======>  post-mortem[TestOffline]: docker inspect <======
helpers_test.go:227: (dbg) Run:  docker inspect offline-docker-20210310201637-6496
helpers_test.go:231: (dbg) docker inspect offline-docker-20210310201637-6496:

-- stdout --
	[
	    {
	        "Id": "4e6e271d4591670d3071ae65709a121901b8d0cb045c8520d44ca366142492b5",
	        "Created": "2021-03-10T20:16:56.3325919Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 124150,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-03-10T20:17:01.33656Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a776c544501ab7f8d55c0f9d8df39bc284df5e744ef1ab4fa59bbd753c98d5f6",
	        "ResolvConfPath": "/var/lib/docker/containers/4e6e271d4591670d3071ae65709a121901b8d0cb045c8520d44ca366142492b5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4e6e271d4591670d3071ae65709a121901b8d0cb045c8520d44ca366142492b5/hostname",
	        "HostsPath": "/var/lib/docker/containers/4e6e271d4591670d3071ae65709a121901b8d0cb045c8520d44ca366142492b5/hosts",
	        "LogPath": "/var/lib/docker/containers/4e6e271d4591670d3071ae65709a121901b8d0cb045c8520d44ca366142492b5/4e6e271d4591670d3071ae65709a121901b8d0cb045c8520d44ca366142492b5-json.log",
	        "Name": "/offline-docker-20210310201637-6496",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "offline-docker-20210310201637-6496:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2097152000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2097152000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/18ead9160c8b8fea406fb21b3dc0b8a946dc0977e1649be359cd8c3b50aa5a1c-init/diff:/var/lib/docker/overlay2/e5312d15a8a915e93a04215cc746ab0f3ee777399eec34d39c62e1159ded6475/diff:/var/lib/docker/overlay2/7a0a882052451678339ecb9d7440ec6d5af2189761df37cbde268f01c77df460/diff:/var/lib/docker/overlay2/746bde091f748e661ba6c95c88447941058b3ed556ae8093cbbb4e946b8cc8fc/diff:/var/lib/docker/overlay2/fc816087ea38e2594891ac87d2dada91401ed5005ffde006568049c9be7a9cb3/diff:/var/lib/docker/overlay2/c938fe5d14accfe7fb1cfd038f5f0c36e7c4e36df3f270be6928a759dfc0318c/diff:/var/lib/docker/overlay2/79f1d61f3af3e525b3f9a25c7bdaddec8275b7d0abab23c045e084119ae755dc/diff:/var/lib/docker/overlay2/90cf1530750ebebdcb04a136dc1eff0ae9c5f7d0f826d2942e3bc67cc0d42206/diff:/var/lib/docker/overlay2/c4edb0a62bb52174a4a17530aa58139aaca1b2f67bef36b9ac59af93c93e2c94/diff:/var/lib/docker/overlay2/684c18bb05910b09352e8708238b2fa6512de91e323e59bd78eacb12954baeb1/diff:/var/lib/docker/overlay2/2e3b944d36fea0e106f5f8885b981d040914c224a656b6480e4ccf00c624527c/diff:/var/lib/docker/overlay2/87489103e2d5dffa2fc32cae802418fe57adcbe87d86e2828e51be8620ef72d6/diff:/var/lib/docker/overlay2/1ceb557df677600f55a42e739f85c2aaa6ce2a1311fb93d5cc655d3e5c25ad62/diff:/var/lib/docker/overlay2/a49e38adf04466e822fa1b2fef7a5de41bb43b706f64d6d440f7e9f95f86634d/diff:/var/lib/docker/overlay2/9dc51530b3628075c33d53a96d8db831d206f65190ca62a4730d525c9d2433bc/diff:/var/lib/docker/overlay2/53f43d443cc953bcbb3b9fe305be45d40de2233dd6a1e94a6e0120c143df01b9/diff:/var/lib/docker/overlay2/5449e801ebacbe613538db4de658f2419f742a0327f7c0f5079ee525f64c2f45/diff:/var/lib/docker/overlay2/ea06a62429770eade2e9df8b04495201586fe6a867b2d3f827db06031f237171/diff:/var/lib/docker/overlay2/d6c5e6e0bb04112b08a45e5a42f364eaa6a1d4d0384b8833c6f268a32b51f9fd/diff:/var/lib/docker/overlay2/6ccb699e5aa7954f17536e35273361a64a06f22e3951b5adad1acd3645302d27/diff:/var/lib/docker/overlay2/004fc7885cd3ddf4f532a6047a393b87947996f738f36ee3e048130b82676264/diff:/var/lib/docker/overlay2/28ff0102771d8fb3c2957c482cc5dde1a6d041144f1be78d385fd4a305b89048/diff:/var/lib/docker/overlay2/7cc8b00a8717390d3cf217bc23ebf0a54386c4dda751f8efe67a113c5f724000/diff:/var/lib/docker/overlay2/4f20f5888dc26fc4b177fad4ffbb2c9e5094bbe36dffc15530b13ee98b5ee0de/diff:/var/lib/docker/overlay2/5070ba0e8a3df0c0cb4c34bb528c4e1aa4cf13d689f2bbbb0a4033d021c3be55/diff:/var/lib/docker/overlay2/85fb29f9d3603bf7f2eb144a096a9fa432e57ecf575b0c26cc8e06a79e791202/diff:/var/lib/docker/overlay2/27786674974d44dc21a18719c8c334da44a78f825923f5e86233f5acb0fb9a6b/diff:/var/lib/docker/overlay2/6161fd89f27732c46f547c0e38ff9f16ab0dded81cef66938df3935f21afad40/diff:/var/lib/docker/overlay2/222f85b8439a41d62b9939a4060303543930a89e81bc2fbb3f5211b3bcc73501/diff:/var/lib/docker/overlay2/0ab769dc143c949b9367c48721351c8571d87199847b23acedce0134a8cf4d0d/diff:/var/lib/docker/overlay2/a3d1b5f3c1ccf55415e08e306d69739f2d6a4346a4b20ff9273ac1417c146a72/diff:/var/lib/docker/overlay2/a6111408a4deb25c6259bdabf49b0b77a4ae38a1c9c76c9dd238ab00e4377edd/diff:/var/lib/docker/overlay2/beb22abaee6a1cc2ce6935aa31a8ea6aa14ba428d35b5c2423d1d9e4b5b1e88e/diff:/var/lib/docker/overlay2/0c228b1d0cd4cc1d3df10a7397af2a91846bfbad745d623822dc200ff7ddd4f7/diff:/var/lib/docker/overlay2/3c7f265116e8b14ac4c171583da8e68ad42c63e9faa735c10d5b01c2889c03d8/diff:/var/lib/docker/overlay2/85c9b77072d5575bcf4bc334d0e85cccec247e2ff21bc8a181ee7c1411481a92/diff:/var/lib/docker/overlay2/1b4797f63f1bf0acad8cd18715c737239cbce844810baf1378b65224c01957ff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/18ead9160c8b8fea406fb21b3dc0b8a946dc0977e1649be359cd8c3b50aa5a1c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/18ead9160c8b8fea406fb21b3dc0b8a946dc0977e1649be359cd8c3b50aa5a1c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/18ead9160c8b8fea406fb21b3dc0b8a946dc0977e1649be359cd8c3b50aa5a1c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "offline-docker-20210310201637-6496",
	                "Source": "/var/lib/docker/volumes/offline-docker-20210310201637-6496/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "offline-docker-20210310201637-6496",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "offline-docker-20210310201637-6496",
	                "name.minikube.sigs.k8s.io": "offline-docker-20210310201637-6496",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f71e845110ae810dfebdaaf782cc96bb89e60104186db7d51efc5c2910a18c7a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55106"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55102"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55085"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55097"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55089"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f71e845110ae",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "5e8b91a51b8aa5369b2cbef84f6b2c51c02387e199bd09d8d01c2650d9b5c74a",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.4",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:04",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "08a9fabe5ecafdd8ee96dd0a14d1906037e82623ac7365c62552213da5f59cf7",
	                    "EndpointID": "5e8b91a51b8aa5369b2cbef84f6b2c51c02387e199bd09d8d01c2650d9b5c74a",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.4",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:04",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p offline-docker-20210310201637-6496 -n offline-docker-20210310201637-6496
helpers_test.go:235: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p offline-docker-20210310201637-6496 -n offline-docker-20210310201637-6496: (24.3297108s)
helpers_test.go:240: <<< TestOffline FAILED: start of post-mortem logs <<<
helpers_test.go:241: ======>  post-mortem[TestOffline]: minikube logs <======
helpers_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe -p offline-docker-20210310201637-6496 logs -n 25

=== CONT  TestOffline
helpers_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe -p offline-docker-20210310201637-6496 logs -n 25: (1m54.4434333s)
helpers_test.go:248: TestOffline logs: 
-- stdout --
	* ==> Docker <==
	* -- Logs begin at Wed 2021-03-10 20:17:08 UTC, end at Wed 2021-03-10 20:33:17 UTC. --
	* Mar 10 20:24:48 offline-docker-20210310201637-6496 systemd[1]: Stopped Docker Application Container Engine.
	* Mar 10 20:24:48 offline-docker-20210310201637-6496 systemd[1]: Starting Docker Application Container Engine...
	* Mar 10 20:24:49 offline-docker-20210310201637-6496 dockerd[746]: time="2021-03-10T20:24:49.080063700Z" level=info msg="Starting up"
	* Mar 10 20:24:49 offline-docker-20210310201637-6496 dockerd[746]: time="2021-03-10T20:24:49.085950800Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Mar 10 20:24:49 offline-docker-20210310201637-6496 dockerd[746]: time="2021-03-10T20:24:49.086044700Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Mar 10 20:24:49 offline-docker-20210310201637-6496 dockerd[746]: time="2021-03-10T20:24:49.086091700Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	* Mar 10 20:24:49 offline-docker-20210310201637-6496 dockerd[746]: time="2021-03-10T20:24:49.086125400Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Mar 10 20:24:49 offline-docker-20210310201637-6496 dockerd[746]: time="2021-03-10T20:24:49.118894400Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Mar 10 20:24:49 offline-docker-20210310201637-6496 dockerd[746]: time="2021-03-10T20:24:49.119040800Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Mar 10 20:24:49 offline-docker-20210310201637-6496 dockerd[746]: time="2021-03-10T20:24:49.119106000Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	* Mar 10 20:24:49 offline-docker-20210310201637-6496 dockerd[746]: time="2021-03-10T20:24:49.119142000Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Mar 10 20:24:54 offline-docker-20210310201637-6496 dockerd[746]: time="2021-03-10T20:24:54.846678300Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	* Mar 10 20:24:54 offline-docker-20210310201637-6496 dockerd[746]: time="2021-03-10T20:24:54.916980700Z" level=info msg="Loading containers: start."
	* Mar 10 20:24:56 offline-docker-20210310201637-6496 dockerd[746]: time="2021-03-10T20:24:56.804639200Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	* Mar 10 20:24:57 offline-docker-20210310201637-6496 dockerd[746]: time="2021-03-10T20:24:57.515614600Z" level=info msg="Loading containers: done."
	* Mar 10 20:24:57 offline-docker-20210310201637-6496 dockerd[746]: time="2021-03-10T20:24:57.716380000Z" level=info msg="Docker daemon" commit=46229ca graphdriver(s)=overlay2 version=20.10.3
	* Mar 10 20:24:57 offline-docker-20210310201637-6496 dockerd[746]: time="2021-03-10T20:24:57.716812500Z" level=info msg="Daemon has completed initialization"
	* Mar 10 20:24:57 offline-docker-20210310201637-6496 systemd[1]: Started Docker Application Container Engine.
	* Mar 10 20:24:58 offline-docker-20210310201637-6496 dockerd[746]: time="2021-03-10T20:24:58.297341900Z" level=info msg="API listen on [::]:2376"
	* Mar 10 20:24:58 offline-docker-20210310201637-6496 dockerd[746]: time="2021-03-10T20:24:58.418706100Z" level=info msg="API listen on /var/run/docker.sock"
	* Mar 10 20:28:39 offline-docker-20210310201637-6496 dockerd[746]: time="2021-03-10T20:28:39.416113000Z" level=info msg="ignoring event" container=6d1d17ab59142e91b6e405d0934d72410ab026894dbd133c173969f66e0a2415 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 20:29:52 offline-docker-20210310201637-6496 dockerd[746]: time="2021-03-10T20:29:51.999941800Z" level=error msg="Handler for GET /v1.40/containers/59ddbc26343d5df95a088903b8bcd9f3113a4a9ed08d65398e7e1e29d97d16b2/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
	* Mar 10 20:29:52 offline-docker-20210310201637-6496 dockerd[746]: time="2021-03-10T20:29:52.044667200Z" level=error msg="Handler for GET /v1.40/containers/59ddbc26343d5df95a088903b8bcd9f3113a4a9ed08d65398e7e1e29d97d16b2/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
	* Mar 10 20:29:52 offline-docker-20210310201637-6496 dockerd[746]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
	* Mar 10 20:29:52 offline-docker-20210310201637-6496 dockerd[746]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	* 4ca81299d8be9       43154ddb57a83       16 seconds ago      Created             kube-proxy                0                   b74b203f0b157
	* 59ddbc26343d5       a27166429d98e       4 minutes ago       Running             kube-controller-manager   1                   0afa21e4b0b62
	* 1e1b29b1125ed       ed2c44fbdd78b       6 minutes ago       Running             kube-scheduler            0                   b6088b6c67259
	* d8ea6b0aed966       0369cf4303ffd       6 minutes ago       Running             etcd                      0                   ee83a1b44c195
	* 66f33c7c5c227       a8c2fdb8bf76e       6 minutes ago       Running             kube-apiserver            0                   33c343c67a145
	* 
	* ==> describe nodes <==
	* Name:               offline-docker-20210310201637-6496
	* Roles:              control-plane,master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=offline-docker-20210310201637-6496
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=4d52d607da107e3e4541e40b81376520ee87d4c2
	*                     minikube.k8s.io/name=offline-docker-20210310201637-6496
	*                     minikube.k8s.io/updated_at=2021_03_10T20_29_41_0700
	*                     minikube.k8s.io/version=v1.18.1
	*                     node-role.kubernetes.io/control-plane=
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Wed, 10 Mar 2021 20:28:48 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  offline-docker-20210310201637-6496
	*   AcquireTime:     <unset>
	*   RenewTime:       Wed, 10 Mar 2021 20:33:24 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Wed, 10 Mar 2021 20:32:19 +0000   Wed, 10 Mar 2021 20:28:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Wed, 10 Mar 2021 20:32:19 +0000   Wed, 10 Mar 2021 20:28:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Wed, 10 Mar 2021 20:32:19 +0000   Wed, 10 Mar 2021 20:28:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Wed, 10 Mar 2021 20:32:19 +0000   Wed, 10 Mar 2021 20:32:19 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  172.17.0.4
	*   Hostname:    offline-docker-20210310201637-6496
	* Capacity:
	*   cpu:                4
	*   ephemeral-storage:  65792556Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             20481980Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                4
	*   ephemeral-storage:  65792556Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             20481980Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 84fb46bd39d2483a97ab4430ee4a5e3a
	*   System UUID:                d790789f-7be7-4ff4-8e83-77753111c09b
	*   Boot ID:                    1e43cb90-c73a-415b-9855-33dabbdc5a83
	*   Kernel Version:             4.19.121-linuxkit
	*   OS Image:                   Ubuntu 20.04.1 LTS
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://20.10.3
	*   Kubelet Version:            v1.20.2
	*   Kube-Proxy Version:         v1.20.2
	* PodCIDR:                      10.244.0.0/24
	* PodCIDRs:                     10.244.0.0/24
	* Non-terminated Pods:          (6 in total)
	*   Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	*   kube-system                 coredns-74ff55c5b-mx7ng                                       100m (2%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m29s
	*   kube-system                 etcd-offline-docker-20210310201637-6496                       100m (2%)     0 (0%)      100Mi (0%)       0 (0%)         4m10s
	*   kube-system                 kube-apiserver-offline-docker-20210310201637-6496             250m (6%)     0 (0%)      0 (0%)           0 (0%)         82s
	*   kube-system                 kube-controller-manager-offline-docker-20210310201637-6496    200m (5%)     0 (0%)      0 (0%)           0 (0%)         4m37s
	*   kube-system                 kube-proxy-x5fn8                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	*   kube-system                 kube-scheduler-offline-docker-20210310201637-6496             100m (2%)     0 (0%)      0 (0%)           0 (0%)         4m6s
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests    Limits
	*   --------           --------    ------
	*   cpu                750m (18%)  0 (0%)
	*   memory             170Mi (0%)  170Mi (0%)
	*   ephemeral-storage  100Mi (0%)  0 (0%)
	*   hugepages-1Gi      0 (0%)      0 (0%)
	*   hugepages-2Mi      0 (0%)      0 (0%)
	* Events:
	*   Type    Reason                   Age    From     Message
	*   ----    ------                   ----   ----     -------
	*   Normal  Starting                 3m11s  kubelet  Starting kubelet.
	*   Normal  NodeHasSufficientMemory  2m59s  kubelet  Node offline-docker-20210310201637-6496 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    2m59s  kubelet  Node offline-docker-20210310201637-6496 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     2m59s  kubelet  Node offline-docker-20210310201637-6496 status is now: NodeHasSufficientPID
	*   Normal  NodeNotReady             2m42s  kubelet  Node offline-docker-20210310201637-6496 status is now: NodeNotReady
	*   Normal  NodeAllocatableEnforced  87s    kubelet  Updated Node Allocatable limit across pods
	*   Normal  NodeReady                79s    kubelet  Node offline-docker-20210310201637-6496 status is now: NodeReady
	* 
	* ==> dmesg <==
	* [  +0.000006]  __hrtimer_run_queues+0x117/0x1c4
	* [  +0.000004]  ? ktime_get_update_offsets_now+0x36/0x95
	* [  +0.000002]  hrtimer_interrupt+0x92/0x165
	* [  +0.000004]  hv_stimer0_isr+0x20/0x2d
	* [  +0.000008]  hv_stimer0_vector_handler+0x3b/0x57
	* [  +0.000010]  hv_stimer0_callback_vector+0xf/0x20
	* [  +0.000001]  </IRQ>
	* [  +0.000002] RIP: 0010:native_safe_halt+0x7/0x8
	* [  +0.000002] Code: 60 02 df f0 83 44 24 fc 00 48 8b 00 a8 08 74 0b 65 81 25 dd ce 6f 71 ff ff ff 7f c3 e8 ce e6 72 ff f4 c3 e8 c7 e6 72 ff fb f4 <c3> 0f 1f 44 00 00 53 e8 69 0e 82 ff 65 8b 35 83 64 6f 71 31 ff e8
	* [  +0.000001] RSP: 0018:ffffffff8f203eb0 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff12
	* [  +0.000002] RAX: ffffffff8e918b30 RBX: 0000000000000000 RCX: ffffffff8f253150
	* [  +0.000001] RDX: 000000000012167e RSI: 0000000000000000 RDI: 0000000000000001
	* [  +0.000001] RBP: 0000000000000000 R08: 00000066a1710248 R09: 0000006be2541d3e
	* [  +0.000001] R10: ffff9130ad802288 R11: 0000000000000000 R12: 0000000000000000
	* [  +0.000001] R13: ffffffff8f215780 R14: 00000000f6d76244 R15: 0000000000000000
	* [  +0.000002]  ? __sched_text_end+0x1/0x1
	* [  +0.000011]  default_idle+0x1b/0x2c
	* [  +0.000001]  do_idle+0xe5/0x216
	* [  +0.000003]  cpu_startup_entry+0x6f/0x71
	* [  +0.000003]  start_kernel+0x4f6/0x514
	* [  +0.000006]  secondary_startup_64+0xa4/0xb0
	* [  +0.000006] ---[ end trace 8aa9ce4b885e8e86 ]---
	* [ +25.977799] hrtimer: interrupt took 3356400 ns
	* [Mar10 19:08] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	* [Mar10 19:49] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	* 
	* ==> etcd [d8ea6b0aed96] <==
	* 2021-03-10 20:31:12.727059 W | etcdserver: read-only range request "key:\"/registry/deployments/kube-system/coredns\" " with result "range_response_count:1 size:3886" took too long (107.81ms) to execute
	* 2021-03-10 20:31:13.459794 W | etcdserver: request "header:<ID:912955418950842472 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/coredns-74ff55c5b-mx7ng\" mod_revision:421 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-74ff55c5b-mx7ng\" value_size:3756 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-74ff55c5b-mx7ng\" > >>" with result "size:16" took too long (166.5948ms) to execute
	* 2021-03-10 20:31:13.868655 W | etcdserver: request "header:<ID:912955418950842474 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-74ff55c5b\" mod_revision:391 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-74ff55c5b\" value_size:3598 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-74ff55c5b\" > >>" with result "size:16" took too long (408.3246ms) to execute
	* 2021-03-10 20:31:14.089028 W | etcdserver: request "header:<ID:912955418950842477 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/default/default\" mod_revision:428 > success:<request_put:<key:\"/registry/serviceaccounts/default/default\" value_size:145 >> failure:<request_range:<key:\"/registry/serviceaccounts/default/default\" > >>" with result "size:16" took too long (219.8511ms) to execute
	* 2021-03-10 20:31:14.176903 W | etcdserver: read-only range request "key:\"/registry/endpointslices/kube-system/kube-dns-d6klh\" " with result "range_response_count:1 size:1018" took too long (1.0245038s) to execute
	* 2021-03-10 20:31:14.219603 W | etcdserver: read-only range request "key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" " with result "range_response_count:1 size:129" took too long (1.038043s) to execute
	* 2021-03-10 20:31:25.192870 W | etcdserver: read-only range request "key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" " with result "range_response_count:0 size:5" took too long (137.7842ms) to execute
	* 2021-03-10 20:31:25.627174 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (540.057ms) to execute
	* 2021-03-10 20:31:25.835895 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-74ff55c5b-pgqq8\" " with result "range_response_count:1 size:3841" took too long (485.1931ms) to execute
	* 2021-03-10 20:31:28.160512 W | etcdserver: read-only range request "key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true " with result "range_response_count:0 size:7" took too long (121.5408ms) to execute
	* 2021-03-10 20:31:28.673604 W | etcdserver: request "header:<ID:912955418950842546 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:457 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:3848 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >>" with result "size:16" took too long (184.5767ms) to execute
	* 2021-03-10 20:31:28.690544 W | etcdserver: read-only range request "key:\"/registry/endpointslices/default/kubernetes\" " with result "range_response_count:1 size:482" took too long (662.5532ms) to execute
	* 2021-03-10 20:31:29.187169 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:5 size:23541" took too long (109.0408ms) to execute
	* 2021-03-10 20:32:18.050540 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 20:32:26.433433 W | etcdserver: read-only range request "key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (244.1387ms) to execute
	* 2021-03-10 20:32:26.434086 W | etcdserver: request "header:<ID:912955418950842726 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-74ff55c5b-mx7ng.166b1554f1a897c4\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-74ff55c5b-mx7ng.166b1554f1a897c4\" value_size:703 lease:912955418950842663 >> failure:<>>" with result "size:16" took too long (174.4833ms) to execute
	* 2021-03-10 20:32:27.530279 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 20:32:37.699382 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 20:32:42.067539 W | etcdserver: request "header:<ID:912955418950842774 > lease_revoke:<id:0cab781dd2553967>" with result "size:28" took too long (261.159ms) to execute
	* 2021-03-10 20:32:47.993455 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 20:32:57.262006 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 20:33:07.182677 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 20:33:17.317092 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 20:33:27.544282 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 20:33:37.226584 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 
	* ==> kernel <==
	*  20:33:45 up  1:34,  0 users,  load average: 223.96, 165.40, 82.05
	* Linux offline-docker-20210310201637-6496 4.19.121-linuxkit #1 SMP Tue Dec 1 17:50:32 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [66f33c7c5c22] <==
	* Trace[225112925]: ---"Object stored in database" 464ms (20:31:00.807)
	* Trace[225112925]: [503.6735ms] [503.6735ms] END
	* I0310 20:31:28.744180       1 trace.go:205] Trace[937971379]: "Get" url:/apis/discovery.k8s.io/v1beta1/namespaces/default/endpointslices/kubernetes,user-agent:kube-apiserver/v1.20.2 (linux/amd64) kubernetes/faecb19,client:127.0.0.1 (10-Mar-2021 20:31:28.010) (total time: 733ms):
	* Trace[937971379]: ---"About to write a response" 733ms (20:31:00.744)
	* Trace[937971379]: [733.6437ms] [733.6437ms] END
	* I0310 20:31:28.781454       1 trace.go:205] Trace[1514619049]: "GuaranteedUpdate etcd3" type:*apps.Deployment (10-Mar-2021 20:31:28.141) (total time: 639ms):
	* Trace[1514619049]: ---"Transaction committed" 541ms (20:31:00.777)
	* Trace[1514619049]: [639.9591ms] [639.9591ms] END
	* I0310 20:31:28.785744       1 trace.go:205] Trace[1773644254]: "Update" url:/apis/apps/v1/namespaces/kube-system/deployments/coredns/status,user-agent:kube-controller-manager/v1.20.2 (linux/amd64) kubernetes/faecb19/system:serviceaccount:kube-system:deployment-controller,client:172.17.0.4 (10-Mar-2021 20:31:28.135) (total time: 650ms):
	* Trace[1773644254]: ---"Object stored in database" 640ms (20:31:00.781)
	* Trace[1773644254]: [650.1631ms] [650.1631ms] END
	* I0310 20:31:30.857852       1 client.go:360] parsed scheme: "passthrough"
	* I0310 20:31:30.857955       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 20:31:30.857983       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* I0310 20:32:06.229866       1 client.go:360] parsed scheme: "passthrough"
	* I0310 20:32:06.230018       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 20:32:06.230059       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* I0310 20:32:22.851412       1 trace.go:205] Trace[1558118761]: "Create" url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.20.2 (linux/amd64) kubernetes/faecb19,client:172.17.0.4 (10-Mar-2021 20:32:22.297) (total time: 553ms):
	* Trace[1558118761]: [553.8667ms] [553.8667ms] END
	* I0310 20:32:43.105030       1 client.go:360] parsed scheme: "passthrough"
	* I0310 20:32:43.148381       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 20:32:43.148418       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* I0310 20:33:20.231522       1 client.go:360] parsed scheme: "passthrough"
	* I0310 20:33:20.231613       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 20:33:20.231630       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* 
	* ==> kube-controller-manager [59ddbc26343d] <==
	* I0310 20:30:58.630834       1 shared_informer.go:247] Caches are synced for crt configmap 
	* I0310 20:30:58.678490       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	* I0310 20:30:58.844826       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	* I0310 20:30:58.844963       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	* I0310 20:30:58.865317       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	* I0310 20:30:58.865332       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	* I0310 20:30:58.865363       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	* I0310 20:30:58.960700       1 shared_informer.go:247] Caches are synced for disruption 
	* I0310 20:30:58.960725       1 disruption.go:339] Sending events to api server.
	* I0310 20:30:58.974464       1 shared_informer.go:247] Caches are synced for resource quota 
	* I0310 20:30:58.974587       1 shared_informer.go:247] Caches are synced for ReplicationController 
	* I0310 20:30:58.975246       1 shared_informer.go:247] Caches are synced for resource quota 
	* I0310 20:30:59.194912       1 range_allocator.go:373] Set node offline-docker-20210310201637-6496 PodCIDR to [10.244.0.0/24]
	* I0310 20:31:00.437536       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	* I0310 20:31:01.238843       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I0310 20:31:01.255749       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I0310 20:31:01.255781       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* I0310 20:31:02.723540       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	* I0310 20:31:06.229449       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	* I0310 20:31:06.537811       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-x5fn8"
	* I0310 20:31:09.951616       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-mx7ng"
	* I0310 20:31:12.592674       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-pgqq8"
	* I0310 20:31:24.422167       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	* I0310 20:31:26.443727       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-pgqq8"
	* I0310 20:32:21.421626       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	* 
	* ==> kube-scheduler [1e1b29b1125e] <==
	* E0310 20:28:50.497262       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E0310 20:28:50.640927       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E0310 20:28:50.642654       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E0310 20:28:50.672486       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E0310 20:28:50.699841       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E0310 20:28:50.754616       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E0310 20:28:52.503819       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E0310 20:28:53.749648       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E0310 20:28:53.908342       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E0310 20:28:54.229270       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E0310 20:28:54.296729       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E0310 20:28:55.057604       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E0310 20:28:55.110443       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E0310 20:28:55.360528       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E0310 20:28:55.711778       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E0310 20:28:55.718632       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E0310 20:28:55.973909       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E0310 20:28:56.302883       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E0310 20:29:02.118795       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E0310 20:29:02.749405       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E0310 20:29:03.030832       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E0310 20:29:03.481449       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E0310 20:29:03.513915       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E0310 20:29:03.794957       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* I0310 20:29:18.907084       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
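(Editor's note, not part of the captured log: the `reflector.go` errors above are the usual startup window in which kube-scheduler begins listing resources before its RBAC bindings have been reconciled; once the caches sync, the errors stop. A minimal sketch for tallying which resources failed, assuming only the standard "cannot list resource" error wording shown above:)

```python
import re
from collections import Counter

# Tally (resource, api_group) pairs from reflector "forbidden" error lines.
# The empty API group ("") is the core group.
FORBIDDEN_RE = re.compile(r'cannot list resource "([^"]+)" in API group "([^"]*)"')

def tally_forbidden(lines):
    """Count which resources kube-scheduler was forbidden to list."""
    counts = Counter()
    for line in lines:
        m = FORBIDDEN_RE.search(line)
        if m:
            resource, group = m.groups()
            counts[(resource, group or "core")] += 1
    return counts

sample = [
    'E0310 20:28:55.110443 1 reflector.go:138] ... User "system:kube-scheduler" '
    'cannot list resource "pods" in API group "" at the cluster scope',
]
print(tally_forbidden(sample))  # Counter({('pods', 'core'): 1})
```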
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-03-10 20:17:08 UTC, end at Wed 2021-03-10 20:33:53 UTC. --
	* Mar 10 20:32:15 offline-docker-20210310201637-6496 kubelet[3302]: I0310 20:32:15.589260    3302 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/1f88e73e0957c52588eac812ede8d219-etcd-certs") pod "etcd-offline-docker-20210310201637-6496" (UID: "1f88e73e0957c52588eac812ede8d219")
	* Mar 10 20:32:15 offline-docker-20210310201637-6496 kubelet[3302]: I0310 20:32:15.647681    3302 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/1f88e73e0957c52588eac812ede8d219-etcd-data") pod "etcd-offline-docker-20210310201637-6496" (UID: "1f88e73e0957c52588eac812ede8d219")
	* Mar 10 20:32:15 offline-docker-20210310201637-6496 kubelet[3302]: I0310 20:32:15.683676    3302 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/c76849c2d663f4f6107534876fa4a989-ca-certs") pod "kube-apiserver-offline-docker-20210310201637-6496" (UID: "c76849c2d663f4f6107534876fa4a989")
	* Mar 10 20:32:15 offline-docker-20210310201637-6496 kubelet[3302]: I0310 20:32:15.856601    3302 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/c76849c2d663f4f6107534876fa4a989-etc-ca-certificates") pod "kube-apiserver-offline-docker-20210310201637-6496" (UID: "c76849c2d663f4f6107534876fa4a989")
	* Mar 10 20:32:15 offline-docker-20210310201637-6496 kubelet[3302]: I0310 20:32:15.903473    3302 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/c76849c2d663f4f6107534876fa4a989-k8s-certs") pod "kube-apiserver-offline-docker-20210310201637-6496" (UID: "c76849c2d663f4f6107534876fa4a989")
	* Mar 10 20:32:16 offline-docker-20210310201637-6496 kubelet[3302]: I0310 20:32:16.004601    3302 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/57b8c22dbe6410e4bd36cf14b0f8bdc7-etc-ca-certificates") pod "kube-controller-manager-offline-docker-20210310201637-6496" (UID: "57b8c22dbe6410e4bd36cf14b0f8bdc7")
	* Mar 10 20:32:16 offline-docker-20210310201637-6496 kubelet[3302]: I0310 20:32:16.010307    3302 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/57b8c22dbe6410e4bd36cf14b0f8bdc7-usr-share-ca-certificates") pod "kube-controller-manager-offline-docker-20210310201637-6496" (UID: "57b8c22dbe6410e4bd36cf14b0f8bdc7")
	* Mar 10 20:32:16 offline-docker-20210310201637-6496 kubelet[3302]: I0310 20:32:16.010579    3302 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/57b8c22dbe6410e4bd36cf14b0f8bdc7-k8s-certs") pod "kube-controller-manager-offline-docker-20210310201637-6496" (UID: "57b8c22dbe6410e4bd36cf14b0f8bdc7")
	* Mar 10 20:32:16 offline-docker-20210310201637-6496 kubelet[3302]: I0310 20:32:16.029535    3302 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/99015d8a-0b93-42a8-b8d0-0f35646b5bc6-kube-proxy") pod "kube-proxy-x5fn8" (UID: "99015d8a-0b93-42a8-b8d0-0f35646b5bc6")
	* Mar 10 20:32:16 offline-docker-20210310201637-6496 kubelet[3302]: I0310 20:32:16.030077    3302 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/99015d8a-0b93-42a8-b8d0-0f35646b5bc6-lib-modules") pod "kube-proxy-x5fn8" (UID: "99015d8a-0b93-42a8-b8d0-0f35646b5bc6")
	* Mar 10 20:32:16 offline-docker-20210310201637-6496 kubelet[3302]: I0310 20:32:16.051421    3302 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-7vzgm" (UniqueName: "kubernetes.io/secret/99015d8a-0b93-42a8-b8d0-0f35646b5bc6-kube-proxy-token-7vzgm") pod "kube-proxy-x5fn8" (UID: "99015d8a-0b93-42a8-b8d0-0f35646b5bc6")
	* Mar 10 20:32:16 offline-docker-20210310201637-6496 kubelet[3302]: I0310 20:32:16.060701    3302 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/57b8c22dbe6410e4bd36cf14b0f8bdc7-flexvolume-dir") pod "kube-controller-manager-offline-docker-20210310201637-6496" (UID: "57b8c22dbe6410e4bd36cf14b0f8bdc7")
	* Mar 10 20:32:16 offline-docker-20210310201637-6496 kubelet[3302]: I0310 20:32:16.083157    3302 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/57b8c22dbe6410e4bd36cf14b0f8bdc7-kubeconfig") pod "kube-controller-manager-offline-docker-20210310201637-6496" (UID: "57b8c22dbe6410e4bd36cf14b0f8bdc7")
	* Mar 10 20:32:16 offline-docker-20210310201637-6496 kubelet[3302]: I0310 20:32:16.083410    3302 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/99015d8a-0b93-42a8-b8d0-0f35646b5bc6-xtables-lock") pod "kube-proxy-x5fn8" (UID: "99015d8a-0b93-42a8-b8d0-0f35646b5bc6")
	* Mar 10 20:32:16 offline-docker-20210310201637-6496 kubelet[3302]: I0310 20:32:16.083613    3302 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/c76849c2d663f4f6107534876fa4a989-usr-local-share-ca-certificates") pod "kube-apiserver-offline-docker-20210310201637-6496" (UID: "c76849c2d663f4f6107534876fa4a989")
	* Mar 10 20:32:16 offline-docker-20210310201637-6496 kubelet[3302]: I0310 20:32:16.266432    3302 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/c76849c2d663f4f6107534876fa4a989-usr-share-ca-certificates") pod "kube-apiserver-offline-docker-20210310201637-6496" (UID: "c76849c2d663f4f6107534876fa4a989")
	* Mar 10 20:32:16 offline-docker-20210310201637-6496 kubelet[3302]: I0310 20:32:16.266538    3302 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/57b8c22dbe6410e4bd36cf14b0f8bdc7-ca-certs") pod "kube-controller-manager-offline-docker-20210310201637-6496" (UID: "57b8c22dbe6410e4bd36cf14b0f8bdc7")
	* Mar 10 20:32:16 offline-docker-20210310201637-6496 kubelet[3302]: I0310 20:32:16.266620    3302 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/57b8c22dbe6410e4bd36cf14b0f8bdc7-usr-local-share-ca-certificates") pod "kube-controller-manager-offline-docker-20210310201637-6496" (UID: "57b8c22dbe6410e4bd36cf14b0f8bdc7")
	* Mar 10 20:32:16 offline-docker-20210310201637-6496 kubelet[3302]: I0310 20:32:16.266660    3302 reconciler.go:157] Reconciler: start to sync state
	* Mar 10 20:32:23 offline-docker-20210310201637-6496 kubelet[3302]: E0310 20:32:23.140010    3302 kubelet.go:1638] Failed creating a mirror pod for "kube-apiserver-offline-docker-20210310201637-6496_kube-system(c76849c2d663f4f6107534876fa4a989)": pods "kube-apiserver-offline-docker-20210310201637-6496" already exists
	* Mar 10 20:32:31 offline-docker-20210310201637-6496 kubelet[3302]: W0310 20:32:31.958483    3302 pod_container_deletor.go:79] Container "b74b203f0b15777fe928eb8cfd1f02029b1bc2fd6e472bf367ed6a45ced03c58" not found in pod's containers
	* Mar 10 20:32:32 offline-docker-20210310201637-6496 kubelet[3302]: I0310 20:32:32.207319    3302 topology_manager.go:187] [topologymanager] Topology Admit Handler
	* Mar 10 20:32:32 offline-docker-20210310201637-6496 kubelet[3302]: I0310 20:32:32.984094    3302 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9647eb35-f64a-494c-8829-591e71844132-config-volume") pod "coredns-74ff55c5b-mx7ng" (UID: "9647eb35-f64a-494c-8829-591e71844132")
	* Mar 10 20:32:33 offline-docker-20210310201637-6496 kubelet[3302]: I0310 20:32:33.243975    3302 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-lqh66" (UniqueName: "kubernetes.io/secret/9647eb35-f64a-494c-8829-591e71844132-coredns-token-lqh66") pod "coredns-74ff55c5b-mx7ng" (UID: "9647eb35-f64a-494c-8829-591e71844132")
	* Mar 10 20:32:59 offline-docker-20210310201637-6496 kubelet[3302]: W0310 20:32:59.438576    3302 pod_container_deletor.go:79] Container "d32035ad7eea1892ebae75af1982787c4f06930c858ceb8ae9a98b6d19173bd5" not found in pod's containers
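(Editor's note, not part of the captured log: the kubelet reconciler lines above record each volume being attached for a static or system pod. A sketch for grouping those attachments by pod, assuming only the `VerifyControllerAttachedVolume` line shape shown above:)

```python
import re

# Extract volume-to-pod attachments from kubelet reconciler log lines.
VOLUME_RE = re.compile(
    r'started for volume "(?P<volume>[^"]+)" '
    r'\(UniqueName: "(?P<unique>[^"]+)"\) '
    r'pod "(?P<pod>[^"]+)"'
)

def volumes_by_pod(lines):
    """Group volume names by the pod they are being attached for."""
    result = {}
    for line in lines:
        m = VOLUME_RE.search(line)
        if m:
            result.setdefault(m.group("pod"), []).append(m.group("volume"))
    return result

sample = [
    'I0310 20:32:16.029535 3302 reconciler.go:224] operationExecutor.'
    'VerifyControllerAttachedVolume started for volume "kube-proxy" '
    '(UniqueName: "kubernetes.io/configmap/99015d8a-kube-proxy") '
    'pod "kube-proxy-x5fn8" (UID: "99015d8a")',
]
print(volumes_by_pod(sample))  # {'kube-proxy-x5fn8': ['kube-proxy']}
```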
	* 
	* ==> Audit <==
	* |---------|------------------------------------------|------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| Command |                   Args                   |                 Profile                  |          User           | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------|------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| profile | list --output json                       | minikube                                 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:51:15 GMT | Wed, 10 Mar 2021 19:51:17 GMT |
	| -p      | multinode-20210310194323-6496            | multinode-20210310194323-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:51:18 GMT | Wed, 10 Mar 2021 19:51:21 GMT |
	|         | node stop m03                            |                                          |                         |         |                               |                               |
	| -p      | multinode-20210310194323-6496            | multinode-20210310194323-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:51:33 GMT | Wed, 10 Mar 2021 19:52:13 GMT |
	|         | node start m03                           |                                          |                         |         |                               |                               |
	|         | --alsologtostderr                        |                                          |                         |         |                               |                               |
	| -p      | multinode-20210310194323-6496            | multinode-20210310194323-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:52:21 GMT | Wed, 10 Mar 2021 19:52:39 GMT |
	|         | node delete m03                          |                                          |                         |         |                               |                               |
	| -p      | multinode-20210310194323-6496            | multinode-20210310194323-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:52:44 GMT | Wed, 10 Mar 2021 19:53:02 GMT |
	|         | stop                                     |                                          |                         |         |                               |                               |
	| -p      | multinode-20210310194323-6496            | multinode-20210310194323-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:59:04 GMT | Wed, 10 Mar 2021 19:59:20 GMT |
	|         | logs -n 25                               |                                          |                         |         |                               |                               |
	| start   | -p                                       | multinode-20210310194323-6496-m03        | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:59:27 GMT | Wed, 10 Mar 2021 20:02:27 GMT |
	|         | multinode-20210310194323-6496-m03        |                                          |                         |         |                               |                               |
	|         | --driver=docker                          |                                          |                         |         |                               |                               |
	| delete  | -p                                       | multinode-20210310194323-6496-m03        | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:02:30 GMT | Wed, 10 Mar 2021 20:02:41 GMT |
	|         | multinode-20210310194323-6496-m03        |                                          |                         |         |                               |                               |
	| -p      | multinode-20210310194323-6496            | multinode-20210310194323-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:02:45 GMT | Wed, 10 Mar 2021 20:02:59 GMT |
	|         | logs -n 25                               |                                          |                         |         |                               |                               |
	| delete  | -p                                       | multinode-20210310194323-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:03:05 GMT | Wed, 10 Mar 2021 20:03:22 GMT |
	|         | multinode-20210310194323-6496            |                                          |                         |         |                               |                               |
	| start   | -p                                       | test-preload-20210310200323-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:03:23 GMT | Wed, 10 Mar 2021 20:06:49 GMT |
	|         | test-preload-20210310200323-6496         |                                          |                         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr          |                                          |                         |         |                               |                               |
	|         | --wait=true --preload=false              |                                          |                         |         |                               |                               |
	|         | --driver=docker                          |                                          |                         |         |                               |                               |
	|         | --kubernetes-version=v1.17.0             |                                          |                         |         |                               |                               |
	| ssh     | -p                                       | test-preload-20210310200323-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:06:50 GMT | Wed, 10 Mar 2021 20:06:54 GMT |
	|         | test-preload-20210310200323-6496         |                                          |                         |         |                               |                               |
	|         | -- docker pull busybox                   |                                          |                         |         |                               |                               |
	| start   | -p                                       | test-preload-20210310200323-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:06:54 GMT | Wed, 10 Mar 2021 20:08:51 GMT |
	|         | test-preload-20210310200323-6496         |                                          |                         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr          |                                          |                         |         |                               |                               |
	|         | -v=1 --wait=true --driver=docker         |                                          |                         |         |                               |                               |
	|         | --kubernetes-version=v1.17.3             |                                          |                         |         |                               |                               |
	| ssh     | -p                                       | test-preload-20210310200323-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:08:51 GMT | Wed, 10 Mar 2021 20:08:54 GMT |
	|         | test-preload-20210310200323-6496         |                                          |                         |         |                               |                               |
	|         | -- docker images                         |                                          |                         |         |                               |                               |
	| delete  | -p                                       | test-preload-20210310200323-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:08:54 GMT | Wed, 10 Mar 2021 20:09:05 GMT |
	|         | test-preload-20210310200323-6496         |                                          |                         |         |                               |                               |
	| start   | -p                                       | scheduled-stop-20210310200905-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:09:06 GMT | Wed, 10 Mar 2021 20:11:51 GMT |
	|         | scheduled-stop-20210310200905-6496       |                                          |                         |         |                               |                               |
	|         | --memory=1900 --driver=docker            |                                          |                         |         |                               |                               |
	| stop    | -p                                       | scheduled-stop-20210310200905-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:11:52 GMT | Wed, 10 Mar 2021 20:11:54 GMT |
	|         | scheduled-stop-20210310200905-6496       |                                          |                         |         |                               |                               |
	|         | --schedule 5m                            |                                          |                         |         |                               |                               |
	| ssh     | -p                                       | scheduled-stop-20210310200905-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:11:57 GMT | Wed, 10 Mar 2021 20:11:59 GMT |
	|         | scheduled-stop-20210310200905-6496       |                                          |                         |         |                               |                               |
	|         | -- sudo systemctl show                   |                                          |                         |         |                               |                               |
	|         | minikube-scheduled-stop --no-page        |                                          |                         |         |                               |                               |
	| stop    | -p                                       | scheduled-stop-20210310200905-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:12:00 GMT | Wed, 10 Mar 2021 20:12:02 GMT |
	|         | scheduled-stop-20210310200905-6496       |                                          |                         |         |                               |                               |
	|         | --schedule 5s                            |                                          |                         |         |                               |                               |
	| delete  | -p                                       | scheduled-stop-20210310200905-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:12:26 GMT | Wed, 10 Mar 2021 20:12:35 GMT |
	|         | scheduled-stop-20210310200905-6496       |                                          |                         |         |                               |                               |
	| start   | -p                                       | skaffold-20210310201235-6496             | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:12:37 GMT | Wed, 10 Mar 2021 20:15:24 GMT |
	|         | skaffold-20210310201235-6496             |                                          |                         |         |                               |                               |
	|         | --memory=2600 --driver=docker            |                                          |                         |         |                               |                               |
	| -p      | skaffold-20210310201235-6496             | skaffold-20210310201235-6496             | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:15:28 GMT | Wed, 10 Mar 2021 20:15:41 GMT |
	|         | logs -n 25                               |                                          |                         |         |                               |                               |
	| delete  | -p                                       | skaffold-20210310201235-6496             | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:15:46 GMT | Wed, 10 Mar 2021 20:15:57 GMT |
	|         | skaffold-20210310201235-6496             |                                          |                         |         |                               |                               |
	| delete  | -p                                       | insufficient-storage-20210310201557-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:16:29 GMT | Wed, 10 Mar 2021 20:16:37 GMT |
	|         | insufficient-storage-20210310201557-6496 |                                          |                         |         |                               |                               |
	| delete  | -p pause-20210310201637-6496             | pause-20210310201637-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:32:24 GMT | Wed, 10 Mar 2021 20:32:49 GMT |
	|---------|------------------------------------------|------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
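(Editor's note, not part of the captured log: the audit table above wraps long `Args` values onto continuation rows whose `Command` cell is blank. A parsing sketch, assuming only the pipe-delimited layout shown above:)

```python
def parse_audit_table(text):
    """Parse minikube's pipe-delimited audit table into row dicts.

    Wrapped continuation lines (blank Command cell) fold their Args text
    into the previous logical row.
    """
    rows, header = [], None
    for raw in text.splitlines():
        line = raw.strip().lstrip("*").strip()
        # Skip non-table lines and |---| separator rows.
        if not line.startswith("|") or set(line) <= {"|", "-"}:
            continue
        cells = [c.strip() for c in line.strip("|").split("|")]
        if header is None:
            header = cells
        elif cells[0]:                                  # new logical row
            rows.append(dict(zip(header, cells)))
        elif rows and len(cells) > 1 and cells[1]:      # wrapped Args text
            rows[-1]["Args"] += " " + cells[1]
    return rows

sample = """\
|---------|--------------------|----------|
| Command |        Args        | Profile  |
|---------|--------------------|----------|
| profile | list --output json | minikube |
|---------|--------------------|----------|
"""
print(parse_audit_table(sample))
# [{'Command': 'profile', 'Args': 'list --output json', 'Profile': 'minikube'}]
```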
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/03/10 20:32:49
	* Running on machine: windows-server-1
	* Binary: Built with gc go1.16 for windows/amd64
	* Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
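(Editor's note, not part of the captured log: the header above documents the klog line format, `[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg`. A minimal parser sketch for exactly that format:)

```python
import re

# Parse klog-formatted lines: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
KLOG_RE = re.compile(
    r'^(?P<level>[IWEF])'
    r'(?P<month>\d{2})(?P<day>\d{2}) '
    r'(?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+'
    r'(?P<threadid>\d+) '
    r'(?P<file>[^:]+):(?P<line>\d+)\] '
    r'(?P<msg>.*)$'
)

def parse_klog(line):
    """Return the klog fields as a dict, or None for non-matching lines."""
    m = KLOG_RE.match(line)
    return m.groupdict() if m else None

entry = parse_klog("I0310 20:32:49.989452   10404 out.go:239] Setting OutFile to fd 1656 ...")
print(entry["level"], entry["file"], entry["msg"])  # I out.go Setting OutFile to fd 1656 ...
```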
	* I0310 20:32:49.989452   10404 out.go:239] Setting OutFile to fd 1656 ...
	* I0310 20:32:49.991441   10404 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 20:32:49.991441   10404 out.go:252] Setting ErrFile to fd 1884...
	* I0310 20:32:49.991441   10404 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 20:32:49.998461   10404 out.go:246] Setting JSON to false
	* I0310 20:32:50.009072   10404 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":33835,"bootTime":1615374534,"procs":122,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	* W0310 20:32:50.009072   10404 start.go:116] gopshost.Virtualization returned error: not implemented yet
	* I0310 20:32:50.015068   10404 out.go:129] * [cert-options-20210310203249-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	* I0310 20:32:50.018066   10404 out.go:129]   - MINIKUBE_LOCATION=10722
	* I0310 20:32:50.023670   10404 driver.go:323] Setting default libvirt URI to qemu:///system
	* I0310 20:32:50.558759   10404 docker.go:119] docker version: linux-20.10.2
	* I0310 20:32:50.569171   10404 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 20:32:51.560904   10404 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:107 OomKillDisable:true NGoroutines:94 SystemTime:2021-03-10 20:32:51.1217951 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 20:32:47.885715    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.4769761s)
	* I0310 20:32:47.886127    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:32:47.886712    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.3986991s)
	* I0310 20:32:47.887712    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:32:47.978268    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.4443509s)
	* I0310 20:32:47.978268    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:32:48.106559    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.6996221s)
	* I0310 20:32:48.108560    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:32:48.189393    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.6627443s)
	* I0310 20:32:48.190062    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:32:48.218506    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.6868339s)
	* I0310 20:32:48.219034    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:32:48.307999    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.7813513s)
	* I0310 20:32:48.308791    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:32:48.309589    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.7663884s)
	* I0310 20:32:48.310019    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:32:48.319750    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.7141714s)
	* I0310 20:32:48.320459    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:32:48.337368    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.7937415s)
	* I0310 20:32:48.337518    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:32:48.345236    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.802036s)
	* I0310 20:32:48.345736    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:32:48.382186    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.7876169s)
	* I0310 20:32:48.382876    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:32:48.392086    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.8115707s)
	* I0310 20:32:48.392857    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:32:48.429707    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.8464154s)
	* I0310 20:32:48.429956    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:32:48.436193    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.9035253s)
	* I0310 20:32:48.436861    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:32:48.471534    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.9152563s)
	* I0310 20:32:48.472040    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.9403678s)
	* I0310 20:32:48.474443    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:32:48.474943    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:32:48.479574    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.9486279s)
	* I0310 20:32:48.479574    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:32:48.490101    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.8940986s)
	* I0310 20:32:48.490461    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:32:48.492302    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.9566242s)
	* I0310 20:32:48.492573    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:32:48.542464    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (2.0987331s)
	* I0310 20:32:48.543122    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:32:48.556502    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.9477091s)
	* I0310 20:32:48.557155    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:32:48.559617    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.9977489s)
	* I0310 20:32:48.559913    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:32:51.563882   10404 out.go:129] * Using the docker driver based on user configuration
	* I0310 20:32:51.563882   10404 start.go:276] selected driver: docker
	* I0310 20:32:51.563882   10404 start.go:718] validating driver "docker" against <nil>
	* I0310 20:32:51.563882   10404 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	* I0310 20:32:53.510906   10404 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 20:32:54.504209   10404 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:107 OomKillDisable:true NGoroutines:94 SystemTime:2021-03-10 20:32:54.0616979 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://
index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors
:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 20:32:54.504895   10404 start_flags.go:253] no existing cluster config was found, will generate one from the flags 
	* I0310 20:32:54.505387   10404 start_flags.go:699] Wait components to verify : map[apiserver:true system_pods:true]
	* I0310 20:32:54.505387   10404 cni.go:74] Creating CNI manager for ""
	* I0310 20:32:54.505387   10404 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	* I0310 20:32:54.505387   10404 start_flags.go:398] config:
	* {Name:cert-options-20210310203249-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:cert-options-20210310203249-6496 Namespace:default APIServerName:localhost APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNS
Domain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8555 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 20:32:54.509055   10404 out.go:129] * Starting control plane node cert-options-20210310203249-6496 in cluster cert-options-20210310203249-6496
	* I0310 20:32:55.155553   10404 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	* I0310 20:32:55.155553   10404 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	* I0310 20:32:55.156148   10404 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 20:32:55.156148   10404 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 20:32:55.156148   10404 cache.go:54] Caching tarball of preloaded images
	* I0310 20:32:55.156806   10404 preload.go:131] Found C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	* I0310 20:32:55.156806   10404 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on docker
	* I0310 20:32:55.156806   10404 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\cert-options-20210310203249-6496\config.json ...
	* I0310 20:32:55.157237   10404 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\cert-options-20210310203249-6496\config.json: {Name:mkc676091ea5c4db66a2a7573c325cd7d570aa92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 20:32:55.172479   10404 cache.go:185] Successfully downloaded all kic artifacts
	* I0310 20:32:55.172905   10404 start.go:313] acquiring machines lock for cert-options-20210310203249-6496: {Name:mkf574da8f56551e07e112b108156e409065c71d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:32:55.172905   10404 start.go:317] acquired machines lock for "cert-options-20210310203249-6496" in 0s
	* I0310 20:32:55.172905   10404 start.go:89] Provisioning new machine with config: &{Name:cert-options-20210310203249-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:cert-options-20210310203249-6496 Namespace:default APIServerName:localhost AP
IServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8555 NodeName:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8555 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
	* I0310 20:32:55.173429   10404 start.go:126] createHost starting for "" (driver="docker")
	* W0310 20:32:57.532360    6776 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 20:32:57.532360    6776 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432: NewSession: new client: new client: ssh: handshake failed: EOF
	* I0310 20:32:57.532360    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432 (4096 bytes)
	* I0310 20:32:55.178099   10404 out.go:150] * Creating docker container (CPUs=2, Memory=1900MB) ...
	* I0310 20:32:55.179159   10404 start.go:160] libmachine.API.Create for "cert-options-20210310203249-6496" (driver="docker")
	* I0310 20:32:55.179159   10404 client.go:168] LocalClient.Create starting
	* I0310 20:32:55.179831   10404 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\ca.pem
	* I0310 20:32:55.179831   10404 main.go:121] libmachine: Decoding PEM data...
	* I0310 20:32:55.180209   10404 main.go:121] libmachine: Parsing certificate...
	* I0310 20:32:55.180582   10404 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\cert.pem
	* I0310 20:32:55.180582   10404 main.go:121] libmachine: Decoding PEM data...
	* I0310 20:32:55.180582   10404 main.go:121] libmachine: Parsing certificate...
	* I0310 20:32:55.213750   10404 cli_runner.go:115] Run: docker network inspect cert-options-20210310203249-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	* W0310 20:32:55.763593   10404 cli_runner.go:162] docker network inspect cert-options-20210310203249-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	* I0310 20:32:55.771963   10404 network_create.go:240] running [docker network inspect cert-options-20210310203249-6496] to gather additional debugging logs...
	* I0310 20:32:55.772313   10404 cli_runner.go:115] Run: docker network inspect cert-options-20210310203249-6496
	* W0310 20:32:56.338439   10404 cli_runner.go:162] docker network inspect cert-options-20210310203249-6496 returned with exit code 1
	* I0310 20:32:56.338439   10404 network_create.go:243] error running [docker network inspect cert-options-20210310203249-6496]: docker network inspect cert-options-20210310203249-6496: exit status 1
	* stdout:
	* []
	* 
	* stderr:
	* Error: No such network: cert-options-20210310203249-6496
	* I0310 20:32:56.338439   10404 network_create.go:245] output of [docker network inspect cert-options-20210310203249-6496]: -- stdout --
	* []
	* 
	* -- /stdout --
	* ** stderr ** 
	* Error: No such network: cert-options-20210310203249-6496
	* 
	* ** /stderr **
	* I0310 20:32:56.350802   10404 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	* I0310 20:32:56.986898   10404 network.go:193] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	* I0310 20:32:56.986898   10404 network_create.go:91] attempt to create network 192.168.49.0/24 with subnet: cert-options-20210310203249-6496 and gateway 192.168.49.1 and MTU of 1500 ...
	* I0310 20:32:56.995081   10404 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20210310203249-6496
	* W0310 20:32:57.629319   10404 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20210310203249-6496 returned with exit code 1
	* W0310 20:32:57.630297   10404 out.go:191] ! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts
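The subnet bookkeeping in the `network.go:193` line above (Gateway, ClientMin, ClientMax, Broadcast, Netmask for 192.168.49.0/24) can be reproduced with a short sketch. This is not minikube's actual code, just an illustration of how those fields follow from the CIDR:

```python
import ipaddress

# Sketch: derive the per-subnet fields minikube logs when it picks a free
# private subnet (gateway = first usable host, clients = the rest).
def subnet_info(cidr: str) -> dict:
    net = ipaddress.ip_network(cidr)
    hosts = list(net.hosts())  # usable addresses, excludes network/broadcast
    return {
        "gateway": str(hosts[0]),
        "client_min": str(hosts[1]),
        "client_max": str(hosts[-1]),
        "broadcast": str(net.broadcast_address),
        "netmask": str(net.netmask),
    }

info = subnet_info("192.168.49.0/24")
# matches the logged values: gateway 192.168.49.1, clients .2-.254, broadcast .255
```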
	* I0310 20:32:57.650956   10404 cli_runner.go:115] Run: docker ps -a --format 
	* I0310 20:32:58.293467   10404 cli_runner.go:115] Run: docker volume create cert-options-20210310203249-6496 --label name.minikube.sigs.k8s.io=cert-options-20210310203249-6496 --label created_by.minikube.sigs.k8s.io=true
	* I0310 20:32:58.921124   10404 oci.go:102] Successfully created a docker volume cert-options-20210310203249-6496
	* I0310 20:32:58.928825   10404 cli_runner.go:115] Run: docker run --rm --name cert-options-20210310203249-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-options-20210310203249-6496 --entrypoint /usr/bin/test -v cert-options-20210310203249-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib
	* W0310 20:32:58.779029    6776 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 20:32:58.779489    6776 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736: NewSession: new client: new client: ssh: handshake failed: EOF
	* I0310 20:32:58.779703    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736 (4096 bytes)
	* W0310 20:32:59.429833    6776 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 20:32:59.430217    6776 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920: NewSession: new client: new client: ssh: handshake failed: EOF
	* I0310 20:32:59.430217    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920 (4096 bytes)
	* W0310 20:33:00.305148    6776 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 20:33:00.306266    6776 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156: NewSession: new client: new client: ssh: handshake failed: EOF
	* I0310 20:33:00.306495    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156 (4096 bytes)
	* W0310 20:33:00.799167    6776 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 20:33:00.799991    6776 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024: NewSession: new client: new client: ssh: handshake failed: EOF
	* I0310 20:33:00.799991    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024 (4096 bytes)
	* I0310 20:33:03.032893   10404 cli_runner.go:168] Completed: docker run --rm --name cert-options-20210310203249-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-options-20210310203249-6496 --entrypoint /usr/bin/test -v cert-options-20210310203249-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib: (4.1040755s)
	* I0310 20:33:03.032893   10404 oci.go:106] Successfully prepared a docker volume cert-options-20210310203249-6496
	* I0310 20:33:03.033322   10404 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 20:33:03.033682   10404 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 20:33:03.033682   10404 kic.go:175] Starting extracting preloaded images to volume ...
	* I0310 20:33:03.042796   10404 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cert-options-20210310203249-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir
	* I0310 20:33:03.042796   10404 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* W0310 20:33:03.754806   10404 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cert-options-20210310203249-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	* I0310 20:33:03.754806   10404 kic.go:182] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cert-options-20210310203249-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	* stdout:
	* 
	* stderr:
	* docker: Error response from daemon: status code not OK but 500: [garbled binary-serialized System.Exception payload]
	* The notification platform is unavailable.
	* 
	*    at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)
	*    at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.WPF\PromptShareDirectory.cs:line 53
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__8.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 95
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 55
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.HttpApi\Controllers\FilesharingController.cs:line 21
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()
	* ExceptionMethod: Windows.UI.Notifications.ToastNotifier CreateToastNotifier(System.String)
	* Assembly: Windows.UI, Version=255.255.255.255, Culture=neutral, PublicKeyToken=null, ContentType=WindowsRuntime
	* RestrictedDescription: The notification platform is unavailable.
	* See 'docker run --help'.
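The exit-code-125 failure above is Docker Desktop for Windows failing to show its file-sharing consent prompt (the `CreateToastNotifier` / `PromptShareDirectory` stack trace): the host-path bind mount of the preload tarball cannot be shared into the Linux VM. A minimal command fragment to reproduce the mount outside minikube, assuming the same paths and image reference from the log, would be:

```shell
# Command fragment (requires a running Docker Desktop daemon); reproduces the
# bind mount that triggers the file-sharing prompt in the failure above.
docker run --rm --entrypoint /usr/bin/tar \
  -v "C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro" \
  -v cert-options-20210310203249-6496:/extractDir \
  gcr.io/k8s-minikube/kicbase:v0.0.18 \
  -I lz4 -xf /preloaded.tar -C /extractDir
```

Pre-adding the host directory under Docker Desktop's Settings → Resources → File Sharing avoids the interactive prompt, and with it the 500 response seen here.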
	* I0310 20:33:04.065960   10404 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0231665s)
	* I0310 20:33:04.065960   10404 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:108 OomKillDisable:true NGoroutines:95 SystemTime:2021-03-10 20:33:03.613781 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 20:33:04.076130   10404 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	* W0310 20:33:06.840464    6776 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 20:33:06.840464    6776 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172: NewSession: new client: new client: ssh: handshake failed: EOF
	* I0310 20:33:06.840464    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172 (4096 bytes)
	* W0310 20:33:06.840969    6776 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 20:33:06.841408    6776 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464: NewSession: new client: new client: ssh: handshake failed: EOF
	* I0310 20:33:06.841408    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464 (4096 bytes)
	* I0310 20:33:05.133601   10404 cli_runner.go:168] Completed: docker info --format "'{{json .SecurityOptions}}'": (1.0574736s)
	* I0310 20:33:05.151202   10404 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-options-20210310203249-6496 --name cert-options-20210310203249-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-options-20210310203249-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-options-20210310203249-6496 --volume cert-options-20210310203249-6496:/var --security-opt apparmor=unconfined --memory=1900mb --memory-swap=1900mb --cpus=2 -e container=docker --expose 8555 --publish=127.0.0.1::8555 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
	* I0310 20:33:08.530330    7164 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210115023213-8464: (1m7.3685899s)
	* I0310 20:33:10.812366    8464 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force": (29.3010904s)
	* I0310 20:33:10.824404    8464 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	* I0310 20:33:10.961234    8464 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format=
	* I0310 20:33:11.818763    8464 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	* I0310 20:33:11.833514    8464 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	* I0310 20:33:12.107569    8464 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	* stdout:
	* 
	* stderr:
	* ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	* I0310 20:33:12.107730    8464 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	* I0310 20:33:11.696818   10404 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-options-20210310203249-6496 --name cert-options-20210310203249-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-options-20210310203249-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-options-20210310203249-6496 --volume cert-options-20210310203249-6496:/var --security-opt apparmor=unconfined --memory=1900mb --memory-swap=1900mb --cpus=2 -e container=docker --expose 8555 --publish=127.0.0.1::8555 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e: (6.5456287s)
	* I0310 20:33:11.720240   10404 cli_runner.go:115] Run: docker container inspect cert-options-20210310203249-6496 --format=
	* I0310 20:33:12.429316   10404 cli_runner.go:115] Run: docker container inspect cert-options-20210310203249-6496 --format=
	* I0310 20:33:13.037219   10404 cli_runner.go:115] Run: docker exec cert-options-20210310203249-6496 stat /var/lib/dpkg/alternatives/iptables
	* I0310 20:33:13.994372   10404 oci.go:278] the created container "cert-options-20210310203249-6496" has a running status.
	* I0310 20:33:13.994372   10404 kic.go:206] Creating ssh key for kic: C:\Users\jenkins\.minikube\machines\cert-options-20210310203249-6496\id_rsa...
	* I0310 20:33:14.232264   10404 kic_runner.go:188] docker (temp): C:\Users\jenkins\.minikube\machines\cert-options-20210310203249-6496\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	* I0310 20:33:15.250977   10404 cli_runner.go:115] Run: docker container inspect cert-options-20210310203249-6496 --format=
	* I0310 20:33:15.882009   10404 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	* I0310 20:33:15.882009   10404 kic_runner.go:115] Args: [docker exec --privileged cert-options-20210310203249-6496 chown docker:docker /home/docker/.ssh/authorized_keys]
	* I0310 20:33:17.289052   10404 kic_runner.go:124] Done: [docker exec --privileged cert-options-20210310203249-6496 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.4070458s)
	* I0310 20:33:17.296006   10404 kic.go:240] ensuring only current user has permissions to key file located at : C:\Users\jenkins\.minikube\machines\cert-options-20210310203249-6496\id_rsa...
	* I0310 20:33:18.083807   10404 cli_runner.go:115] Run: docker container inspect cert-options-20210310203249-6496 --format=
	* I0310 20:33:18.680753   10404 machine.go:88] provisioning docker machine ...
	* I0310 20:33:18.680998   10404 ubuntu.go:169] provisioning hostname "cert-options-20210310203249-6496"
	* I0310 20:33:18.695471   10404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210310203249-6496
	* I0310 20:33:19.263878   10404 main.go:121] libmachine: Using SSH client type: native
	* I0310 20:33:19.279583   10404 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55123 <nil> <nil>}
	* I0310 20:33:19.279583   10404 main.go:121] libmachine: About to run SSH command:
	* sudo hostname cert-options-20210310203249-6496 && echo "cert-options-20210310203249-6496" | sudo tee /etc/hostname
	* I0310 20:33:23.642808    7808 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210308233820-5396: (3m50.3568991s)
	* I0310 20:33:23.643075    7808 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3m51.1551442s)
	* I0310 20:33:23.643268    7808 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210119220838-6552: (3m50.3515666s)
	* I0310 20:33:20.480069   10404 main.go:121] libmachine: SSH cmd err, output: <nil>: cert-options-20210310203249-6496
	* 
	* I0310 20:33:20.494578   10404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210310203249-6496
	* I0310 20:33:21.147042   10404 main.go:121] libmachine: Using SSH client type: native
	* I0310 20:33:21.147458   10404 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55123 <nil> <nil>}
	* I0310 20:33:21.147458   10404 main.go:121] libmachine: About to run SSH command:
	* 
	* 		if ! grep -xq '.*\scert-options-20210310203249-6496' /etc/hosts; then
	* 			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	* 				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-options-20210310203249-6496/g' /etc/hosts;
	* 			else 
	* 				echo '127.0.1.1 cert-options-20210310203249-6496' | sudo tee -a /etc/hosts; 
	* 			fi
	* 		fi
	* I0310 20:33:21.845110   10404 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	* I0310 20:33:21.845110   10404 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	* I0310 20:33:21.845110   10404 ubuntu.go:177] setting up certificates
	* I0310 20:33:21.845110   10404 provision.go:83] configureAuth start
	* I0310 20:33:21.864017   10404 cli_runner.go:115] Run: docker container inspect -f "" cert-options-20210310203249-6496
	* I0310 20:33:22.474222   10404 provision.go:137] copyHostCerts
	* I0310 20:33:22.475495   10404 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	* I0310 20:33:22.475495   10404 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	* I0310 20:33:22.475930   10404 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	* I0310 20:33:22.479482   10404 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	* I0310 20:33:22.479482   10404 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	* I0310 20:33:22.479848   10404 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	* I0310 20:33:22.490219   10404 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	* I0310 20:33:22.490219   10404 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	* I0310 20:33:22.490881   10404 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	* I0310 20:33:22.500421   10404 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.cert-options-20210310203249-6496 san=[172.17.0.5 127.0.0.1 localhost 127.0.0.1 minikube cert-options-20210310203249-6496]
	* I0310 20:33:22.697186   10404 provision.go:165] copyRemoteCerts
	* I0310 20:33:22.713224   10404 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	* I0310 20:33:22.722583   10404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210310203249-6496
	* I0310 20:33:23.371237   10404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55123 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cert-options-20210310203249-6496\id_rsa Username:docker}
	* I0310 20:33:23.649460    7808 out.go:129] * Enabled addons: default-storageclass
	* I0310 20:33:23.649460    7808 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210213143925-7440: (3m50.0442552s)
	* I0310 20:33:23.643441    7808 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210120175851-7432: (3m50.2962424s)
	* I0310 20:33:23.643803    7808 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210212145109-352: (3m50.2727647s)
	* I0310 20:33:23.649460    7808 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492: (3m49.6119299s)
	* I0310 20:33:23.643803    7808 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210303214129-4588: (3m50.2286351s)
	* I0310 20:33:23.644111    7808 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210115023213-8464: (3m50.2306725s)
	* I0310 20:33:23.649661    7808 addons.go:383] enableAddons completed in 3m57.2078049s
	* I0310 20:33:23.644273    7808 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210301195830-5700: (3m50.293211s)
	* I0310 20:33:23.649661    7808 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492': No such file or directory
	* I0310 20:33:23.644273    7808 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210310083645-5040: (3m50.1735205s)
	* I0310 20:33:23.649661    7808 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210309234032-4944: (3m49.8944162s)
	* I0310 20:33:23.644449    7808 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210224014800-800: (3m50.130294s)
	* I0310 20:33:23.649890    7808 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492 (4096 bytes)
	* I0310 20:33:23.644449    7808 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210120231122-7024: (3m50.1292998s)
	* I0310 20:33:23.650026    7808 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512: (3m49.4746583s)
	* I0310 20:33:23.644449    7808 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210128021318-232: (3m50.0178521s)
	* I0310 20:33:23.650153    7808 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512': No such file or directory
	* I0310 20:33:23.644781    7808 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516: (3m45.0994287s)
	* I0310 20:33:23.644999    7808 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748: (3m49.5583691s)
	* I0310 20:33:23.644999    7808 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210220004129-7452: (3m50.0484932s)
	* I0310 20:33:23.650153    7808 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516': No such file or directory
	* I0310 20:33:23.650153    7808 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748': No such file or directory
	* I0310 20:33:23.645606    7808 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352: (3m39.9985012s)
	* I0310 20:33:23.650153    7808 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352': No such file or directory
	* I0310 20:33:23.645606    7808 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040: (3m39.8844792s)
	* I0310 20:33:23.650153    7808 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516 (4096 bytes)
	* I0310 20:33:23.650153    7808 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040': No such file or directory
	* I0310 20:33:23.650153    7808 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352 (4096 bytes)
	* I0310 20:33:23.650153    7808 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512 (4096 bytes)
	* I0310 20:33:23.650153    7808 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748 (4096 bytes)
	* I0310 20:33:23.645606    7808 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140: (3m39.8844792s)
	* I0310 20:33:23.650485    7808 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140': No such file or directory
	* I0310 20:33:23.650485    7808 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040 (4096 bytes)
	* I0310 20:33:23.650485    7808 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140 (4096 bytes)
	* I0310 20:33:23.645849    7808 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372: (3m39.8808246s)
	* I0310 20:33:23.650814    7808 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372': No such file or directory
	* I0310 20:33:23.645849    7808 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210306072141-12056: (3m50.0245603s)
	* I0310 20:33:23.651049    7808 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372 (4096 bytes)
	* I0310 20:33:23.646145    7808 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800: (3m40.1329792s)
	* I0310 20:33:23.651049    7808 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800': No such file or directory
	* I0310 20:33:23.646341    7808 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920: (3m40.129932s)
	* I0310 20:33:23.651049    7808 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800 (4096 bytes)
	* I0310 20:33:23.646341    7808 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496: (3m40.1260405s)
	* I0310 20:33:23.651049    7808 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920': No such file or directory
	* I0310 20:33:23.651049    7808 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496': No such file or directory
	* I0310 20:33:23.646341    7808 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396: (3m40.0623027s)
	* I0310 20:33:23.651049    7808 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920 (4096 bytes)
	* I0310 20:33:23.651049    7808 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396': No such file or directory
	* I0310 20:33:23.646703    7808 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552: (3m40.1093106s)
	* I0310 20:33:23.651503    7808 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496 (4096 bytes)
	* I0310 20:33:23.651503    7808 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552': No such file or directory
	* I0310 20:33:23.646863    7808 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984: (3m49.5006482s)
	* I0310 20:33:23.651503    7808 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396 (4096 bytes)
	* I0310 20:33:23.651503    7808 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984': No such file or directory
	* I0310 20:33:23.651503    7808 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552 (4096 bytes)
	* I0310 20:33:23.647024    7808 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210225231842-5736: (3m49.8784948s)
	* I0310 20:33:23.647024    7808 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856: (3m49.5872587s)
	* I0310 20:33:23.647342    7808 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210219220622-3920: (3m49.8020001s)
	* I0310 20:33:23.651503    7808 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984 (4096 bytes)
	* I0310 20:33:23.647342    7808 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210114204234-6692: (3m49.7527588s)
	* I0310 20:33:23.651503    7808 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856': No such file or directory
	* I0310 20:33:23.647681    7808 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210310191609-6496: (3m49.8696733s)
	* I0310 20:33:23.647681    7808 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088: (3m49.5092458s)
	* I0310 20:33:23.647681    7808 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210126212539-5172: (3m49.9009973s)
	* I0310 20:33:23.651803    7808 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856 (4096 bytes)
	* I0310 20:33:23.651803    7808 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088': No such file or directory
	* I0310 20:33:23.648062    7808 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210304002630-1156: (3m49.9687888s)
	* I0310 20:33:23.648062    7808 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210304184021-4052: (3m49.7531884s)
	* I0310 20:33:23.652081    7808 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088 (4096 bytes)
	* I0310 20:33:23.648359    7808 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210219145454-9520: (3m49.8931145s)
	* I0310 20:33:23.648677    7808 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210120214442-10992: (3m50.08395s)
	* I0310 20:33:23.648677    7808 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210123004019-5372: (3m49.9406608s)
	* I0310 20:33:23.649234    7808 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160: (3m49.6118163s)
	* I0310 20:33:23.649234    7808 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210120022529-1140: (3m50.0660696s)
	* I0310 20:33:23.652465    7808 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160': No such file or directory
	* I0310 20:33:23.652603    7808 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160 (4096 bytes)
	* W0310 20:33:23.669624    7808 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 20:33:23.670011    7808 retry.go:31] will retry after 199.270641ms: ssh: rejected: connect failed (open failed)
	* W0310 20:33:23.670411    7808 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 20:33:23.670411    7808 retry.go:31] will retry after 313.143259ms: ssh: rejected: connect failed (open failed)
	* W0310 20:33:23.670411    7808 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 20:33:23.670411    7808 retry.go:31] will retry after 176.645665ms: ssh: rejected: connect failed (open failed)
	* W0310 20:33:23.670411    7808 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 20:33:23.670411    7808 retry.go:31] will retry after 341.333754ms: ssh: rejected: connect failed (open failed)
	* W0310 20:33:23.670411    7808 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 20:33:23.670411    7808 retry.go:31] will retry after 299.179792ms: ssh: rejected: connect failed (open failed)
	* W0310 20:33:23.670411    7808 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 20:33:23.670411    7808 retry.go:31] will retry after 255.955077ms: ssh: rejected: connect failed (open failed)
	* W0310 20:33:23.670411    7808 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 20:33:23.670411    7808 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 20:33:23.670411    7808 retry.go:31] will retry after 132.07577ms: ssh: rejected: connect failed (open failed)
	* I0310 20:33:23.670411    7808 retry.go:31] will retry after 164.582069ms: ssh: rejected: connect failed (open failed)
	* I0310 20:33:23.814837    7808 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20210310201637-6496
	* I0310 20:33:23.858228    7808 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20210310201637-6496
	* I0310 20:33:23.860553    7808 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20210310201637-6496
	* I0310 20:33:23.900870    7808 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20210310201637-6496
	* I0310 20:33:23.934719    7808 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20210310201637-6496
	* I0310 20:33:23.986584    7808 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20210310201637-6496
	* I0310 20:33:24.003246    7808 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20210310201637-6496
	* I0310 20:33:24.039287    7808 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20210310201637-6496
	* I0310 20:33:24.642617    7808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55099 SSHKeyPath:C:\Users\jenkins\.minikube\machines\nospam-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:33:24.734301    7808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55099 SSHKeyPath:C:\Users\jenkins\.minikube\machines\nospam-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:33:24.744707    7808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55099 SSHKeyPath:C:\Users\jenkins\.minikube\machines\nospam-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:33:24.826240    7808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55099 SSHKeyPath:C:\Users\jenkins\.minikube\machines\nospam-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:33:24.831848    7808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55099 SSHKeyPath:C:\Users\jenkins\.minikube\machines\nospam-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:33:24.868978    7808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55099 SSHKeyPath:C:\Users\jenkins\.minikube\machines\nospam-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:33:24.918446    7808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55099 SSHKeyPath:C:\Users\jenkins\.minikube\machines\nospam-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:33:24.944198    7808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55099 SSHKeyPath:C:\Users\jenkins\.minikube\machines\nospam-20210310201637-6496\id_rsa Username:docker}
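After each retry, the client re-resolves the container's published SSH port with `docker container inspect -f` and a Go template. The same template expression can be evaluated against an in-memory struct; a sketch, assuming field names matching `docker inspect`'s JSON output:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// portBinding mirrors one element of NetworkSettings.Ports in
// `docker inspect` output (field names assumed from that JSON).
type portBinding struct{ HostIP, HostPort string }

type container struct {
	NetworkSettings struct {
		Ports map[string][]portBinding
	}
}

// hostPort evaluates the same template used by the inspect calls
// in the log: index the Ports map by "22/tcp", take the first
// binding, and read its HostPort.
func hostPort(c container) string {
	tmpl := template.Must(template.New("port").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, c); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	var c container
	c.NetworkSettings.Ports = map[string][]portBinding{
		"22/tcp": {{HostIP: "127.0.0.1", HostPort: "55099"}},
	}
	fmt.Println(hostPort(c)) // prints 55099
}
```

This is why every `cli_runner` inspect line is followed by a `sshutil` line reconnecting to `127.0.0.1` on the resolved port.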
	* I0310 20:33:23.804429    7164 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088
	* I0310 20:33:23.814146    7164 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088
	* I0310 20:33:26.020510    7164 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210225231842-5736: (1m24.6291663s)
	* I0310 20:33:26.021356    7164 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210115191024-3516: (1m24.7275232s)
	* I0310 20:33:26.021356    7164 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210212145109-352: (1m24.7836673s)
	* I0310 20:33:26.020510    7164 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210123004019-5372: (1m24.5905679s)
	* I0310 20:33:26.698165    7164 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210120231122-7024: (1m25.4382904s)
	* I0310 20:33:26.698165    7164 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210304002630-1156: (1m25.3068222s)
	* I0310 20:33:26.713832    7164 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210310083645-5040: (1m25.4447615s)
	* I0310 20:33:28.505023   10404 ssh_runner.go:189] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.7918107s)
	* I0310 20:33:28.505956   10404 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	* I0310 20:33:29.278512   10404 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1269 bytes)
	* I0310 20:33:29.646678   10404 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	* I0310 20:33:32.155785    6776 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920
	* I0310 20:33:32.168512    6776 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920
	* I0310 20:33:28.140507    7164 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210126212539-5172: (1m26.8373215s)
	* I0310 20:33:28.140507    7164 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210304184021-4052: (1m26.7634091s)
	* I0310 20:33:28.140908    7164 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440: (1m22.3446847s)
	* I0310 20:33:28.140908    7164 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700: (1m20.2796374s)
	* I0310 20:33:28.140908    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440 (4096 bytes)
	* I0310 20:33:28.140908    7164 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452: (1m19.9907041s)
	* I0310 20:33:28.140908    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452 (4096 bytes)
	* I0310 20:33:28.141456    7164 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944: (1m20.1120107s)
	* I0310 20:33:28.141456    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944 (4096 bytes)
	* I0310 20:33:28.141456    7164 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520: (1m19.9869119s)
	* I0310 20:33:28.141994    7164 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800: (1m20.126941s)
	* I0310 20:33:28.141994    7164 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588: (1m20.1151886s)
	* I0310 20:33:28.141994    7164 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172: (1m16.8404222s)
	* I0310 20:33:28.141994    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800 (4096 bytes)
	* I0310 20:33:28.142486    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588 (4096 bytes)
	* I0310 20:33:28.142486    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172 (4096 bytes)
	* I0310 20:33:28.142486    7164 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992: (1m20.040944s)
	* I0310 20:33:28.140908    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700 (4096 bytes)
	* I0310 20:33:28.141994    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520 (4096 bytes)
	* I0310 20:33:28.142486    7164 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552: (1m20.1072398s)
	* I0310 20:33:28.142486    7164 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040: (1m17.0123963s)
	* I0310 20:33:28.142486    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992 (4096 bytes)
	* I0310 20:33:28.141456    7164 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140: (1m20.109303s)
	* I0310 20:33:28.142486    7164 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232: (1m20.2861818s)
	* I0310 20:33:28.142486    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040 (4096 bytes)
	* I0310 20:33:28.142486    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552 (4096 bytes)
	* I0310 20:33:28.142922    7164 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736: (1m16.8876939s)
	* I0310 20:33:28.141456    7164 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920: (1m20.2246564s)
	* I0310 20:33:28.142922    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140 (4096 bytes)
	* I0310 20:33:28.142922    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232 (4096 bytes)
	* I0310 20:33:28.141994    7164 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052: (1m16.9527654s)
	* I0310 20:33:28.141994    7164 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432: (1m20.0180183s)
	* I0310 20:33:28.151389    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432 (4096 bytes)
	* I0310 20:33:28.142922    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920 (4096 bytes)
	* I0310 20:33:28.142922    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736 (4096 bytes)
	* I0310 20:33:28.142922    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052 (4096 bytes)
	* W0310 20:33:28.179536    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 20:33:28.179536    7164 retry.go:31] will retry after 276.165072ms: ssh: rejected: connect failed (open failed)
	* W0310 20:33:28.180026    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 20:33:28.180026    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 20:33:28.180026    7164 retry.go:31] will retry after 360.127272ms: ssh: rejected: connect failed (open failed)
	* I0310 20:33:28.180026    7164 retry.go:31] will retry after 291.140013ms: ssh: rejected: connect failed (open failed)
	* W0310 20:33:28.180338    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 20:33:28.180338    7164 retry.go:31] will retry after 234.428547ms: ssh: rejected: connect failed (open failed)
	* W0310 20:33:28.180338    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 20:33:28.180798    7164 retry.go:31] will retry after 231.159374ms: ssh: rejected: connect failed (open failed)
	* W0310 20:33:28.180338    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 20:33:28.180338    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 20:33:28.181072    7164 retry.go:31] will retry after 141.409254ms: ssh: rejected: connect failed (open failed)
	* W0310 20:33:28.180338    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 20:33:28.181316    7164 retry.go:31] will retry after 164.129813ms: ssh: rejected: connect failed (open failed)
	* W0310 20:33:28.180338    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 20:33:28.181316    7164 retry.go:31] will retry after 149.242379ms: ssh: rejected: connect failed (open failed)
	* I0310 20:33:28.180798    7164 retry.go:31] will retry after 296.705768ms: ssh: rejected: connect failed (open failed)
	* I0310 20:33:28.334511    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	* I0310 20:33:28.340463    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	* I0310 20:33:28.359901    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	* I0310 20:33:28.423340    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	* I0310 20:33:28.423340    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	* I0310 20:33:28.469280    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	* I0310 20:33:28.496050    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	* I0310 20:33:28.520697    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	* I0310 20:33:28.559931    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	* I0310 20:33:29.257542    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:33:29.313487    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:33:29.314782    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:33:29.345908    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:33:29.405877    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:33:29.417399    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:33:29.431183    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:33:29.448450    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:33:29.504625    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:33:30.017702   10404 provision.go:86] duration metric: configureAuth took 8.172607s
	* I0310 20:33:30.017702   10404 ubuntu.go:193] setting minikube options for container-runtime
	* I0310 20:33:30.026933   10404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210310203249-6496
	* I0310 20:33:30.687397   10404 main.go:121] libmachine: Using SSH client type: native
	* I0310 20:33:30.688173   10404 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55123 <nil> <nil>}
	* I0310 20:33:30.688173   10404 main.go:121] libmachine: About to run SSH command:
	* df --output=fstype / | tail -n 1
	* I0310 20:33:31.168639   10404 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	* 
	* I0310 20:33:31.168639   10404 ubuntu.go:71] root file system type: overlay
	* I0310 20:33:31.169432   10404 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	* I0310 20:33:31.179470   10404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210310203249-6496
	* I0310 20:33:31.742514   10404 main.go:121] libmachine: Using SSH client type: native
	* I0310 20:33:31.742884   10404 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55123 <nil> <nil>}
	* I0310 20:33:31.742884   10404 main.go:121] libmachine: About to run SSH command:
	* sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP \$MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* " | sudo tee /lib/systemd/system/docker.service.new
	* I0310 20:33:32.258533   10404 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP $MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* 
	* I0310 20:33:32.266020   10404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210310203249-6496
	* I0310 20:33:32.862706   10404 main.go:121] libmachine: Using SSH client type: native
	* I0310 20:33:32.863338   10404 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55123 <nil> <nil>}
	* I0310 20:33:32.863338   10404 main.go:121] libmachine: About to run SSH command:
	* sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
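The single SSH command above is an idempotent update: `diff -u` exits 0 when the installed unit already matches the rendered one, so the move/reload/restart branch only runs when the file actually changed. A sketch of how a Go caller might assemble that command (the helper name is hypothetical):

```go
package main

import "fmt"

// updateUnitCmd renders the compare-then-swap command seen in the
// log: diff succeeds (exit 0) when old and new units match, and
// only on a mismatch does the right-hand side replace the unit and
// restart the service.
func updateUnitCmd(unit string) string {
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || "+
			"{ sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && "+
			"sudo systemctl -f enable docker && "+
			"sudo systemctl -f restart docker; }",
		unit)
}

func main() {
	fmt.Println(updateUnitCmd("/lib/systemd/system/docker.service"))
}
```

In this run the diff is non-empty (shown below in the command output), so the restart branch fires, which accounts for the ~20s gap before the next log line.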
	* I0310 20:33:49.554961    7808 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520: (1m20.8455664s)
	* I0310 20:33:49.554961    7808 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 from cache
	* I0310 20:33:49.554961    7808 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052
	* I0310 20:33:49.569664    7808 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052
	* I0310 20:33:52.233878   10404 main.go:121] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-01-29 14:31:32.000000000 +0000
	* +++ /lib/systemd/system/docker.service.new	2021-03-10 20:33:32.244220000 +0000
	* @@ -1,30 +1,32 @@
	*  [Unit]
	*  Description=Docker Application Container Engine
	*  Documentation=https://docs.docker.com
	* +BindsTo=containerd.service
	*  After=network-online.target firewalld.service containerd.service
	*  Wants=network-online.target
	* -Requires=docker.socket containerd.service
	* +Requires=docker.socket
	* +StartLimitBurst=3
	* +StartLimitIntervalSec=60
	*  
	*  [Service]
	*  Type=notify
	* -# the default is not to use systemd for cgroups because the delegate issues still
	* -# exists and systemd currently does not support the cgroup feature set required
	* -# for containers run by docker
	* -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	* -ExecReload=/bin/kill -s HUP $MAINPID
	* -TimeoutSec=0
	* -RestartSec=2
	* -Restart=always
	* -
	* -# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	* -# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	* -# to make them work for either version of systemd.
	* -StartLimitBurst=3
	* +Restart=on-failure
	*  
	* -# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	* -# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	* -# this option work for either version of systemd.
	* -StartLimitInterval=60s
	* +
	* +
	* +# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* +# The base configuration already specifies an 'ExecStart=...' command. The first directive
	* +# here is to clear out that command inherited from the base configuration. Without this,
	* +# the command from the base configuration and the command specified here are treated as
	* +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* +# will catch this invalid input and refuse to start the service with an error like:
	* +#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* +
	* +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* +# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* +ExecStart=
	* +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* +ExecReload=/bin/kill -s HUP $MAINPID
	*  
	*  # Having non-zero Limit*s causes performance problems due to accounting overhead
	*  # in the kernel. We recommend using cgroups to do container-local accounting.
	* @@ -32,16 +34,16 @@
	*  LimitNPROC=infinity
	*  LimitCORE=infinity
	*  
	* -# Comment TasksMax if your systemd version does not support it.
	* -# Only systemd 226 and above support this option.
	* +# Uncomment TasksMax if your systemd version supports it.
	* +# Only systemd 226 and above support this version.
	*  TasksMax=infinity
	* +TimeoutStartSec=0
	*  
	*  # set delegate yes so that systemd does not reset the cgroups of docker containers
	*  Delegate=yes
	*  
	*  # kill only the docker process, not all processes in the cgroup
	*  KillMode=process
	* -OOMScoreAdjust=-500
	*  
	*  [Install]
	*  WantedBy=multi-user.target
	* Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	* Executing: /lib/systemd/systemd-sysv-install enable docker
	* 
	* I0310 20:33:52.233878   10404 machine.go:91] provisioned docker machine in 33.5531891s
	* I0310 20:33:52.233878   10404 client.go:171] LocalClient.Create took 57.0548299s
	* I0310 20:33:52.233878   10404 start.go:168] duration metric: libmachine.API.Create for "cert-options-20210310203249-6496" took 57.0548299s
	* I0310 20:33:52.233878   10404 start.go:267] post-start starting for "cert-options-20210310203249-6496" (driver="docker")
	* I0310 20:33:52.233878   10404 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	* I0310 20:33:52.244365   10404 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	* I0310 20:33:52.251874   10404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210310203249-6496
	* I0310 20:33:52.919439   10404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55123 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cert-options-20210310203249-6496\id_rsa Username:docker}
	* I0310 20:33:53.284563   10404 ssh_runner.go:189] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.0396742s)
	* I0310 20:33:53.294246   10404 ssh_runner.go:149] Run: cat /etc/os-release
	* I0310 20:33:53.329060   10404 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	* I0310 20:33:53.329060   10404 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	* I0310 20:33:53.329060   10404 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	* I0310 20:33:53.329060   10404 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	* I0310 20:33:53.329060   10404 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	* I0310 20:33:53.329781   10404 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	* I0310 20:33:53.332616   10404 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	* I0310 20:33:53.341315   10404 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	* I0310 20:33:53.354265   10404 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	* I0310 20:33:53.413734   10404 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	* I0310 20:33:53.738403   10404 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	* I0310 20:33:53.962739   10404 start.go:270] post-start completed in 1.7288644s
	* I0310 20:33:54.006391   10404 cli_runner.go:115] Run: docker container inspect -f "" cert-options-20210310203249-6496

                                                
                                                
-- /stdout --
** stderr ** 
	E0310 20:33:45.606341   13148 out.go:340] unable to execute * 2021-03-10 20:31:13.459794 W | etcdserver: request "header:<ID:912955418950842472 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/coredns-74ff55c5b-mx7ng\" mod_revision:421 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-74ff55c5b-mx7ng\" value_size:3756 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-74ff55c5b-mx7ng\" > >>" with result "size:16" took too long (166.5948ms) to execute
	: html/template:* 2021-03-10 20:31:13.459794 W | etcdserver: request "header:<ID:912955418950842472 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/coredns-74ff55c5b-mx7ng\" mod_revision:421 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-74ff55c5b-mx7ng\" value_size:3756 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-74ff55c5b-mx7ng\" > >>" with result "size:16" took too long (166.5948ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 20:33:45.625447   13148 out.go:340] unable to execute * 2021-03-10 20:31:13.868655 W | etcdserver: request "header:<ID:912955418950842474 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-74ff55c5b\" mod_revision:391 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-74ff55c5b\" value_size:3598 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-74ff55c5b\" > >>" with result "size:16" took too long (408.3246ms) to execute
	: html/template:* 2021-03-10 20:31:13.868655 W | etcdserver: request "header:<ID:912955418950842474 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-74ff55c5b\" mod_revision:391 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-74ff55c5b\" value_size:3598 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-74ff55c5b\" > >>" with result "size:16" took too long (408.3246ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 20:33:45.633475   13148 out.go:340] unable to execute * 2021-03-10 20:31:14.089028 W | etcdserver: request "header:<ID:912955418950842477 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/default/default\" mod_revision:428 > success:<request_put:<key:\"/registry/serviceaccounts/default/default\" value_size:145 >> failure:<request_range:<key:\"/registry/serviceaccounts/default/default\" > >>" with result "size:16" took too long (219.8511ms) to execute
	: html/template:* 2021-03-10 20:31:14.089028 W | etcdserver: request "header:<ID:912955418950842477 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/default/default\" mod_revision:428 > success:<request_put:<key:\"/registry/serviceaccounts/default/default\" value_size:145 >> failure:<request_range:<key:\"/registry/serviceaccounts/default/default\" > >>" with result "size:16" took too long (219.8511ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 20:33:45.697054   13148 out.go:340] unable to execute * 2021-03-10 20:31:28.673604 W | etcdserver: request "header:<ID:912955418950842546 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:457 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:3848 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >>" with result "size:16" took too long (184.5767ms) to execute
	: html/template:* 2021-03-10 20:31:28.673604 W | etcdserver: request "header:<ID:912955418950842546 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:457 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:3848 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >>" with result "size:16" took too long (184.5767ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 20:33:45.735879   13148 out.go:340] unable to execute * 2021-03-10 20:32:26.434086 W | etcdserver: request "header:<ID:912955418950842726 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-74ff55c5b-mx7ng.166b1554f1a897c4\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-74ff55c5b-mx7ng.166b1554f1a897c4\" value_size:703 lease:912955418950842663 >> failure:<>>" with result "size:16" took too long (174.4833ms) to execute
	: html/template:* 2021-03-10 20:32:26.434086 W | etcdserver: request "header:<ID:912955418950842726 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-74ff55c5b-mx7ng.166b1554f1a897c4\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-74ff55c5b-mx7ng.166b1554f1a897c4\" value_size:703 lease:912955418950842663 >> failure:<>>" with result "size:16" took too long (174.4833ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 20:33:54.304455   13148 out.go:335] unable to parse "* I0310 20:32:50.569171   10404 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 20:32:50.569171   10404 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 20:33:54.315819   13148 out.go:340] unable to execute * I0310 20:32:47.885715    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.4769761s)
	: template: * I0310 20:32:47.885715    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.4769761s)
	:1:102: executing "* I0310 20:32:47.885715    6776 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" force-systemd-env-20210310201637-6496: (1.4769761s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:54.323800   13148 out.go:340] unable to execute * I0310 20:32:47.886712    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.3986991s)
	: template: * I0310 20:32:47.886712    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.3986991s)
	:1:102: executing "* I0310 20:32:47.886712    6776 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" force-systemd-env-20210310201637-6496: (1.3986991s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:54.331796   13148 out.go:340] unable to execute * I0310 20:32:47.978268    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.4443509s)
	: template: * I0310 20:32:47.978268    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.4443509s)
	:1:102: executing "* I0310 20:32:47.978268    6776 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" force-systemd-env-20210310201637-6496: (1.4443509s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:54.347160   13148 out.go:340] unable to execute * I0310 20:32:48.106559    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.6996221s)
	: template: * I0310 20:32:48.106559    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.6996221s)
	:1:102: executing "* I0310 20:32:48.106559    6776 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" force-systemd-env-20210310201637-6496: (1.6996221s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:54.358082   13148 out.go:340] unable to execute * I0310 20:32:48.189393    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.6627443s)
	: template: * I0310 20:32:48.189393    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.6627443s)
	:1:102: executing "* I0310 20:32:48.189393    6776 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" force-systemd-env-20210310201637-6496: (1.6627443s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:54.369063   13148 out.go:340] unable to execute * I0310 20:32:48.218506    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.6868339s)
	: template: * I0310 20:32:48.218506    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.6868339s)
	:1:102: executing "* I0310 20:32:48.218506    6776 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" force-systemd-env-20210310201637-6496: (1.6868339s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:54.384411   13148 out.go:340] unable to execute * I0310 20:32:48.307999    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.7813513s)
	: template: * I0310 20:32:48.307999    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.7813513s)
	:1:102: executing "* I0310 20:32:48.307999    6776 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" force-systemd-env-20210310201637-6496: (1.7813513s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:54.402933   13148 out.go:340] unable to execute * I0310 20:32:48.309589    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.7663884s)
	: template: * I0310 20:32:48.309589    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.7663884s)
	:1:102: executing "* I0310 20:32:48.309589    6776 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" force-systemd-env-20210310201637-6496: (1.7663884s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:54.423620   13148 out.go:340] unable to execute * I0310 20:32:48.319750    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.7141714s)
	: template: * I0310 20:32:48.319750    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.7141714s)
	:1:102: executing "* I0310 20:32:48.319750    6776 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" force-systemd-env-20210310201637-6496: (1.7141714s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:54.432758   13148 out.go:340] unable to execute * I0310 20:32:48.337368    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.7937415s)
	: template: * I0310 20:32:48.337368    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.7937415s)
	:1:102: executing "* I0310 20:32:48.337368    6776 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" force-systemd-env-20210310201637-6496: (1.7937415s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:54.443757   13148 out.go:340] unable to execute * I0310 20:32:48.345236    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.802036s)
	: template: * I0310 20:32:48.345236    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.802036s)
	:1:102: executing "* I0310 20:32:48.345236    6776 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" force-systemd-env-20210310201637-6496: (1.802036s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:54.467518   13148 out.go:340] unable to execute * I0310 20:32:48.382186    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.7876169s)
	: template: * I0310 20:32:48.382186    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.7876169s)
	:1:102: executing "* I0310 20:32:48.382186    6776 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" force-systemd-env-20210310201637-6496: (1.7876169s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:54.478510   13148 out.go:340] unable to execute * I0310 20:32:48.392086    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.8115707s)
	: template: * I0310 20:32:48.392086    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.8115707s)
	:1:102: executing "* I0310 20:32:48.392086    6776 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" force-systemd-env-20210310201637-6496: (1.8115707s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:54.489114   13148 out.go:340] unable to execute * I0310 20:32:48.429707    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.8464154s)
	: template: * I0310 20:32:48.429707    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.8464154s)
	:1:102: executing "* I0310 20:32:48.429707    6776 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" force-systemd-env-20210310201637-6496: (1.8464154s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:54.508486   13148 out.go:340] unable to execute * I0310 20:32:48.436193    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.9035253s)
	: template: * I0310 20:32:48.436193    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.9035253s)
	:1:102: executing "* I0310 20:32:48.436193    6776 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" force-systemd-env-20210310201637-6496: (1.9035253s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:54.519410   13148 out.go:340] unable to execute * I0310 20:32:48.471534    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.9152563s)
	: template: * I0310 20:32:48.471534    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.9152563s)
	:1:102: executing "* I0310 20:32:48.471534    6776 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" force-systemd-env-20210310201637-6496: (1.9152563s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:54.527279   13148 out.go:340] unable to execute * I0310 20:32:48.472040    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.9403678s)
	: template: * I0310 20:32:48.472040    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.9403678s)
	:1:102: executing "* I0310 20:32:48.472040    6776 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" force-systemd-env-20210310201637-6496: (1.9403678s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:54.541753   13148 out.go:340] unable to execute * I0310 20:32:48.479574    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.9486279s)
	: template: * I0310 20:32:48.479574    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.9486279s)
	:1:102: executing "* I0310 20:32:48.479574    6776 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" force-systemd-env-20210310201637-6496: (1.9486279s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:54.552256   13148 out.go:340] unable to execute * I0310 20:32:48.490101    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.8940986s)
	: template: * I0310 20:32:48.490101    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.8940986s)
	:1:102: executing "* I0310 20:32:48.490101    6776 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" force-systemd-env-20210310201637-6496: (1.8940986s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:54.566022   13148 out.go:340] unable to execute * I0310 20:32:48.492302    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.9566242s)
	: template: * I0310 20:32:48.492302    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.9566242s)
	:1:102: executing "* I0310 20:32:48.492302    6776 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" force-systemd-env-20210310201637-6496: (1.9566242s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:54.583099   13148 out.go:340] unable to execute * I0310 20:32:48.542464    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (2.0987331s)
	: template: * I0310 20:32:48.542464    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (2.0987331s)
	:1:102: executing "* I0310 20:32:48.542464    6776 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" force-systemd-env-20210310201637-6496: (2.0987331s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:54.595808   13148 out.go:340] unable to execute * I0310 20:32:48.556502    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.9477091s)
	: template: * I0310 20:32:48.556502    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.9477091s)
	:1:102: executing "* I0310 20:32:48.556502    6776 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" force-systemd-env-20210310201637-6496: (1.9477091s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:54.615260   13148 out.go:340] unable to execute * I0310 20:32:48.559617    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.9977489s)
	: template: * I0310 20:32:48.559617    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.9977489s)
	:1:102: executing "* I0310 20:32:48.559617    6776 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" force-systemd-env-20210310201637-6496: (1.9977489s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:54.643889   13148 out.go:335] unable to parse "* I0310 20:32:53.510906   10404 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 20:32:53.510906   10404 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 20:33:54.820612   13148 out.go:340] unable to execute * I0310 20:32:55.213750   10404 cli_runner.go:115] Run: docker network inspect cert-options-20210310203249-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	: template: * I0310 20:32:55.213750   10404 cli_runner.go:115] Run: docker network inspect cert-options-20210310203249-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	:1:288: executing "* I0310 20:32:55.213750   10404 cli_runner.go:115] Run: docker network inspect cert-options-20210310203249-6496 --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\"\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:54.828616   13148 out.go:340] unable to execute * W0310 20:32:55.763593   10404 cli_runner.go:162] docker network inspect cert-options-20210310203249-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	: template: * W0310 20:32:55.763593   10404 cli_runner.go:162] docker network inspect cert-options-20210310203249-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	:1:283: executing "* W0310 20:32:55.763593   10404 cli_runner.go:162] docker network inspect cert-options-20210310203249-6496 --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\" returned with exit code 1\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:54.902763   13148 out.go:340] unable to execute * I0310 20:32:56.350802   10404 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	: template: * I0310 20:32:56.350802   10404 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	:1:262: executing "* I0310 20:32:56.350802   10404 cli_runner.go:115] Run: docker network inspect bridge --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\"\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:55.015464   13148 out.go:335] unable to parse "* I0310 20:33:03.042796   10404 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 20:33:03.042796   10404 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 20:33:55.246165   13148 out.go:335] unable to parse "* I0310 20:33:04.065960   10404 cli_runner.go:168] Completed: docker system info --format \"{{json .}}\": (1.0231665s)\n": template: * I0310 20:33:04.065960   10404 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0231665s)
	:1: function "json" not defined - returning raw string.
	E0310 20:33:55.258005   13148 out.go:335] unable to parse "* I0310 20:33:04.076130   10404 cli_runner.go:115] Run: docker info --format \"'{{json .SecurityOptions}}'\"\n": template: * I0310 20:33:04.076130   10404 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	:1: function "json" not defined - returning raw string.
	E0310 20:33:55.301510   13148 out.go:335] unable to parse "* I0310 20:33:05.133601   10404 cli_runner.go:168] Completed: docker info --format \"'{{json .SecurityOptions}}'\": (1.0574736s)\n": template: * I0310 20:33:05.133601   10404 cli_runner.go:168] Completed: docker info --format "'{{json .SecurityOptions}}'": (1.0574736s)
	:1: function "json" not defined - returning raw string.
	E0310 20:33:55.459127   13148 out.go:340] unable to execute * I0310 20:33:18.695471   10404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210310203249-6496
	: template: * I0310 20:33:18.695471   10404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210310203249-6496
	:1:96: executing "* I0310 20:33:18.695471   10404 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cert-options-20210310203249-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:55.471132   13148 out.go:335] unable to parse "* I0310 20:33:19.279583   10404 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55123 <nil> <nil>}\n": template: * I0310 20:33:19.279583   10404 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55123 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 20:33:55.522539   13148 out.go:340] unable to execute * I0310 20:33:20.494578   10404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210310203249-6496
	: template: * I0310 20:33:20.494578   10404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210310203249-6496
	:1:96: executing "* I0310 20:33:20.494578   10404 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cert-options-20210310203249-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:55.533284   13148 out.go:335] unable to parse "* I0310 20:33:21.147458   10404 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55123 <nil> <nil>}\n": template: * I0310 20:33:21.147458   10404 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55123 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 20:33:55.654944   13148 out.go:340] unable to execute * I0310 20:33:22.722583   10404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210310203249-6496
	: template: * I0310 20:33:22.722583   10404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210310203249-6496
	:1:96: executing "* I0310 20:33:22.722583   10404 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cert-options-20210310203249-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:56.272065   13148 out.go:340] unable to execute * I0310 20:33:23.814837    7808 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20210310201637-6496
	: template: * I0310 20:33:23.814837    7808 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20210310201637-6496
	:1:96: executing "* I0310 20:33:23.814837    7808 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" nospam-20210310201637-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:56.283062   13148 out.go:340] unable to execute * I0310 20:33:23.858228    7808 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20210310201637-6496
	: template: * I0310 20:33:23.858228    7808 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20210310201637-6496
	:1:96: executing "* I0310 20:33:23.858228    7808 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" nospam-20210310201637-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:56.295746   13148 out.go:340] unable to execute * I0310 20:33:23.860553    7808 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20210310201637-6496
	: template: * I0310 20:33:23.860553    7808 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20210310201637-6496
	:1:96: executing "* I0310 20:33:23.860553    7808 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" nospam-20210310201637-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:56.311287   13148 out.go:340] unable to execute * I0310 20:33:23.900870    7808 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20210310201637-6496
	: template: * I0310 20:33:23.900870    7808 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20210310201637-6496
	:1:96: executing "* I0310 20:33:23.900870    7808 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" nospam-20210310201637-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:56.319999   13148 out.go:340] unable to execute * I0310 20:33:23.934719    7808 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20210310201637-6496
	: template: * I0310 20:33:23.934719    7808 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20210310201637-6496
	:1:96: executing "* I0310 20:33:23.934719    7808 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" nospam-20210310201637-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:56.333819   13148 out.go:340] unable to execute * I0310 20:33:23.986584    7808 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20210310201637-6496
	: template: * I0310 20:33:23.986584    7808 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20210310201637-6496
	:1:96: executing "* I0310 20:33:23.986584    7808 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" nospam-20210310201637-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:56.339751   13148 out.go:340] unable to execute * I0310 20:33:24.003246    7808 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20210310201637-6496
	: template: * I0310 20:33:24.003246    7808 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20210310201637-6496
	:1:96: executing "* I0310 20:33:24.003246    7808 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" nospam-20210310201637-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:56.345752   13148 out.go:340] unable to execute * I0310 20:33:24.039287    7808 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20210310201637-6496
	: template: * I0310 20:33:24.039287    7808 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" nospam-20210310201637-6496
	:1:96: executing "* I0310 20:33:24.039287    7808 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" nospam-20210310201637-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:56.673870   13148 out.go:340] unable to execute * I0310 20:33:28.334511    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	: template: * I0310 20:33:28.334511    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	:1:96: executing "* I0310 20:33:28.334511    7164 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" docker-flags-20210310201637-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:56.682828   13148 out.go:340] unable to execute * I0310 20:33:28.340463    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	: template: * I0310 20:33:28.340463    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	:1:96: executing "* I0310 20:33:28.340463    7164 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" docker-flags-20210310201637-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:56.691300   13148 out.go:340] unable to execute * I0310 20:33:28.359901    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	: template: * I0310 20:33:28.359901    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	:1:96: executing "* I0310 20:33:28.359901    7164 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" docker-flags-20210310201637-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:56.698003   13148 out.go:340] unable to execute * I0310 20:33:28.423340    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	: template: * I0310 20:33:28.423340    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	:1:96: executing "* I0310 20:33:28.423340    7164 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" docker-flags-20210310201637-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:56.705658   13148 out.go:340] unable to execute * I0310 20:33:28.423340    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	: template: * I0310 20:33:28.423340    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	:1:96: executing "* I0310 20:33:28.423340    7164 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" docker-flags-20210310201637-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:56.712679   13148 out.go:340] unable to execute * I0310 20:33:28.469280    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	: template: * I0310 20:33:28.469280    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	:1:96: executing "* I0310 20:33:28.469280    7164 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" docker-flags-20210310201637-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:56.722420   13148 out.go:340] unable to execute * I0310 20:33:28.496050    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	: template: * I0310 20:33:28.496050    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	:1:96: executing "* I0310 20:33:28.496050    7164 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" docker-flags-20210310201637-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:56.730917   13148 out.go:340] unable to execute * I0310 20:33:28.520697    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	: template: * I0310 20:33:28.520697    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	:1:96: executing "* I0310 20:33:28.520697    7164 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" docker-flags-20210310201637-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:56.740968   13148 out.go:340] unable to execute * I0310 20:33:28.559931    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	: template: * I0310 20:33:28.559931    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	:1:96: executing "* I0310 20:33:28.559931    7164 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" docker-flags-20210310201637-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:56.793837   13148 out.go:340] unable to execute * I0310 20:33:30.026933   10404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210310203249-6496
	: template: * I0310 20:33:30.026933   10404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210310203249-6496
	:1:96: executing "* I0310 20:33:30.026933   10404 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cert-options-20210310203249-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:56.806164   13148 out.go:335] unable to parse "* I0310 20:33:30.688173   10404 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55123 <nil> <nil>}\n": template: * I0310 20:33:30.688173   10404 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55123 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 20:33:56.844811   13148 out.go:340] unable to execute * I0310 20:33:31.179470   10404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210310203249-6496
	: template: * I0310 20:33:31.179470   10404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210310203249-6496
	:1:96: executing "* I0310 20:33:31.179470   10404 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cert-options-20210310203249-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:56.856716   13148 out.go:335] unable to parse "* I0310 20:33:31.742884   10404 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55123 <nil> <nil>}\n": template: * I0310 20:33:31.742884   10404 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55123 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 20:33:57.345670   13148 out.go:340] unable to execute * I0310 20:33:32.266020   10404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210310203249-6496
	: template: * I0310 20:33:32.266020   10404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210310203249-6496
	:1:96: executing "* I0310 20:33:32.266020   10404 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cert-options-20210310203249-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:33:57.360102   13148 out.go:335] unable to parse "* I0310 20:33:32.863338   10404 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55123 <nil> <nil>}\n": template: * I0310 20:33:32.863338   10404 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55123 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 20:33:57.837141   13148 out.go:340] unable to execute * I0310 20:33:52.251874   10404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210310203249-6496
	: template: * I0310 20:33:52.251874   10404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20210310203249-6496
	:1:96: executing "* I0310 20:33:52.251874   10404 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cert-options-20210310203249-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.

** /stderr **
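The repeated `out.go:335`/`out.go:340` failures in the stderr above come from log lines that themselves contain Go template syntax (docker `--format` strings), which minikube's output layer then tries to render as templates. A minimal sketch reproduces both error modes; the helper name `renderLogLine` is illustrative only, not minikube's actual code:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderLogLine loosely mimics treating a log message as a Go template,
// as the "unable to parse" / "unable to execute" errors above suggest.
func renderLogLine(msg string, data interface{}) (string, error) {
	t, err := template.New("line").Parse(msg)
	if err != nil {
		return "", err // parse-time failure -> "unable to parse"
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, data); err != nil {
		return "", err // execution-time failure -> "unable to execute"
	}
	return buf.String(), nil
}

func main() {
	// Parse-time: text/template has no "json" builtin, so a logged
	// docker --format string fails with `function "json" not defined`.
	_, err := renderLogLine(`Run: docker system info --format "{{json .}}"`, nil)
	fmt.Println(err)

	// Execution-time: indexing .Options when it is absent/nil fails with
	// `error calling index: index of untyped nil`.
	_, err = renderLogLine(
		`MTU: {{index .Options "com.docker.network.driver.mtu"}}`,
		map[string]interface{}{})
	fmt.Println(err)
}
```

In both cases minikube falls back to printing the raw string, which is why the log lines still appear above despite the errors.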
helpers_test.go:250: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p offline-docker-20210310201637-6496 -n offline-docker-20210310201637-6496
helpers_test.go:250: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p offline-docker-20210310201637-6496 -n offline-docker-20210310201637-6496: (15.9894865s)
helpers_test.go:257: (dbg) Run:  kubectl --context offline-docker-20210310201637-6496 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:257: (dbg) Done: kubectl --context offline-docker-20210310201637-6496 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: (3.7832307s)
helpers_test.go:263: non-running pods: coredns-74ff55c5b-mx7ng
helpers_test.go:265: ======> post-mortem[TestOffline]: describe non-running pods <======
helpers_test.go:268: (dbg) Run:  kubectl --context offline-docker-20210310201637-6496 describe pod coredns-74ff55c5b-mx7ng
helpers_test.go:268: (dbg) Non-zero exit: kubectl --context offline-docker-20210310201637-6496 describe pod coredns-74ff55c5b-mx7ng: exit status 1 (953.8225ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-74ff55c5b-mx7ng" not found

** /stderr **
helpers_test.go:270: kubectl --context offline-docker-20210310201637-6496 describe pod coredns-74ff55c5b-mx7ng: exit status 1
helpers_test.go:171: Cleaning up "offline-docker-20210310201637-6496" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-20210310201637-6496
helpers_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-20210310201637-6496: (28.0834098s)
--- FAIL: TestOffline (1090.14s)

TestCertOptions (1147.17s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:46: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-20210310203249-6496 --memory=1900 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost

=== CONT  TestCertOptions
cert_options_test.go:46: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cert-options-20210310203249-6496 --memory=1900 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: exit status 109 (17m46.7710913s)

-- stdout --
	* [cert-options-20210310203249-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - MINIKUBE_LOCATION=10722
	* Using the docker driver based on user configuration
	* Starting control plane node cert-options-20210310203249-6496 in cluster cert-options-20210310203249-6496
	* Creating docker container (CPUs=2, Memory=1900MB) ...
	* Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [cert-options-20210310203249-6496 localhost] and IPs [172.17.0.5 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [cert-options-20210310203249-6496 localhost] and IPs [172.17.0.5 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
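For reference, the container-level triage that kubeadm's failure message above recommends ('docker ps -a | grep kube | grep -v pause', then 'docker logs CONTAINERID') can be sketched as a small loop. This is a generic sketch to be run on the failing node, not part of this test run; the filter and tail length are arbitrary choices:

```shell
# Sketch of the triage kubeadm suggests above: list kube containers on the
# node, then dump the tail of each one's logs. Container IDs come from the
# node under test, not from this report.
for id in $(docker ps -a --filter "name=kube" --format '{{.ID}}'); do
  echo "=== logs for container $id ==="
  docker logs "$id" 2>&1 | tail -n 20
done
```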
cert_options_test.go:48: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p cert-options-20210310203249-6496 --memory=1900 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost" : exit status 109
cert_options_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-20210310203249-6496 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-20210310203249-6496 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (6.7625306s)
cert_options_test.go:72: (dbg) Run:  kubectl --context cert-options-20210310203249-6496 config view
cert_options_test.go:77: apiserver server port incorrect. Output of 'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters:\n\t- cluster:\n\t    certificate-authority: C:\\Users\\jenkins\\.minikube\\ca.crt\n\t    extensions:\n\t    - extension:\n\t        last-update: Fri, 15 Jan 2021 21:34:16 GMT\n\t        provider: minikube.sigs.k8s.io\n\t        version: v1.16.0\n\t      name: cluster_info\n\t    server: https://127.0.0.1:55202\n\t  name: kubenet-20210115213025-3516\n\t- cluster:\n\t    certificate-authority: C:\\Users\\jenkins\\.minikube/ca.crt\n\t    extensions:\n\t    - extension:\n\t        last-update: Wed, 10 Mar 2021 20:43:43 GMT\n\t        provider: minikube.sigs.k8s.io\n\t        version: v1.18.1\n\t      name: cluster_info\n\t    server: https://127.0.0.1:55130\n\t  name: kubernetes-upgrade-20210310201637-6496\n\t- cluster:\n\t    certificate-authority: C:\\Users\\jenkins\\.minikube\\ca.crt\n\t    server: https://127.0.0.1:55080\n\t  name: missing-upgrade-20210310201637-6496\n\tcontexts:\n\t- context:\n\t    cluster: kubenet-20210115213025-3516\n\t    extensions:\n\t    - extension:\n\t        last-update: Fri, 15 Jan 2021 21:34:16 GMT\n\t        provider: minikube.sigs.k8s.io\n\t        version: v1.16.0\n\t      name: context_info\n\t    namespace: default\n\t    user: kubenet-20210115213025-3516\n\t  name: kubenet-20210115213025-3516\n\t- context:\n\t    cluster: kubernetes-upgrade-20210310201637-6496\n\t    user: kubernetes-upgrade-20210310201637-6496\n\t  name: kubernetes-upgrade-20210310201637-6496\n\t- context:\n\t    cluster: missing-upgrade-20210310201637-6496\n\t    user: missing-upgrade-20210310201637-6496\n\t  name: missing-upgrade-20210310201637-6496\n\tcurrent-context: kubernetes-upgrade-20210310201637-6496\n\tkind: Config\n\tpreferences: {}\n\tusers:\n\t- name: kubenet-20210115213025-3516\n\t  user:\n\t    client-certificate: C:\\Users\\jenkins\\.minikube\\profiles\\kubenet-20210115213025-3516\\client.crt\n\t    client-key: C:\\Users\\jenkins\\.minikube\\profiles\\kubenet-20210115213025-3516\\client.key\n\t- name: kubernetes-upgrade-20210310201637-6496\n\t  user:\n\t    client-certificate: C:\\Users\\jenkins\\.minikube\\profiles\\kubernetes-upgrade-20210310201637-6496/client.crt\n\t    client-key: C:\\Users\\jenkins\\.minikube\\profiles\\kubernetes-upgrade-20210310201637-6496/client.key\n\t- name: missing-upgrade-20210310201637-6496\n\t  user:\n\t    client-certificate: C:\\Users\\jenkins\\.minikube\\profiles\\missing-upgrade-20210310201637-6496\\client.crt\n\t    client-key: C:\\Users\\jenkins\\.minikube\\profiles\\missing-upgrade-20210310201637-6496\\client.key\n\n-- /stdout --"
cert_options_test.go:80: *** TestCertOptions FAILED at 2021-03-10 20:50:43.4127302 +0000 GMT m=+6383.115252801
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:226: ======>  post-mortem[TestCertOptions]: docker inspect <======
helpers_test.go:227: (dbg) Run:  docker inspect cert-options-20210310203249-6496
helpers_test.go:231: (dbg) docker inspect cert-options-20210310203249-6496:

-- stdout --
	[
	    {
	        "Id": "6b78492332e8f7161ab40323f5bf6cb8ba4f0a6bc5d6c3a6d591a45ec4dde180",
	        "Created": "2021-03-10T20:33:05.7202191Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 177865,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-03-10T20:33:11.6009779Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a776c544501ab7f8d55c0f9d8df39bc284df5e744ef1ab4fa59bbd753c98d5f6",
	        "ResolvConfPath": "/var/lib/docker/containers/6b78492332e8f7161ab40323f5bf6cb8ba4f0a6bc5d6c3a6d591a45ec4dde180/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6b78492332e8f7161ab40323f5bf6cb8ba4f0a6bc5d6c3a6d591a45ec4dde180/hostname",
	        "HostsPath": "/var/lib/docker/containers/6b78492332e8f7161ab40323f5bf6cb8ba4f0a6bc5d6c3a6d591a45ec4dde180/hosts",
	        "LogPath": "/var/lib/docker/containers/6b78492332e8f7161ab40323f5bf6cb8ba4f0a6bc5d6c3a6d591a45ec4dde180/6b78492332e8f7161ab40323f5bf6cb8ba4f0a6bc5d6c3a6d591a45ec4dde180-json.log",
	        "Name": "/cert-options-20210310203249-6496",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": [
	            "59ec946ccaf43bd6aa9cbd7c1ce671a98a43a08f01fb7ac1aa12915d3f0b3d98"
	        ],
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "cert-options-20210310203249-6496:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8555/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 1992294400,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 1992294400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3b2c7a3130e4976a44a12b6b49c6aa2b8406eeaa37af4df3e4d0e9d9299e34ad-init/diff:/var/lib/docker/overlay2/e5312d15a8a915e93a04215cc746ab0f3ee777399eec34d39c62e1159ded6475/diff:/var/lib/docker/overlay2/7a0a882052451678339ecb9d7440ec6d5af2189761df37cbde268f01c77df460/diff:/var/lib/docker/overlay2/746bde091f748e661ba6c95c88447941058b3ed556ae8093cbbb4e946b8cc8fc/diff:/var/lib/docker/overlay2/fc816087ea38e2594891ac87d2dada91401ed5005ffde006568049c9be7a9cb3/diff:/var/lib/docker/overlay2/c938fe5d14accfe7fb1cfd038f5f0c36e7c4e36df3f270be6928a759dfc0318c/diff:/var/lib/docker/overlay2/79f1d61f3af3e525b3f9a25c7bdaddec8275b7d0abab23c045e084119ae755dc/diff:/var/lib/docker/overlay2/90cf1530750ebebdcb04a136dc1eff0ae9c5f7d0f826d2942e3bc67cc0d42206/diff:/var/lib/docker/overlay2/c4edb0a62bb52174a4a17530aa58139aaca1b2f67bef36b9ac59af93c93e2c94/diff:/var/lib/docker/overlay2/684c18bb05910b09352e8708238b2fa6512de91e323e59bd78eacb12954baeb1/diff:/var/lib/docker/overlay2/2e3b944d36fea0e106f5f8885b981d040914c224a656b6480e4ccf00c624527c/diff:/var/lib/docker/overlay2/87489103e2d5dffa2fc32cae802418fe57adcbe87d86e2828e51be8620ef72d6/diff:/var/lib/docker/overlay2/1ceb557df677600f55a42e739f85c2aaa6ce2a1311fb93d5cc655d3e5c25ad62/diff:/var/lib/docker/overlay2/a49e38adf04466e822fa1b2fef7a5de41bb43b706f64d6d440f7e9f95f86634d/diff:/var/lib/docker/overlay2/9dc51530b3628075c33d53a96d8db831d206f65190ca62a4730d525c9d2433bc/diff:/var/lib/docker/overlay2/53f43d443cc953bcbb3b9fe305be45d40de2233dd6a1e94a6e0120c143df01b9/diff:/var/lib/docker/overlay2/5449e801ebacbe613538db4de658f2419f742a0327f7c0f5079ee525f64c2f45/diff:/var/lib/docker/overlay2/ea06a62429770eade2e9df8b04495201586fe6a867b2d3f827db06031f237171/diff:/var/lib/docker/overlay2/d6c5e6e0bb04112b08a45e5a42f364eaa6a1d4d0384b8833c6f268a32b51f9fd/diff:/var/lib/docker/overlay2/6ccb699e5aa7954f17536e35273361a64a06f22e3951b5adad1acd3645302d27/diff:/var/lib/docker/overlay2/004fc7885cd3ddf4f532a6047a393b87947996f738f36ee3e048130b82676264/diff:/var/lib/docker/overlay2/28ff0102771d8fb3c2957c482cc5dde1a6d041144f1be78d385fd4a305b89048/diff:/var/lib/docker/overlay2/7cc8b00a8717390d3cf217bc23ebf0a54386c4dda751f8efe67a113c5f724000/diff:/var/lib/docker/overlay2/4f20f5888dc26fc4b177fad4ffbb2c9e5094bbe36dffc15530b13ee98b5ee0de/diff:/var/lib/docker/overlay2/5070ba0e8a3df0c0cb4c34bb528c4e1aa4cf13d689f2bbbb0a4033d021c3be55/diff:/var/lib/docker/overlay2/85fb29f9d3603bf7f2eb144a096a9fa432e57ecf575b0c26cc8e06a79e791202/diff:/var/lib/docker/overlay2/27786674974d44dc21a18719c8c334da44a78f825923f5e86233f5acb0fb9a6b/diff:/var/lib/docker/overlay2/6161fd89f27732c46f547c0e38ff9f16ab0dded81cef66938df3935f21afad40/diff:/var/lib/docker/overlay2/222f85b8439a41d62b9939a4060303543930a89e81bc2fbb3f5211b3bcc73501/diff:/var/lib/docker/overlay2/0ab769dc143c949b9367c48721351c8571d87199847b23acedce0134a8cf4d0d/diff:/var/lib/docker/overlay2/a3d1b5f3c1ccf55415e08e306d69739f2d6a4346a4b20ff9273ac1417c146a72/diff:/var/lib/docker/overlay2/a6111408a4deb25c6259bdabf49b0b77a4ae38a1c9c76c9dd238ab00e4377edd/diff:/var/lib/docker/overlay2/beb22abaee6a1cc2ce6935aa31a8ea6aa14ba428d35b5c2423d1d9e4b5b1e88e/diff:/var/lib/docker/overlay2/0c228b1d0cd4cc1d3df10a7397af2a91846bfbad745d623822dc200ff7ddd4f7/diff:/var/lib/docker/overlay2/3c7f265116e8b14ac4c171583da8e68ad42c63e9faa735c10d5b01c2889c03d8/diff:/var/lib/docker/overlay2/85c9b77072d5575bcf4bc334d0e85cccec247e2ff21bc8a181ee7c1411481a92/diff:/var/lib/docker/overlay2/1b4797f63f1bf0acad8cd18715c737239cbce844810baf1378b65224c01957ff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3b2c7a3130e4976a44a12b6b49c6aa2b8406eeaa37af4df3e4d0e9d9299e34ad/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3b2c7a3130e4976a44a12b6b49c6aa2b8406eeaa37af4df3e4d0e9d9299e34ad/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3b2c7a3130e4976a44a12b6b49c6aa2b8406eeaa37af4df3e4d0e9d9299e34ad/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "cert-options-20210310203249-6496",
	                "Source": "/var/lib/docker/volumes/cert-options-20210310203249-6496/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "cert-options-20210310203249-6496",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8555/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "cert-options-20210310203249-6496",
	                "name.minikube.sigs.k8s.io": "cert-options-20210310203249-6496",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8ad2280cab477f3276bb88db9a7b11820fca661ec1ccabf440b244d3d6f6a771",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55122"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55119"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55121"
	                    }
	                ],
	                "8555/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55120"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/8ad2280cab47",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "df026bbbb290425fe7938e5a10bf8dfa36456bddc646ad0c10fab6a55600078c",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.5",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:05",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "08a9fabe5ecafdd8ee96dd0a14d1906037e82623ac7365c62552213da5f59cf7",
	                    "EndpointID": "df026bbbb290425fe7938e5a10bf8dfa36456bddc646ad0c10fab6a55600078c",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.5",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:05",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-options-20210310203249-6496 -n cert-options-20210310203249-6496
helpers_test.go:235: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-options-20210310203249-6496 -n cert-options-20210310203249-6496: exit status 6 (25.6607593s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0310 20:50:53.786062   14820 status.go:396] kubeconfig endpoint: extract IP: "cert-options-20210310203249-6496" does not appear in C:\Users\jenkins/.kube/config
	E0310 20:51:09.763024   14820 status.go:405] Error apiserver status: https://localhost:55120/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	

** /stderr **
helpers_test.go:235: status error: exit status 6 (may be ok)
helpers_test.go:237: "cert-options-20210310203249-6496" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:171: Cleaning up "cert-options-20210310203249-6496" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-20210310203249-6496
helpers_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-20210310203249-6496: (46.9948852s)
--- FAIL: TestCertOptions (1147.17s)
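For reference, the kubelet-side diagnosis and the cgroup-driver workaround that minikube suggests in the failure output above ("Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start") can be sketched as below. This is a generic sketch for the node under test, not a verified fix for this run; the profile name `my-profile` is a placeholder:

```shell
# Sketch of the suggested diagnosis/workaround; "my-profile" is a placeholder.
# Inspect the kubelet on the node first:
systemctl status kubelet
journalctl -xeu kubelet

# Then retry with the kubelet configured for the systemd cgroup driver,
# matching Docker's recommended driver from the preflight warning above:
minikube start -p my-profile --driver=docker \
  --extra-config=kubelet.cgroup-driver=systemd
```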

TestDockerFlags (1989.98s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-20210310201637-6496 --cache-images=false --memory=1800 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker
docker_test.go:44: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p docker-flags-20210310201637-6496 --cache-images=false --memory=1800 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: exit status 1 (30m0.0558776s)
-- stdout --
	* [docker-flags-20210310201637-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - MINIKUBE_LOCATION=10722
	* Using the docker driver based on user configuration
	
	
	* Starting control plane node docker-flags-20210310201637-6496 in cluster docker-flags-20210310201637-6496
	* Creating docker container (CPUs=2, Memory=1800MB) ...
	* Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	  - opt debug
	  - opt icc=true
	  - env FOO=BAR
	  - env BAZ=BAT
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Verifying Kubernetes components...

-- /stdout --
** stderr ** 
	I0310 20:16:38.012658    7164 out.go:239] Setting OutFile to fd 2744 ...
	I0310 20:16:38.015659    7164 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 20:16:38.015659    7164 out.go:252] Setting ErrFile to fd 2888...
	I0310 20:16:38.015659    7164 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 20:16:38.037691    7164 out.go:246] Setting JSON to false
	I0310 20:16:38.042806    7164 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":32863,"bootTime":1615374535,"procs":112,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0310 20:16:38.042806    7164 start.go:116] gopshost.Virtualization returned error: not implemented yet
	I0310 20:16:38.046650    7164 out.go:129] * [docker-flags-20210310201637-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0310 20:16:38.049655    7164 out.go:129]   - MINIKUBE_LOCATION=10722
	I0310 20:16:38.052654    7164 driver.go:323] Setting default libvirt URI to qemu:///system
	I0310 20:16:38.796062    7164 docker.go:119] docker version: linux-20.10.2
	I0310 20:16:38.818311    7164 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 20:16:40.041385    7164 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.2230778s)
	I0310 20:16:40.046808    7164 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:52 SystemTime:2021-03-10 20:16:39.4855751 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://ind
ex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[]
ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 20:16:40.050781    7164 out.go:129] * Using the docker driver based on user configuration
	I0310 20:16:40.050781    7164 start.go:276] selected driver: docker
	I0310 20:16:40.050781    7164 start.go:718] validating driver "docker" against <nil>
	I0310 20:16:40.050781    7164 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0310 20:16:41.264206    7164 out.go:129] 
	W0310 20:16:41.264961    7164 out.go:191] X Docker Desktop only has 20001MiB available, you may encounter application deployment failures.
	X Docker Desktop only has 20001MiB available, you may encounter application deployment failures.
	W0310 20:16:41.265493    7164 out.go:191] * Suggestion: 
	
	    1. Open the "Docker Desktop" menu by clicking the Docker icon in the system tray
	    2. Click "Settings"
	    3. Click "Resources"
	    4. Increase "Memory" slider bar to 2.25 GB or higher
	    5. Click "Apply & Restart"
	* Suggestion: 
	
	    1. Open the "Docker Desktop" menu by clicking the Docker icon in the system tray
	    2. Click "Settings"
	    3. Click "Resources"
	    4. Increase "Memory" slider bar to 2.25 GB or higher
	    5. Click "Apply & Restart"
	W0310 20:16:41.265825    7164 out.go:191] * Documentation: https://docs.docker.com/docker-for-windows/#resources
	* Documentation: https://docs.docker.com/docker-for-windows/#resources
	I0310 20:16:41.271121    7164 out.go:129] 
	I0310 20:16:41.300312    7164 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 20:16:42.450278    7164 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.1499698s)
	I0310 20:16:42.451303    7164 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:56 SystemTime:2021-03-10 20:16:41.9390835 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://ind
ex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[]
ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 20:16:42.452112    7164 start_flags.go:253] no existing cluster config was found, will generate one from the flags 
	I0310 20:16:42.453007    7164 start_flags.go:712] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0310 20:16:42.453007    7164 cni.go:74] Creating CNI manager for ""
	I0310 20:16:42.453007    7164 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	I0310 20:16:42.453007    7164 start_flags.go:398] config:
	{Name:docker-flags-20210310201637-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:docker-flags-20210310201637-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 20:16:42.457383    7164 out.go:129] * Starting control plane node docker-flags-20210310201637-6496 in cluster docker-flags-20210310201637-6496
	I0310 20:16:43.408595    7164 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	I0310 20:16:43.408595    7164 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	I0310 20:16:43.408595    7164 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	I0310 20:16:43.408941    7164 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	I0310 20:16:43.409941    7164 cache.go:54] Caching tarball of preloaded images
	I0310 20:16:43.409941    7164 preload.go:131] Found C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0310 20:16:43.409941    7164 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on docker
	I0310 20:16:43.411295    7164 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\docker-flags-20210310201637-6496\config.json ...
	I0310 20:16:43.411719    7164 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\docker-flags-20210310201637-6496\config.json: {Name:mkf7b2ac7a803f5f33815666a5905441a6db49b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:16:43.428056    7164 cache.go:185] Successfully downloaded all kic artifacts
	I0310 20:16:43.429332    7164 start.go:313] acquiring machines lock for docker-flags-20210310201637-6496: {Name:mk82cc1a290532c3d074995481f320f1edd8f93c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:16:43.429713    7164 start.go:317] acquired machines lock for "docker-flags-20210310201637-6496" in 219.3µs
	I0310 20:16:43.429713    7164 start.go:89] Provisioning new machine with config: &{Name:docker-flags-20210310201637-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:docker-flags-20210310201637-6496 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
	I0310 20:16:43.430059    7164 start.go:126] createHost starting for "" (driver="docker")
	I0310 20:16:43.433447    7164 out.go:150] * Creating docker container (CPUs=2, Memory=1800MB) ...
	I0310 20:16:43.434694    7164 start.go:160] libmachine.API.Create for "docker-flags-20210310201637-6496" (driver="docker")
	I0310 20:16:43.435145    7164 client.go:168] LocalClient.Create starting
	I0310 20:16:43.435750    7164 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\ca.pem
	I0310 20:16:43.435750    7164 main.go:121] libmachine: Decoding PEM data...
	I0310 20:16:43.435750    7164 main.go:121] libmachine: Parsing certificate...
	I0310 20:16:43.436463    7164 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\cert.pem
	I0310 20:16:43.437217    7164 main.go:121] libmachine: Decoding PEM data...
	I0310 20:16:43.437217    7164 main.go:121] libmachine: Parsing certificate...
	I0310 20:16:43.494022    7164 cli_runner.go:115] Run: docker network inspect docker-flags-20210310201637-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0310 20:16:44.320680    7164 cli_runner.go:162] docker network inspect docker-flags-20210310201637-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0310 20:16:44.339954    7164 network_create.go:240] running [docker network inspect docker-flags-20210310201637-6496] to gather additional debugging logs...
	I0310 20:16:44.339954    7164 cli_runner.go:115] Run: docker network inspect docker-flags-20210310201637-6496
	W0310 20:16:45.267558    7164 cli_runner.go:162] docker network inspect docker-flags-20210310201637-6496 returned with exit code 1
	I0310 20:16:45.267811    7164 network_create.go:243] error running [docker network inspect docker-flags-20210310201637-6496]: docker network inspect docker-flags-20210310201637-6496: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: docker-flags-20210310201637-6496
	I0310 20:16:45.267811    7164 network_create.go:245] output of [docker network inspect docker-flags-20210310201637-6496]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: docker-flags-20210310201637-6496
	
	** /stderr **
	I0310 20:16:45.284998    7164 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0310 20:16:46.120309    7164 network.go:193] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0310 20:16:46.121286    7164 network_create.go:91] attempt to create network 192.168.49.0/24 with subnet: docker-flags-20210310201637-6496 and gateway 192.168.49.1 and MTU of 1500 ...
	I0310 20:16:46.131289    7164 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20210310201637-6496
	W0310 20:16:46.838322    7164 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20210310201637-6496 returned with exit code 1
	W0310 20:16:46.838322    7164 out.go:191] ! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts
	! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts
	I0310 20:16:46.857353    7164 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0310 20:16:47.655372    7164 cli_runner.go:115] Run: docker volume create docker-flags-20210310201637-6496 --label name.minikube.sigs.k8s.io=docker-flags-20210310201637-6496 --label created_by.minikube.sigs.k8s.io=true
	I0310 20:16:48.499122    7164 oci.go:102] Successfully created a docker volume docker-flags-20210310201637-6496
	I0310 20:16:48.519555    7164 cli_runner.go:115] Run: docker run --rm --name docker-flags-20210310201637-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20210310201637-6496 --entrypoint /usr/bin/test -v docker-flags-20210310201637-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib
	I0310 20:16:54.245526    7164 cli_runner.go:168] Completed: docker run --rm --name docker-flags-20210310201637-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20210310201637-6496 --entrypoint /usr/bin/test -v docker-flags-20210310201637-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib: (5.7257788s)
	I0310 20:16:54.245526    7164 oci.go:106] Successfully prepared a docker volume docker-flags-20210310201637-6496
	I0310 20:16:54.245982    7164 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	I0310 20:16:54.246316    7164 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	I0310 20:16:54.246520    7164 kic.go:175] Starting extracting preloaded images to volume ...
	I0310 20:16:54.272604    7164 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-20210310201637-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir
	I0310 20:16:54.274343    7164 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	W0310 20:16:55.107854    7164 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-20210310201637-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I0310 20:16:55.107854    7164 kic.go:182] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-20210310201637-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: [binary-serialized System.Exception payload]
	
	The notification platform is unavailable.
	
	   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)
	   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.WPF\PromptShareDirectory.cs:line 53
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__8.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 95
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 55
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.HttpApi\Controllers\FilesharingController.cs:line 21
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()
	CreateToastNotifier
	Windows.UI, Version=255.255.255.255, Culture=neutral, PublicKeyToken=null, ContentType=WindowsRuntime
	Windows.UI.Notifications.ToastNotificationManager
	Windows.UI.Notifications.ToastNotifier CreateToastNotifier(System.String)
	[remainder of binary-serialized exception omitted; RestrictedDescription: "The notification platform is unavailable."]
	See 'docker run --help'.
	I0310 20:16:55.475984    7164 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.2016454s)
	I0310 20:16:55.476609    7164 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:49 SystemTime:2021-03-10 20:16:54.9570873 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://ind
ex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[]
ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 20:16:55.494945    7164 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0310 20:16:56.547126    7164 cli_runner.go:168] Completed: docker info --format "'{{json .SecurityOptions}}'": (1.0520326s)
	I0310 20:16:56.565007    7164 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname docker-flags-20210310201637-6496 --name docker-flags-20210310201637-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20210310201637-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=docker-flags-20210310201637-6496 --volume docker-flags-20210310201637-6496:/var --security-opt apparmor=unconfined --memory=1800mb --memory-swap=1800mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
	I0310 20:17:03.855779    7164 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname docker-flags-20210310201637-6496 --name docker-flags-20210310201637-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20210310201637-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=docker-flags-20210310201637-6496 --volume docker-flags-20210310201637-6496:/var --security-opt apparmor=unconfined --memory=1800mb --memory-swap=1800mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e: (7.2903516s)
	I0310 20:17:03.883989    7164 cli_runner.go:115] Run: docker container inspect docker-flags-20210310201637-6496 --format={{.State.Running}}
	I0310 20:17:04.605502    7164 cli_runner.go:115] Run: docker container inspect docker-flags-20210310201637-6496 --format={{.State.Status}}
	I0310 20:17:05.274377    7164 cli_runner.go:115] Run: docker exec docker-flags-20210310201637-6496 stat /var/lib/dpkg/alternatives/iptables
	I0310 20:17:07.248588    7164 cli_runner.go:168] Completed: docker exec docker-flags-20210310201637-6496 stat /var/lib/dpkg/alternatives/iptables: (1.9742169s)
	I0310 20:17:07.248588    7164 oci.go:278] the created container "docker-flags-20210310201637-6496" has a running status.
	I0310 20:17:07.248588    7164 kic.go:206] Creating ssh key for kic: C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa...
	I0310 20:17:07.478468    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0310 20:17:07.488838    7164 kic_runner.go:188] docker (temp): C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0310 20:17:09.696644    7164 cli_runner.go:115] Run: docker container inspect docker-flags-20210310201637-6496 --format={{.State.Status}}
	I0310 20:17:10.415202    7164 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0310 20:17:10.415202    7164 kic_runner.go:115] Args: [docker exec --privileged docker-flags-20210310201637-6496 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0310 20:17:12.146852    7164 kic_runner.go:124] Done: [docker exec --privileged docker-flags-20210310201637-6496 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.7316546s)
	I0310 20:17:12.151055    7164 kic.go:240] ensuring only current user has permissions to key file located at : C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa...
	I0310 20:17:13.058357    7164 cli_runner.go:115] Run: docker container inspect docker-flags-20210310201637-6496 --format={{.State.Status}}
	I0310 20:17:13.739762    7164 machine.go:88] provisioning docker machine ...
	I0310 20:17:13.739987    7164 ubuntu.go:169] provisioning hostname "docker-flags-20210310201637-6496"
	I0310 20:17:13.749124    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:17:14.497214    7164 main.go:121] libmachine: Using SSH client type: native
	I0310 20:17:14.509052    7164 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55112 <nil> <nil>}
	I0310 20:17:14.509329    7164 main.go:121] libmachine: About to run SSH command:
	sudo hostname docker-flags-20210310201637-6496 && echo "docker-flags-20210310201637-6496" | sudo tee /etc/hostname
	I0310 20:17:14.521637    7164 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0310 20:17:18.980887    7164 main.go:121] libmachine: SSH cmd err, output: <nil>: docker-flags-20210310201637-6496
	
	I0310 20:17:18.989854    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:17:19.681912    7164 main.go:121] libmachine: Using SSH client type: native
	I0310 20:17:19.681912    7164 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55112 <nil> <nil>}
	I0310 20:17:19.681912    7164 main.go:121] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdocker-flags-20210310201637-6496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 docker-flags-20210310201637-6496/g' /etc/hosts;
				else 
					echo '127.0.1.1 docker-flags-20210310201637-6496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0310 20:17:20.764850    7164 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	I0310 20:17:20.764850    7164 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	I0310 20:17:20.764850    7164 ubuntu.go:177] setting up certificates
	I0310 20:17:20.764850    7164 provision.go:83] configureAuth start
	I0310 20:17:20.777918    7164 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" docker-flags-20210310201637-6496
	I0310 20:17:21.538392    7164 provision.go:137] copyHostCerts
	I0310 20:17:21.538753    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\key.pem -> C:\Users\jenkins\.minikube/key.pem
	I0310 20:17:21.539623    7164 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	I0310 20:17:21.540058    7164 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	I0310 20:17:21.545281    7164 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	I0310 20:17:21.551421    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\ca.pem -> C:\Users\jenkins\.minikube/ca.pem
	I0310 20:17:21.551421    7164 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	I0310 20:17:21.551421    7164 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	I0310 20:17:21.551421    7164 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	I0310 20:17:21.551421    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\cert.pem -> C:\Users\jenkins\.minikube/cert.pem
	I0310 20:17:21.551421    7164 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	I0310 20:17:21.551421    7164 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	I0310 20:17:21.551421    7164 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	I0310 20:17:21.558602    7164 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.docker-flags-20210310201637-6496 san=[172.17.0.7 127.0.0.1 localhost 127.0.0.1 minikube docker-flags-20210310201637-6496]
	I0310 20:17:21.798593    7164 provision.go:165] copyRemoteCerts
	I0310 20:17:21.811657    7164 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0310 20:17:21.822295    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:17:22.496021    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:17:22.948565    7164 ssh_runner.go:189] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.1367714s)
	I0310 20:17:22.948879    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0310 20:17:22.949117    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0310 20:17:23.136501    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0310 20:17:23.137111    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1269 bytes)
	I0310 20:17:23.350520    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0310 20:17:23.351183    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0310 20:17:23.636179    7164 provision.go:86] duration metric: configureAuth took 2.8713386s
	I0310 20:17:23.636404    7164 ubuntu.go:193] setting minikube options for container-runtime
	I0310 20:17:23.643598    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:17:24.270059    7164 main.go:121] libmachine: Using SSH client type: native
	I0310 20:17:24.270821    7164 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55112 <nil> <nil>}
	I0310 20:17:24.271343    7164 main.go:121] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0310 20:17:24.868905    7164 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0310 20:17:24.868905    7164 ubuntu.go:71] root file system type: overlay
	I0310 20:17:24.870046    7164 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	I0310 20:17:24.877352    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:17:25.426878    7164 main.go:121] libmachine: Using SSH client type: native
	I0310 20:17:25.427789    7164 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55112 <nil> <nil>}
	I0310 20:17:25.428007    7164 main.go:121] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="FOO=BAR"
	Environment="BAZ=BAT"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 --debug --icc=true 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0310 20:17:26.010031    7164 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=FOO=BAR
	Environment=BAZ=BAT
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 --debug --icc=true 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0310 20:17:26.021975    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:17:26.650834    7164 main.go:121] libmachine: Using SSH client type: native
	I0310 20:17:26.651439    7164 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55112 <nil> <nil>}
	I0310 20:17:26.651636    7164 main.go:121] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0310 20:17:34.502453    7164 main.go:121] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-01-29 14:31:32.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-03-10 20:17:25.985110000 +0000
	@@ -1,30 +1,34 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+Environment=FOO=BAR
	+Environment=BAZ=BAT
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 --debug --icc=true 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +36,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0310 20:17:34.502453    7164 machine.go:91] provisioned docker machine in 20.7625308s
	I0310 20:17:34.502453    7164 client.go:171] LocalClient.Create took 51.0674729s
	I0310 20:17:34.502453    7164 start.go:168] duration metric: libmachine.API.Create for "docker-flags-20210310201637-6496" took 51.0679237s
	I0310 20:17:34.502453    7164 start.go:267] post-start starting for "docker-flags-20210310201637-6496" (driver="docker")
	I0310 20:17:34.502453    7164 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0310 20:17:34.525968    7164 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0310 20:17:34.532808    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:17:35.271235    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:17:35.755550    7164 ssh_runner.go:189] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.2291329s)
	I0310 20:17:35.775897    7164 ssh_runner.go:149] Run: cat /etc/os-release
	I0310 20:17:35.881251    7164 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0310 20:17:35.881432    7164 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0310 20:17:35.881826    7164 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0310 20:17:35.882186    7164 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0310 20:17:35.882423    7164 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	I0310 20:17:35.883129    7164 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	I0310 20:17:35.888363    7164 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	I0310 20:17:35.888595    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> /etc/test/nested/copy/2512/hosts
	I0310 20:17:35.891362    7164 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	I0310 20:17:35.891362    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> /etc/test/nested/copy/4452/hosts
	I0310 20:17:35.910660    7164 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	I0310 20:17:36.018118    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	I0310 20:17:36.188644    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	I0310 20:17:36.344111    7164 start.go:270] post-start completed in 1.8416635s
	I0310 20:17:36.385011    7164 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" docker-flags-20210310201637-6496
	I0310 20:17:37.170957    7164 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\docker-flags-20210310201637-6496\config.json ...
	I0310 20:17:37.230866    7164 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0310 20:17:37.245520    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:17:37.954741    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:17:38.358135    7164 ssh_runner.go:189] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.1270569s)
	I0310 20:17:38.359420    7164 start.go:129] duration metric: createHost completed in 54.9295362s
	I0310 20:17:38.359654    7164 start.go:80] releasing machines lock for "docker-flags-20210310201637-6496", held for 54.9301166s
	I0310 20:17:38.378502    7164 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" docker-flags-20210310201637-6496
	I0310 20:17:39.075163    7164 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0310 20:17:39.089641    7164 ssh_runner.go:149] Run: systemctl --version
	I0310 20:17:39.091400    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:17:39.105264    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:17:39.847514    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:17:39.865756    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:17:40.590856    7164 ssh_runner.go:189] Completed: systemctl --version: (1.500605s)
	I0310 20:17:40.597481    7164 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.5219291s)
	I0310 20:17:40.603282    7164 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0310 20:17:40.725274    7164 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 20:17:40.847919    7164 cruntime.go:206] skipping containerd shutdown because we are bound to it
	I0310 20:17:40.866556    7164 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0310 20:17:40.974158    7164 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0310 20:17:41.338584    7164 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 20:17:41.498520    7164 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0310 20:17:42.475148    7164 ssh_runner.go:149] Run: sudo systemctl start docker
	I0310 20:17:42.695884    7164 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0310 20:17:43.546932    7164 out.go:150] * Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	I0310 20:17:43.549763    7164 out.go:129]   - opt debug
	I0310 20:17:43.552966    7164 out.go:129]   - opt icc=true
	I0310 20:17:43.555998    7164 out.go:129]   - env FOO=BAR
	I0310 20:17:43.559313    7164 out.go:129]   - env BAZ=BAT
	I0310 20:17:43.569225    7164 cli_runner.go:115] Run: docker exec -t docker-flags-20210310201637-6496 dig +short host.docker.internal
	I0310 20:17:44.928111    7164 cli_runner.go:168] Completed: docker exec -t docker-flags-20210310201637-6496 dig +short host.docker.internal: (1.3588898s)
	I0310 20:17:44.928385    7164 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	I0310 20:17:44.943430    7164 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0310 20:17:45.027912    7164 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0310 20:17:45.148071    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:17:45.862206    7164 localpath.go:92] copying C:\Users\jenkins\.minikube\client.crt -> C:\Users\jenkins\.minikube\profiles\docker-flags-20210310201637-6496\client.crt
	I0310 20:17:45.867122    7164 localpath.go:117] copying C:\Users\jenkins\.minikube\client.key -> C:\Users\jenkins\.minikube\profiles\docker-flags-20210310201637-6496\client.key
	I0310 20:17:45.872061    7164 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	I0310 20:17:45.872956    7164 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	I0310 20:17:45.879650    7164 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 20:17:46.508335    7164 docker.go:423] Got preloaded images: 
	I0310 20:17:46.508335    7164 docker.go:429] k8s.gcr.io/kube-proxy:v1.20.2 wasn't preloaded
	I0310 20:17:46.527563    7164 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0310 20:17:46.671774    7164 ssh_runner.go:149] Run: which lz4
	I0310 20:17:46.798333    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0310 20:17:46.810037    7164 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0310 20:17:46.875747    7164 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0310 20:17:46.875943    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (515083977 bytes)
	I0310 20:24:08.280609    7164 docker.go:388] Took 381.482616 seconds to copy over tarball
	I0310 20:24:08.292377    7164 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0310 20:24:49.284579    7164 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (40.9920994s)
	I0310 20:24:49.284579    7164 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0310 20:24:50.788924    7164 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0310 20:24:50.836534    7164 ssh_runner.go:316] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3125 bytes)
	I0310 20:24:50.932317    7164 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0310 20:24:51.377351    7164 ssh_runner.go:149] Run: sudo systemctl restart docker
	I0310 20:24:58.301742    7164 ssh_runner.go:189] Completed: sudo systemctl restart docker: (6.9244345s)
	I0310 20:24:58.309100    7164 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0310 20:24:59.777756    7164 ssh_runner.go:189] Completed: docker info --format {{.CgroupDriver}}: (1.4686649s)
	I0310 20:24:59.778283    7164 cni.go:74] Creating CNI manager for ""
	I0310 20:24:59.778283    7164 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	I0310 20:24:59.778283    7164 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0310 20:24:59.778283    7164 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.7 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:docker-flags-20210310201637-6496 NodeName:docker-flags-20210310201637-6496 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0310 20:24:59.780263    7164 kubeadm.go:154] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.0.7
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "docker-flags-20210310201637-6496"
	  kubeletExtraArgs:
	    node-ip: 172.17.0.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.0.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	
	I0310 20:24:59.780760    7164 kubeadm.go:919] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=docker-flags-20210310201637-6496 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.2 ClusterName:docker-flags-20210310201637-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0310 20:24:59.792533    7164 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
	I0310 20:24:59.854538    7164 binaries.go:44] Found k8s binaries, skipping transfer
	I0310 20:24:59.875836    7164 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0310 20:24:59.975934    7164 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (356 bytes)
	I0310 20:25:00.178967    7164 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0310 20:25:00.346724    7164 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1858 bytes)
	I0310 20:25:00.693990    7164 ssh_runner.go:149] Run: grep 172.17.0.7	control-plane.minikube.internal$ /etc/hosts
	I0310 20:25:00.719550    7164 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "172.17.0.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0310 20:25:00.879952    7164 certs.go:52] Setting up C:\Users\jenkins\.minikube\profiles\docker-flags-20210310201637-6496 for IP: 172.17.0.7
	I0310 20:25:00.880762    7164 certs.go:171] skipping minikubeCA CA generation: C:\Users\jenkins\.minikube\ca.key
	I0310 20:25:00.881199    7164 certs.go:171] skipping proxyClientCA CA generation: C:\Users\jenkins\.minikube\proxy-client-ca.key
	I0310 20:25:00.882221    7164 certs.go:275] skipping minikube-user signed cert generation: C:\Users\jenkins\.minikube\profiles\docker-flags-20210310201637-6496\client.key
	I0310 20:25:00.882221    7164 certs.go:279] generating minikube signed cert: C:\Users\jenkins\.minikube\profiles\docker-flags-20210310201637-6496\apiserver.key.d9a465bc
	I0310 20:25:00.882710    7164 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\docker-flags-20210310201637-6496\apiserver.crt.d9a465bc with IP's: [172.17.0.7 10.96.0.1 127.0.0.1 10.0.0.1]
	I0310 20:25:01.305326    7164 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\docker-flags-20210310201637-6496\apiserver.crt.d9a465bc ...
	I0310 20:25:01.305326    7164 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\docker-flags-20210310201637-6496\apiserver.crt.d9a465bc: {Name:mkbf10dcb428b5e64fa539f5032f6fd04a98446b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:25:01.317348    7164 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\docker-flags-20210310201637-6496\apiserver.key.d9a465bc ...
	I0310 20:25:01.317348    7164 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\docker-flags-20210310201637-6496\apiserver.key.d9a465bc: {Name:mk17b981518d398c34c259e59f447b632dec9d65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:25:01.342785    7164 certs.go:290] copying C:\Users\jenkins\.minikube\profiles\docker-flags-20210310201637-6496\apiserver.crt.d9a465bc -> C:\Users\jenkins\.minikube\profiles\docker-flags-20210310201637-6496\apiserver.crt
	I0310 20:25:01.350346    7164 certs.go:294] copying C:\Users\jenkins\.minikube\profiles\docker-flags-20210310201637-6496\apiserver.key.d9a465bc -> C:\Users\jenkins\.minikube\profiles\docker-flags-20210310201637-6496\apiserver.key
	I0310 20:25:01.354458    7164 certs.go:279] generating aggregator signed cert: C:\Users\jenkins\.minikube\profiles\docker-flags-20210310201637-6496\proxy-client.key
	I0310 20:25:01.354677    7164 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\docker-flags-20210310201637-6496\proxy-client.crt with IP's: []
	I0310 20:25:01.804345    7164 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\docker-flags-20210310201637-6496\proxy-client.crt ...
	I0310 20:25:01.804345    7164 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\docker-flags-20210310201637-6496\proxy-client.crt: {Name:mk919ba43b14abac5d104abcb9a9bc0b019c5439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:25:01.834736    7164 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\docker-flags-20210310201637-6496\proxy-client.key ...
	I0310 20:25:01.834736    7164 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\docker-flags-20210310201637-6496\proxy-client.key: {Name:mke6a9d2ffd3a294221d8f2b0fefae5d51bb6009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:25:01.859141    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\profiles\docker-flags-20210310201637-6496\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0310 20:25:01.859500    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\profiles\docker-flags-20210310201637-6496\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0310 20:25:01.859500    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\profiles\docker-flags-20210310201637-6496\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0310 20:25:01.860453    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\profiles\docker-flags-20210310201637-6496\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0310 20:25:01.861076    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0310 20:25:01.861076    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0310 20:25:01.861712    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0310 20:25:01.861712    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0310 20:25:01.863266    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992.pem (1338 bytes)
	W0310 20:25:01.863880    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.864136    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140.pem (1338 bytes)
	W0310 20:25:01.864873    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.865148    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156.pem (1338 bytes)
	W0310 20:25:01.865672    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.865935    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056.pem (1338 bytes)
	W0310 20:25:01.866280    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.867255    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476.pem (1338 bytes)
	W0310 20:25:01.867764    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.867764    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728.pem (1338 bytes)
	W0310 20:25:01.868219    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.868367    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984.pem (1338 bytes)
	W0310 20:25:01.868736    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.869015    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232.pem (1338 bytes)
	W0310 20:25:01.869429    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.869748    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512.pem (1338 bytes)
	W0310 20:25:01.870282    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.870282    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056.pem (1338 bytes)
	W0310 20:25:01.870282    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.870873    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516.pem (1338 bytes)
	W0310 20:25:01.871419    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.871419    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352.pem (1338 bytes)
	W0310 20:25:01.872125    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.872735    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920.pem (1338 bytes)
	W0310 20:25:01.873090    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.873398    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052.pem (1338 bytes)
	W0310 20:25:01.873618    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.873806    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452.pem (1338 bytes)
	W0310 20:25:01.873806    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.874303    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588.pem (1338 bytes)
	W0310 20:25:01.874736    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.875042    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944.pem (1338 bytes)
	W0310 20:25:01.875349    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.875349    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040.pem (1338 bytes)
	W0310 20:25:01.875887    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.876192    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172.pem (1338 bytes)
	W0310 20:25:01.876621    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.876621    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372.pem (1338 bytes)
	W0310 20:25:01.877118    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.877118    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396.pem (1338 bytes)
	W0310 20:25:01.877541    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.877948    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700.pem (1338 bytes)
	W0310 20:25:01.877948    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.877948    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736.pem (1338 bytes)
	W0310 20:25:01.877948    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.877948    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368.pem (1338 bytes)
	W0310 20:25:01.879037    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.879037    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492.pem (1338 bytes)
	W0310 20:25:01.879037    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.879037    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496.pem (1338 bytes)
	W0310 20:25:01.879037    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.879037    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552.pem (1338 bytes)
	W0310 20:25:01.880058    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.880058    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692.pem (1338 bytes)
	W0310 20:25:01.880556    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.880556    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856.pem (1338 bytes)
	W0310 20:25:01.881043    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.881043    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024.pem (1338 bytes)
	W0310 20:25:01.881043    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.881043    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160.pem (1338 bytes)
	W0310 20:25:01.881043    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.881043    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432.pem (1338 bytes)
	W0310 20:25:01.881043    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.881043    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440.pem (1338 bytes)
	W0310 20:25:01.881043    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.881043    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452.pem (1338 bytes)
	W0310 20:25:01.881043    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.881043    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800.pem (1338 bytes)
	W0310 20:25:01.881043    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.881043    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464.pem (1338 bytes)
	W0310 20:25:01.881043    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.881043    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748.pem (1338 bytes)
	W0310 20:25:01.881043    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.881043    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088.pem (1338 bytes)
	W0310 20:25:01.881043    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.881043    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520.pem (1338 bytes)
	W0310 20:25:01.881043    7164 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.881043    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca-key.pem (1679 bytes)
	I0310 20:25:01.881043    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca.pem (1078 bytes)
	I0310 20:25:01.881043    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\cert.pem (1123 bytes)
	I0310 20:25:01.885586    7164 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\key.pem (1679 bytes)
	I0310 20:25:01.886029    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\7432.pem -> /usr/share/ca-certificates/7432.pem
	I0310 20:25:01.886216    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\800.pem -> /usr/share/ca-certificates/800.pem
	I0310 20:25:01.886486    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\7440.pem -> /usr/share/ca-certificates/7440.pem
	I0310 20:25:01.886732    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\1156.pem -> /usr/share/ca-certificates/1156.pem
	I0310 20:25:01.886899    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\7160.pem -> /usr/share/ca-certificates/7160.pem
	I0310 20:25:01.886899    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\1476.pem -> /usr/share/ca-certificates/1476.pem
	I0310 20:25:01.887338    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\3920.pem -> /usr/share/ca-certificates/3920.pem
	I0310 20:25:01.887797    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\4452.pem -> /usr/share/ca-certificates/4452.pem
	I0310 20:25:01.888231    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\1728.pem -> /usr/share/ca-certificates/1728.pem
	I0310 20:25:01.888369    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\4944.pem -> /usr/share/ca-certificates/4944.pem
	I0310 20:25:01.889143    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0310 20:25:01.889247    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\8748.pem -> /usr/share/ca-certificates/8748.pem
	I0310 20:25:01.889471    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\232.pem -> /usr/share/ca-certificates/232.pem
	I0310 20:25:01.889748    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\6368.pem -> /usr/share/ca-certificates/6368.pem
	I0310 20:25:01.889748    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\6692.pem -> /usr/share/ca-certificates/6692.pem
	I0310 20:25:01.890266    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\1984.pem -> /usr/share/ca-certificates/1984.pem
	I0310 20:25:01.890842    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\4588.pem -> /usr/share/ca-certificates/4588.pem
	I0310 20:25:01.891032    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\2512.pem -> /usr/share/ca-certificates/2512.pem
	I0310 20:25:01.891262    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\9520.pem -> /usr/share/ca-certificates/9520.pem
	I0310 20:25:01.891678    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\6856.pem -> /usr/share/ca-certificates/6856.pem
	I0310 20:25:01.891908    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\6552.pem -> /usr/share/ca-certificates/6552.pem
	I0310 20:25:01.891908    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\4052.pem -> /usr/share/ca-certificates/4052.pem
	I0310 20:25:01.892138    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\7452.pem -> /usr/share/ca-certificates/7452.pem
	I0310 20:25:01.892338    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\5700.pem -> /usr/share/ca-certificates/5700.pem
	I0310 20:25:01.892338    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\3516.pem -> /usr/share/ca-certificates/3516.pem
	I0310 20:25:01.892727    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\5040.pem -> /usr/share/ca-certificates/5040.pem
	I0310 20:25:01.893002    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\5396.pem -> /usr/share/ca-certificates/5396.pem
	I0310 20:25:01.893214    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\6492.pem -> /usr/share/ca-certificates/6492.pem
	I0310 20:25:01.893214    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\9088.pem -> /usr/share/ca-certificates/9088.pem
	I0310 20:25:01.893493    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\5172.pem -> /usr/share/ca-certificates/5172.pem
	I0310 20:25:01.893836    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\5372.pem -> /usr/share/ca-certificates/5372.pem
	I0310 20:25:01.894166    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\7024.pem -> /usr/share/ca-certificates/7024.pem
	I0310 20:25:01.894166    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\3056.pem -> /usr/share/ca-certificates/3056.pem
	I0310 20:25:01.894166    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\5736.pem -> /usr/share/ca-certificates/5736.pem
	I0310 20:25:01.894166    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\12056.pem -> /usr/share/ca-certificates/12056.pem
	I0310 20:25:01.894877    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\10992.pem -> /usr/share/ca-certificates/10992.pem
	I0310 20:25:01.895291    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\1140.pem -> /usr/share/ca-certificates/1140.pem
	I0310 20:25:01.895430    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\8464.pem -> /usr/share/ca-certificates/8464.pem
	I0310 20:25:01.895430    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\352.pem -> /usr/share/ca-certificates/352.pem
	I0310 20:25:01.895430    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\6496.pem -> /usr/share/ca-certificates/6496.pem
	I0310 20:25:01.898328    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\docker-flags-20210310201637-6496\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0310 20:25:02.211524    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\docker-flags-20210310201637-6496\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0310 20:25:02.418291    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\docker-flags-20210310201637-6496\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0310 20:25:02.664721    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\docker-flags-20210310201637-6496\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0310 20:25:02.955612    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0310 20:25:03.207109    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0310 20:25:03.388475    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0310 20:25:03.580323    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0310 20:25:03.856532    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7432.pem --> /usr/share/ca-certificates/7432.pem (1338 bytes)
	I0310 20:25:04.128472    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\800.pem --> /usr/share/ca-certificates/800.pem (1338 bytes)
	I0310 20:25:04.437884    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7440.pem --> /usr/share/ca-certificates/7440.pem (1338 bytes)
	I0310 20:25:04.786464    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1156.pem --> /usr/share/ca-certificates/1156.pem (1338 bytes)
	I0310 20:25:05.064503    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7160.pem --> /usr/share/ca-certificates/7160.pem (1338 bytes)
	I0310 20:25:05.348311    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1476.pem --> /usr/share/ca-certificates/1476.pem (1338 bytes)
	I0310 20:25:05.650651    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3920.pem --> /usr/share/ca-certificates/3920.pem (1338 bytes)
	I0310 20:25:05.805687    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4452.pem --> /usr/share/ca-certificates/4452.pem (1338 bytes)
	I0310 20:25:06.150730    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1728.pem --> /usr/share/ca-certificates/1728.pem (1338 bytes)
	I0310 20:25:06.491746    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4944.pem --> /usr/share/ca-certificates/4944.pem (1338 bytes)
	I0310 20:25:06.649148    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0310 20:25:06.931642    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8748.pem --> /usr/share/ca-certificates/8748.pem (1338 bytes)
	I0310 20:25:07.277718    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\232.pem --> /usr/share/ca-certificates/232.pem (1338 bytes)
	I0310 20:25:07.628307    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6368.pem --> /usr/share/ca-certificates/6368.pem (1338 bytes)
	I0310 20:25:08.003926    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6692.pem --> /usr/share/ca-certificates/6692.pem (1338 bytes)
	I0310 20:25:08.320262    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1984.pem --> /usr/share/ca-certificates/1984.pem (1338 bytes)
	I0310 20:25:08.701014    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4588.pem --> /usr/share/ca-certificates/4588.pem (1338 bytes)
	I0310 20:25:08.946808    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\2512.pem --> /usr/share/ca-certificates/2512.pem (1338 bytes)
	I0310 20:25:09.195865    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9520.pem --> /usr/share/ca-certificates/9520.pem (1338 bytes)
	I0310 20:25:09.559884    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6856.pem --> /usr/share/ca-certificates/6856.pem (1338 bytes)
	I0310 20:25:09.814666    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6552.pem --> /usr/share/ca-certificates/6552.pem (1338 bytes)
	I0310 20:25:10.127261    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4052.pem --> /usr/share/ca-certificates/4052.pem (1338 bytes)
	I0310 20:25:10.369878    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	I0310 20:25:10.660928    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5700.pem --> /usr/share/ca-certificates/5700.pem (1338 bytes)
	I0310 20:25:10.914371    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3516.pem --> /usr/share/ca-certificates/3516.pem (1338 bytes)
	I0310 20:25:11.140775    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5040.pem --> /usr/share/ca-certificates/5040.pem (1338 bytes)
	I0310 20:25:11.471385    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5396.pem --> /usr/share/ca-certificates/5396.pem (1338 bytes)
	I0310 20:25:11.750434    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6492.pem --> /usr/share/ca-certificates/6492.pem (1338 bytes)
	I0310 20:25:11.972534    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9088.pem --> /usr/share/ca-certificates/9088.pem (1338 bytes)
	I0310 20:25:12.356095    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5172.pem --> /usr/share/ca-certificates/5172.pem (1338 bytes)
	I0310 20:25:12.625231    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5372.pem --> /usr/share/ca-certificates/5372.pem (1338 bytes)
	I0310 20:25:12.949439    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7024.pem --> /usr/share/ca-certificates/7024.pem (1338 bytes)
	I0310 20:25:13.183412    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3056.pem --> /usr/share/ca-certificates/3056.pem (1338 bytes)
	I0310 20:25:13.351605    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5736.pem --> /usr/share/ca-certificates/5736.pem (1338 bytes)
	I0310 20:25:13.594120    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\12056.pem --> /usr/share/ca-certificates/12056.pem (1338 bytes)
	I0310 20:25:13.869515    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\10992.pem --> /usr/share/ca-certificates/10992.pem (1338 bytes)
	I0310 20:25:14.144791    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1140.pem --> /usr/share/ca-certificates/1140.pem (1338 bytes)
	I0310 20:25:14.282034    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8464.pem --> /usr/share/ca-certificates/8464.pem (1338 bytes)
	I0310 20:25:14.563964    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\352.pem --> /usr/share/ca-certificates/352.pem (1338 bytes)
	I0310 20:25:14.836773    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6496.pem --> /usr/share/ca-certificates/6496.pem (1338 bytes)
	I0310 20:25:15.208398    7164 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0310 20:25:15.487385    7164 ssh_runner.go:149] Run: openssl version
	I0310 20:25:15.573803    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8464.pem && ln -fs /usr/share/ca-certificates/8464.pem /etc/ssl/certs/8464.pem"
	I0310 20:25:15.719298    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8464.pem
	I0310 20:25:15.751518    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:32 /usr/share/ca-certificates/8464.pem
	I0310 20:25:15.765986    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8464.pem
	I0310 20:25:15.834355    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8464.pem /etc/ssl/certs/51391683.0"
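The repeating five-line cycle above (test/link, ls, hash, symlink) is minikube installing each cert into the OpenSSL trust directory, where CAs are looked up via symlinks named `<subject-hash>.0`. A minimal stand-alone sketch of that pattern, using a throwaway self-signed cert and `/tmp` paths in place of the log's `/usr/share/ca-certificates` and `/etc/ssl/certs`:

```shell
# Generate a throwaway self-signed cert (stand-in for the .minikube certs).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout /tmp/demo.key -out /tmp/demo.pem -days 1 2>/dev/null
# OpenSSL resolves trusted CAs by subject-hash symlinks named <hash>.0,
# which is what the "openssl x509 -hash" + "ln -fs" pair in the log builds.
hash=$(openssl x509 -hash -noout -in /tmp/demo.pem)
ln -fs /tmp/demo.pem "/tmp/${hash}.0"
ls -l "/tmp/${hash}.0"
```

This mirrors what `c_rehash` (or `openssl rehash`) does for a whole directory at once.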
	I0310 20:25:15.979961    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/352.pem && ln -fs /usr/share/ca-certificates/352.pem /etc/ssl/certs/352.pem"
	I0310 20:25:16.118685    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/352.pem
	I0310 20:25:16.179313    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 12 14:51 /usr/share/ca-certificates/352.pem
	I0310 20:25:16.188831    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/352.pem
	I0310 20:25:16.262260    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/352.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:16.419576    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6496.pem && ln -fs /usr/share/ca-certificates/6496.pem /etc/ssl/certs/6496.pem"
	I0310 20:25:16.648772    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6496.pem
	I0310 20:25:16.710412    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 19:16 /usr/share/ca-certificates/6496.pem
	I0310 20:25:16.733542    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6496.pem
	I0310 20:25:16.846284    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6496.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:17.059972    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7432.pem && ln -fs /usr/share/ca-certificates/7432.pem /etc/ssl/certs/7432.pem"
	I0310 20:25:17.215952    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7432.pem
	I0310 20:25:17.266586    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 17:58 /usr/share/ca-certificates/7432.pem
	I0310 20:25:17.274805    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7432.pem
	I0310 20:25:17.390810    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7432.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:17.473548    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/800.pem && ln -fs /usr/share/ca-certificates/800.pem /etc/ssl/certs/800.pem"
	I0310 20:25:17.606960    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/800.pem
	I0310 20:25:17.673109    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 24 01:48 /usr/share/ca-certificates/800.pem
	I0310 20:25:17.692587    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/800.pem
	I0310 20:25:17.780675    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/800.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:17.862301    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3920.pem && ln -fs /usr/share/ca-certificates/3920.pem /etc/ssl/certs/3920.pem"
	I0310 20:25:17.971489    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3920.pem
	I0310 20:25:18.040777    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 22:06 /usr/share/ca-certificates/3920.pem
	I0310 20:25:18.067136    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3920.pem
	I0310 20:25:18.132722    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3920.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:18.361877    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4452.pem && ln -fs /usr/share/ca-certificates/4452.pem /etc/ssl/certs/4452.pem"
	I0310 20:25:18.483585    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4452.pem
	I0310 20:25:18.544492    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 22 23:48 /usr/share/ca-certificates/4452.pem
	I0310 20:25:18.550837    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4452.pem
	I0310 20:25:18.630213    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4452.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:18.749451    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1728.pem && ln -fs /usr/share/ca-certificates/1728.pem /etc/ssl/certs/1728.pem"
	I0310 20:25:18.905846    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1728.pem
	I0310 20:25:18.967463    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 20:57 /usr/share/ca-certificates/1728.pem
	I0310 20:25:18.980086    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1728.pem
	I0310 20:25:19.050617    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:19.149232    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4944.pem && ln -fs /usr/share/ca-certificates/4944.pem /etc/ssl/certs/4944.pem"
	I0310 20:25:19.347823    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4944.pem
	I0310 20:25:19.405784    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  9 23:40 /usr/share/ca-certificates/4944.pem
	I0310 20:25:19.420419    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4944.pem
	I0310 20:25:19.509676    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4944.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:19.599161    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7440.pem && ln -fs /usr/share/ca-certificates/7440.pem /etc/ssl/certs/7440.pem"
	I0310 20:25:19.802875    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7440.pem
	I0310 20:25:19.839772    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 13 14:39 /usr/share/ca-certificates/7440.pem
	I0310 20:25:19.842670    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7440.pem
	I0310 20:25:19.931912    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7440.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:20.035947    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1156.pem && ln -fs /usr/share/ca-certificates/1156.pem /etc/ssl/certs/1156.pem"
	I0310 20:25:20.162711    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1156.pem
	I0310 20:25:20.201799    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 00:26 /usr/share/ca-certificates/1156.pem
	I0310 20:25:20.222936    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1156.pem
	I0310 20:25:20.280547    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1156.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:20.408936    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7160.pem && ln -fs /usr/share/ca-certificates/7160.pem /etc/ssl/certs/7160.pem"
	I0310 20:25:20.537734    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7160.pem
	I0310 20:25:20.567515    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 12 04:51 /usr/share/ca-certificates/7160.pem
	I0310 20:25:20.570029    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7160.pem
	I0310 20:25:20.659468    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7160.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:20.817800    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1476.pem && ln -fs /usr/share/ca-certificates/1476.pem /etc/ssl/certs/1476.pem"
	I0310 20:25:20.995501    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1476.pem
	I0310 20:25:21.048427    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:50 /usr/share/ca-certificates/1476.pem
	I0310 20:25:21.056724    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1476.pem
	I0310 20:25:21.120577    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1476.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:21.304848    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0310 20:25:21.498812    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0310 20:25:21.614855    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Jan  5 23:33 /usr/share/ca-certificates/minikubeCA.pem
	I0310 20:25:21.631852    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0310 20:25:21.760859    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0310 20:25:21.841442    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8748.pem && ln -fs /usr/share/ca-certificates/8748.pem /etc/ssl/certs/8748.pem"
	I0310 20:25:21.945648    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8748.pem
	I0310 20:25:21.979207    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 19:09 /usr/share/ca-certificates/8748.pem
	I0310 20:25:21.997693    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8748.pem
	I0310 20:25:22.083745    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8748.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:22.240199    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/232.pem && ln -fs /usr/share/ca-certificates/232.pem /etc/ssl/certs/232.pem"
	I0310 20:25:22.405306    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/232.pem
	I0310 20:25:22.438524    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 28 02:13 /usr/share/ca-certificates/232.pem
	I0310 20:25:22.452603    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/232.pem
	I0310 20:25:22.564652    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/232.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:22.649297    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6368.pem && ln -fs /usr/share/ca-certificates/6368.pem /etc/ssl/certs/6368.pem"
	I0310 20:25:22.778435    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6368.pem
	I0310 20:25:22.837867    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 19:11 /usr/share/ca-certificates/6368.pem
	I0310 20:25:22.862925    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6368.pem
	I0310 20:25:22.944572    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6368.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:23.136377    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1984.pem && ln -fs /usr/share/ca-certificates/1984.pem /etc/ssl/certs/1984.pem"
	I0310 20:25:23.263042    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1984.pem
	I0310 20:25:23.315936    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 21:55 /usr/share/ca-certificates/1984.pem
	I0310 20:25:23.329514    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1984.pem
	I0310 20:25:23.493884    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1984.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:23.608199    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4588.pem && ln -fs /usr/share/ca-certificates/4588.pem /etc/ssl/certs/4588.pem"
	I0310 20:25:23.719764    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4588.pem
	I0310 20:25:23.794006    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  3 21:41 /usr/share/ca-certificates/4588.pem
	I0310 20:25:23.816501    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4588.pem
	I0310 20:25:23.954815    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4588.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:24.126849    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2512.pem && ln -fs /usr/share/ca-certificates/2512.pem /etc/ssl/certs/2512.pem"
	I0310 20:25:24.240412    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2512.pem
	I0310 20:25:24.273473    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:32 /usr/share/ca-certificates/2512.pem
	I0310 20:25:24.294643    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2512.pem
	I0310 20:25:24.387218    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2512.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:24.498353    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9520.pem && ln -fs /usr/share/ca-certificates/9520.pem /etc/ssl/certs/9520.pem"
	I0310 20:25:24.656063    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9520.pem
	I0310 20:25:24.688645    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 14:54 /usr/share/ca-certificates/9520.pem
	I0310 20:25:24.725172    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9520.pem
	I0310 20:25:24.809083    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9520.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:24.916223    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6692.pem && ln -fs /usr/share/ca-certificates/6692.pem /etc/ssl/certs/6692.pem"
	I0310 20:25:25.056886    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6692.pem
	I0310 20:25:25.113699    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 20:42 /usr/share/ca-certificates/6692.pem
	I0310 20:25:25.137299    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6692.pem
	I0310 20:25:25.317636    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6692.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:25.459563    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5700.pem && ln -fs /usr/share/ca-certificates/5700.pem /etc/ssl/certs/5700.pem"
	I0310 20:25:25.589738    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5700.pem
	I0310 20:25:25.631395    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  1 19:58 /usr/share/ca-certificates/5700.pem
	I0310 20:25:25.659359    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5700.pem
	I0310 20:25:25.710165    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5700.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:25.832905    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3516.pem && ln -fs /usr/share/ca-certificates/3516.pem /etc/ssl/certs/3516.pem"
	I0310 20:25:25.956666    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3516.pem
	I0310 20:25:26.008423    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 19:10 /usr/share/ca-certificates/3516.pem
	I0310 20:25:26.026021    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3516.pem
	I0310 20:25:26.127088    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3516.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:26.228751    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5040.pem && ln -fs /usr/share/ca-certificates/5040.pem /etc/ssl/certs/5040.pem"
	I0310 20:25:26.456104    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5040.pem
	I0310 20:25:26.521265    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 08:36 /usr/share/ca-certificates/5040.pem
	I0310 20:25:26.542501    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5040.pem
	I0310 20:25:26.656439    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5040.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:26.876884    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5396.pem && ln -fs /usr/share/ca-certificates/5396.pem /etc/ssl/certs/5396.pem"
	I0310 20:25:26.968213    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5396.pem
	I0310 20:25:27.003443    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  8 23:38 /usr/share/ca-certificates/5396.pem
	I0310 20:25:27.016233    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5396.pem
	I0310 20:25:27.126137    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5396.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:27.355416    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6856.pem && ln -fs /usr/share/ca-certificates/6856.pem /etc/ssl/certs/6856.pem"
	I0310 20:25:27.549868    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6856.pem
	I0310 20:25:27.612036    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 00:21 /usr/share/ca-certificates/6856.pem
	I0310 20:25:27.627646    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6856.pem
	I0310 20:25:27.737149    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6856.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:27.881792    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6552.pem && ln -fs /usr/share/ca-certificates/6552.pem /etc/ssl/certs/6552.pem"
	I0310 20:25:28.021388    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6552.pem
	I0310 20:25:28.063035    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 19 22:08 /usr/share/ca-certificates/6552.pem
	I0310 20:25:28.074890    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6552.pem
	I0310 20:25:28.140084    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6552.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:28.283566    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4052.pem && ln -fs /usr/share/ca-certificates/4052.pem /etc/ssl/certs/4052.pem"
	I0310 20:25:28.394676    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4052.pem
	I0310 20:25:28.460906    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 18:40 /usr/share/ca-certificates/4052.pem
	I0310 20:25:28.474487    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4052.pem
	I0310 20:25:28.547908    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4052.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:28.678169    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7452.pem && ln -fs /usr/share/ca-certificates/7452.pem /etc/ssl/certs/7452.pem"
	I0310 20:25:28.805663    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7452.pem
	I0310 20:25:28.843624    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 20 00:41 /usr/share/ca-certificates/7452.pem
	I0310 20:25:28.857620    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7452.pem
	I0310 20:25:28.990799    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7452.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:29.149113    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6492.pem && ln -fs /usr/share/ca-certificates/6492.pem /etc/ssl/certs/6492.pem"
	I0310 20:25:29.259092    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6492.pem
	I0310 20:25:29.320405    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 01:11 /usr/share/ca-certificates/6492.pem
	I0310 20:25:29.329653    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6492.pem
	I0310 20:25:29.439956    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6492.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:29.527254    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7024.pem && ln -fs /usr/share/ca-certificates/7024.pem /etc/ssl/certs/7024.pem"
	I0310 20:25:29.774136    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7024.pem
	I0310 20:25:29.843184    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 23:11 /usr/share/ca-certificates/7024.pem
	I0310 20:25:29.855615    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7024.pem
	I0310 20:25:29.939376    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7024.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:30.063214    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3056.pem && ln -fs /usr/share/ca-certificates/3056.pem /etc/ssl/certs/3056.pem"
	I0310 20:25:30.246856    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3056.pem
	I0310 20:25:30.319607    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:11 /usr/share/ca-certificates/3056.pem
	I0310 20:25:30.319607    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3056.pem
	I0310 20:25:30.414829    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3056.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:30.536117    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5736.pem && ln -fs /usr/share/ca-certificates/5736.pem /etc/ssl/certs/5736.pem"
	I0310 20:25:30.670234    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5736.pem
	I0310 20:25:30.714802    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 25 23:18 /usr/share/ca-certificates/5736.pem
	I0310 20:25:30.741435    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5736.pem
	I0310 20:25:30.816447    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5736.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:30.932643    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12056.pem && ln -fs /usr/share/ca-certificates/12056.pem /etc/ssl/certs/12056.pem"
	I0310 20:25:31.038535    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/12056.pem
	I0310 20:25:31.091490    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  6 07:21 /usr/share/ca-certificates/12056.pem
	I0310 20:25:31.111218    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12056.pem
	I0310 20:25:31.166544    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12056.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:31.305152    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9088.pem && ln -fs /usr/share/ca-certificates/9088.pem /etc/ssl/certs/9088.pem"
	I0310 20:25:31.469210    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9088.pem
	I0310 20:25:31.636324    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 00:22 /usr/share/ca-certificates/9088.pem
	I0310 20:25:31.674918    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9088.pem
	I0310 20:25:31.790965    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9088.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:31.907471    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5172.pem && ln -fs /usr/share/ca-certificates/5172.pem /etc/ssl/certs/5172.pem"
	I0310 20:25:32.170202    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5172.pem
	I0310 20:25:32.196674    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 26 21:25 /usr/share/ca-certificates/5172.pem
	I0310 20:25:32.215969    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5172.pem
	I0310 20:25:32.280245    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5172.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:32.398842    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5372.pem && ln -fs /usr/share/ca-certificates/5372.pem /etc/ssl/certs/5372.pem"
	I0310 20:25:32.682681    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5372.pem
	I0310 20:25:32.741571    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 23 00:40 /usr/share/ca-certificates/5372.pem
	I0310 20:25:32.755522    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5372.pem
	I0310 20:25:32.847552    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5372.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:32.965743    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10992.pem && ln -fs /usr/share/ca-certificates/10992.pem /etc/ssl/certs/10992.pem"
	I0310 20:25:33.111293    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/10992.pem
	I0310 20:25:33.144554    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 21:44 /usr/share/ca-certificates/10992.pem
	I0310 20:25:33.155236    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10992.pem
	I0310 20:25:33.204824    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10992.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:33.364875    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1140.pem && ln -fs /usr/share/ca-certificates/1140.pem /etc/ssl/certs/1140.pem"
	I0310 20:25:33.521759    7164 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1140.pem
	I0310 20:25:33.559881    7164 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 02:25 /usr/share/ca-certificates/1140.pem
	I0310 20:25:33.572039    7164 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1140.pem
	I0310 20:25:33.632689    7164 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1140.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:33.754073    7164 kubeadm.go:385] StartCluster: {Name:docker-flags-20210310201637-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:docker-flags-20210310201637-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.7 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 20:25:33.764064    7164 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0310 20:25:34.602404    7164 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0310 20:25:34.675992    7164 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0310 20:25:34.754549    7164 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	I0310 20:25:34.768316    7164 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0310 20:25:34.846831    7164 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0310 20:25:34.846831    7164 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0310 20:25:44.085859    7164 out.go:150]   - Generating certificates and keys ...
	I0310 20:26:10.154407    7164 out.go:150]   - Booting up control plane ...
	I0310 20:29:22.717222    7164 out.go:150]   - Configuring RBAC rules ...
	I0310 20:30:51.238585    7164 cni.go:74] Creating CNI manager for ""
	I0310 20:30:51.239178    7164 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	I0310 20:30:51.239494    7164 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0310 20:30:51.259784    7164 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.18.1 minikube.k8s.io/commit=4d52d607da107e3e4541e40b81376520ee87d4c2 minikube.k8s.io/name=docker-flags-20210310201637-6496 minikube.k8s.io/updated_at=2021_03_10T20_30_51_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0310 20:30:51.262612    7164 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0310 20:31:01.164730    7164 ssh_runner.go:189] Completed: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": (9.9252604s)
	I0310 20:31:01.164991    7164 ops.go:34] apiserver oom_adj: -16
	I0310 20:31:43.512863    7164 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.18.1 minikube.k8s.io/commit=4d52d607da107e3e4541e40b81376520ee87d4c2 minikube.k8s.io/name=docker-flags-20210310201637-6496 minikube.k8s.io/updated_at=2021_03_10T20_30_51_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (52.2532033s)
	I0310 20:31:43.525351    7164 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (52.262864s)
	I0310 20:31:43.525351    7164 kubeadm.go:995] duration metric: took 52.2859813s to wait for elevateKubeSystemPrivileges.
	I0310 20:31:43.525638    7164 kubeadm.go:387] StartCluster complete in 6m9.7728696s
	I0310 20:31:43.525638    7164 settings.go:142] acquiring lock: {Name:mk153ab5d002fd4991700e22f3eda9a43ee295f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:31:43.526037    7164 settings.go:150] Updating kubeconfig:  C:\Users\jenkins/.kube/config
	I0310 20:31:43.533300    7164 lock.go:36] WriteFile acquiring C:\Users\jenkins/.kube/config: {Name:mk815881434fcb75cb1a8bc7f2c24cf067660bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:31:43.547429    7164 kapi.go:59] client config for docker-flags-20210310201637-6496: &rest.Config{Host:"https://127.0.0.1:55103", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins\\.minikube\\profiles\\docker-flags-20210310201637-6496\\client.crt", KeyFile:"C:\\Users\\jenkins\\.minikube\\profiles\\docker-flags-20210310201637-6496\\client.key", CAFile:"C:\\Users\\jenkins\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2611020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)}
	E0310 20:31:48.248040    7164 start.go:131] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: deployment rescale: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0310 20:31:48.248451    7164 start.go:203] Will wait 6m0s for node up to 
	I0310 20:31:48.252244    7164 out.go:129] * Verifying Kubernetes components...
	I0310 20:31:48.250955    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210112045103-7160 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160
	I0310 20:31:48.250955    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210303214129-4588 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588
	I0310 20:31:48.250955    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310191609-6496 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496
	I0310 20:31:48.250955    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106002159-6856 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856
	I0310 20:31:48.250955    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106011107-6492 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492
	I0310 20:31:48.250955    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120175851-7432 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432
	I0310 20:31:48.250955    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210123004019-5372 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372
	I0310 20:31:48.250955    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210309234032-4944 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944
	I0310 20:31:48.250955    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210220004129-7452 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452
	I0310 20:31:48.250955    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304184021-4052 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052
	I0310 20:31:48.250955    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210114204234-6692 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692
	I0310 20:31:48.250955    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120022529-1140 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140
	I0310 20:31:48.250955    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210119220838-6552 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552
	I0310 20:31:48.250955    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210128021318-232 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232
	I0310 20:31:48.250955    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310083645-5040 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040
	I0310 20:31:48.250955    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210213143925-7440 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440
	I0310 20:31:48.250955    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219220622-3920 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920
	I0310 20:31:48.250955    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210224014800-800 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800
	I0310 20:31:48.250955    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210225231842-5736 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736
	I0310 20:31:48.250955    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304002630-1156 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156
	I0310 20:31:48.250955    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210105233232-2512 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512
	I0310 20:31:48.250955    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106215525-1984 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984
	I0310 20:31:48.250955    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210212145109-352 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352
	I0310 20:31:48.250955    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120214442-10992 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992
	I0310 20:31:48.250955    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120231122-7024 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024
	I0310 20:31:48.250955    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115023213-8464 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464
	I0310 20:31:48.250955    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210126212539-5172 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172
	I0310 20:31:48.250955    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115191024-3516 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516
	I0310 20:31:48.250955    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107002220-9088 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088
	I0310 20:31:48.250955    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219145454-9520 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520
	I0310 20:31:48.250955    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210301195830-5700 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700
	I0310 20:31:48.250955    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107190945-8748 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748
	I0310 20:31:48.250955    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210308233820-5396 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396
	I0310 20:31:48.251324    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210306072141-12056 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056
	I0310 20:31:48.828370    7164 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0310 20:31:49.342851    7164 cache.go:93] acquiring lock: {Name:mk6e311fb193a5d30b249afa7255673dd7fc56b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:49.343147    7164 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 exists
	I0310 20:31:49.344153    7164 cache.go:82] cache image "minikube-local-cache-test:functional-20210107002220-9088" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210107002220-9088" took 1.0910729s
	I0310 20:31:49.345040    7164 cache.go:66] save to tar file minikube-local-cache-test:functional-20210107002220-9088 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 succeeded
	I0310 20:31:49.351189    7164 cache.go:93] acquiring lock: {Name:mk5795abf13cc8b7192a417aee0e32dee2b0467c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:49.351596    7164 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 exists
	I0310 20:31:49.352431    7164 cache.go:82] cache image "minikube-local-cache-test:functional-20210126212539-5172" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210126212539-5172" took 1.0986138s
	I0310 20:31:49.352431    7164 cache.go:66] save to tar file minikube-local-cache-test:functional-20210126212539-5172 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 succeeded
	I0310 20:31:49.353390    7164 cache.go:93] acquiring lock: {Name:mk5de4935501776b790bd29801e913c817cce9cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:49.353390    7164 cache.go:93] acquiring lock: {Name:mk30e0addf8d941e729fce2e9e6e58f4831fa9bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:49.356180    7164 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 exists
	I0310 20:31:49.356906    7164 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 exists
	I0310 20:31:49.359122    7164 cache.go:82] cache image "minikube-local-cache-test:functional-20210123004019-5372" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210123004019-5372" took 1.1064007s
	I0310 20:31:49.359400    7164 cache.go:66] save to tar file minikube-local-cache-test:functional-20210123004019-5372 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 succeeded
	I0310 20:31:49.359596    7164 cache.go:82] cache image "minikube-local-cache-test:functional-20210115023213-8464" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210115023213-8464" took 1.1061729s
	I0310 20:31:49.359596    7164 cache.go:66] save to tar file minikube-local-cache-test:functional-20210115023213-8464 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 succeeded
	I0310 20:31:49.686727    7164 cache.go:93] acquiring lock: {Name:mk74beba772a17b6c0792b37e1f3c84b8ae19a48 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:49.687925    7164 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 exists
	I0310 20:31:49.689293    7164 cache.go:82] cache image "minikube-local-cache-test:functional-20210119220838-6552" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210119220838-6552" took 1.4358714s
	I0310 20:31:49.689293    7164 cache.go:66] save to tar file minikube-local-cache-test:functional-20210119220838-6552 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 succeeded
	I0310 20:31:49.695313    7164 cache.go:93] acquiring lock: {Name:mkad0f7b57f74c6c730129cb06800211b2e1dbab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:49.696104    7164 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 exists
	I0310 20:31:49.696275    7164 cache.go:82] cache image "minikube-local-cache-test:functional-20210120022529-1140" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120022529-1140" took 1.4419016s
	I0310 20:31:49.696700    7164 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120022529-1140 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 succeeded
	I0310 20:31:49.721103    7164 cache.go:93] acquiring lock: {Name:mkb0cb73f942a657cd3f168830d30cb3598567a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:49.722257    7164 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 exists
	I0310 20:31:49.722257    7164 cache.go:82] cache image "minikube-local-cache-test:functional-20210306072141-12056" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210306072141-12056" took 1.4695364s
	I0310 20:31:49.722614    7164 cache.go:66] save to tar file minikube-local-cache-test:functional-20210306072141-12056 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 succeeded
	I0310 20:31:49.751177    7164 cache.go:93] acquiring lock: {Name:mk0c64ba734a0cdbeae55b08bb0b1b6723a680c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:49.751898    7164 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 exists
	I0310 20:31:49.752384    7164 cache.go:82] cache image "minikube-local-cache-test:functional-20210310083645-5040" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210310083645-5040" took 1.4993051s
	I0310 20:31:49.752601    7164 cache.go:66] save to tar file minikube-local-cache-test:functional-20210310083645-5040 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 succeeded
	I0310 20:31:49.810759    7164 cache.go:93] acquiring lock: {Name:mkfbc537176e4a7054a8ff78a35c4c45ad4889d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:49.810759    7164 cache.go:93] acquiring lock: {Name:mk413751f23d1919a2f2162501025c6af3a2ad81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:49.811809    7164 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 exists
	I0310 20:31:49.811809    7164 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 exists
	I0310 20:31:49.813089    7164 cache.go:93] acquiring lock: {Name:mk9829358ec5b615719a34ef2b4c8c5314131bbf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:49.813089    7164 cache.go:93] acquiring lock: {Name:mk17b3617b8bc7c68f0fe3347037485ee44000e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:49.813536    7164 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 exists
	I0310 20:31:49.813755    7164 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 exists
	I0310 20:31:49.814008    7164 cache.go:82] cache image "minikube-local-cache-test:functional-20210225231842-5736" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210225231842-5736" took 1.5449608s
	I0310 20:31:49.814008    7164 cache.go:66] save to tar file minikube-local-cache-test:functional-20210225231842-5736 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 succeeded
	I0310 20:31:49.814008    7164 cache.go:82] cache image "minikube-local-cache-test:functional-20210310191609-6496" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210310191609-6496" took 1.5601917s
	I0310 20:31:49.814008    7164 cache.go:66] save to tar file minikube-local-cache-test:functional-20210310191609-6496 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 succeeded
	I0310 20:31:49.814735    7164 cache.go:82] cache image "minikube-local-cache-test:functional-20210106002159-6856" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106002159-6856" took 1.5584903s
	I0310 20:31:49.814998    7164 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106002159-6856 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 succeeded
	I0310 20:31:49.815621    7164 cache.go:82] cache image "minikube-local-cache-test:functional-20210309234032-4944" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210309234032-4944" took 1.5555335s
	I0310 20:31:49.815621    7164 cache.go:66] save to tar file minikube-local-cache-test:functional-20210309234032-4944 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 succeeded
	I0310 20:31:49.821262    7164 cache.go:93] acquiring lock: {Name:mkbc5485bf0e792523a58cf470a7622695547966 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:49.821949    7164 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 exists
	I0310 20:31:49.822376    7164 cache.go:82] cache image "minikube-local-cache-test:functional-20210304184021-4052" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210304184021-4052" took 1.5685601s
	I0310 20:31:49.822376    7164 cache.go:66] save to tar file minikube-local-cache-test:functional-20210304184021-4052 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 succeeded
	I0310 20:31:49.823079    7164 cache.go:93] acquiring lock: {Name:mkd8dd26dee4471c50a16459e3e56a843fbe7183 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:49.823535    7164 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 exists
	I0310 20:31:49.823994    7164 cache.go:82] cache image "minikube-local-cache-test:functional-20210120231122-7024" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120231122-7024" took 1.5671671s
	I0310 20:31:49.823994    7164 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120231122-7024 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 succeeded
	I0310 20:31:49.826926    7164 cache.go:93] acquiring lock: {Name:mkf74fc1bdd437dc31195924ffc024252ed6282c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:49.826926    7164 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 exists
	I0310 20:31:49.833883    7164 cache.go:82] cache image "minikube-local-cache-test:functional-20210304002630-1156" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210304002630-1156" took 1.5730024s
	I0310 20:31:49.833883    7164 cache.go:66] save to tar file minikube-local-cache-test:functional-20210304002630-1156 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 succeeded
	I0310 20:31:49.842214    7164 cache.go:93] acquiring lock: {Name:mk6a939d4adc5b1a82c643cd3a34748a52c3e47d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:49.842737    7164 cache.go:93] acquiring lock: {Name:mk84b2a6095b735cf889c519b5874f080b2e195a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:49.843597    7164 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 exists
	I0310 20:31:49.843861    7164 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 exists
	I0310 20:31:49.843861    7164 cache.go:82] cache image "minikube-local-cache-test:functional-20210112045103-7160" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210112045103-7160" took 1.5885281s
	I0310 20:31:49.843861    7164 cache.go:66] save to tar file minikube-local-cache-test:functional-20210112045103-7160 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 succeeded
	I0310 20:31:49.844043    7164 cache.go:82] cache image "minikube-local-cache-test:functional-20210219220622-3920" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210219220622-3920" took 1.5831634s
	I0310 20:31:49.844274    7164 cache.go:66] save to tar file minikube-local-cache-test:functional-20210219220622-3920 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 succeeded
	I0310 20:31:49.954392    7164 cache.go:93] acquiring lock: {Name:mkb552f0ca2d9ea9965feba56885295e4020632a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:49.955202    7164 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 exists
	I0310 20:31:49.955557    7164 cache.go:82] cache image "minikube-local-cache-test:functional-20210106011107-6492" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106011107-6492" took 1.6970277s
	I0310 20:31:49.955744    7164 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106011107-6492 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 succeeded
	I0310 20:31:49.963278    7164 cache.go:93] acquiring lock: {Name:mkd8c6f272dd5cb91af2d272705820baa75c5410 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:49.963845    7164 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 exists
	I0310 20:31:49.964156    7164 cache.go:82] cache image "minikube-local-cache-test:functional-20210120214442-10992" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120214442-10992" took 1.7110773s
	I0310 20:31:49.964156    7164 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120214442-10992 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 succeeded
	I0310 20:31:49.964156    7164 cache.go:93] acquiring lock: {Name:mkc9a1c11079e53fedb3439203deb8305be63b2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:49.964156    7164 cache.go:93] acquiring lock: {Name:mkab31196e3bf71b9c1e6a1e38e57ec6fb030bbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:49.965020    7164 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 exists
	I0310 20:31:49.965223    7164 cache.go:82] cache image "minikube-local-cache-test:functional-20210220004129-7452" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210220004129-7452" took 1.7114075s
	I0310 20:31:49.965445    7164 cache.go:66] save to tar file minikube-local-cache-test:functional-20210220004129-7452 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 succeeded
	I0310 20:31:49.965445    7164 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 exists
	I0310 20:31:49.966222    7164 cache.go:82] cache image "minikube-local-cache-test:functional-20210303214129-4588" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210303214129-4588" took 1.7064175s
	I0310 20:31:49.966222    7164 cache.go:66] save to tar file minikube-local-cache-test:functional-20210303214129-4588 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 succeeded
	I0310 20:31:49.976817    7164 cache.go:93] acquiring lock: {Name:mka2d29141752ca0c15ce625b99d3e259a454634 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:49.976817    7164 cache.go:93] acquiring lock: {Name:mkfe8ccab311cf6d2666a7508a8e979857b9770b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:49.979335    7164 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 exists
	I0310 20:31:49.979869    7164 cache.go:82] cache image "minikube-local-cache-test:functional-20210219145454-9520" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210219145454-9520" took 1.7238306s
	I0310 20:31:49.979869    7164 cache.go:66] save to tar file minikube-local-cache-test:functional-20210219145454-9520 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 succeeded
	I0310 20:31:49.986319    7164 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 exists
	I0310 20:31:49.987157    7164 cache.go:82] cache image "minikube-local-cache-test:functional-20210105233232-2512" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210105233232-2512" took 1.7196467s
	I0310 20:31:49.987157    7164 cache.go:66] save to tar file minikube-local-cache-test:functional-20210105233232-2512 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 succeeded
	I0310 20:31:49.995681    7164 cache.go:93] acquiring lock: {Name:mk3b31b5d9c66e58bae5a84d594af5a71c06fef6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:49.996322    7164 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 exists
	I0310 20:31:49.996787    7164 cache.go:82] cache image "minikube-local-cache-test:functional-20210114204234-6692" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210114204234-6692" took 1.743222s
	I0310 20:31:49.996787    7164 cache.go:66] save to tar file minikube-local-cache-test:functional-20210114204234-6692 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 succeeded
	I0310 20:31:50.003436    7164 cache.go:93] acquiring lock: {Name:mkf96894dc732adcd1c856f98a56d65b2646f03e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:50.004329    7164 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 exists
	I0310 20:31:50.004677    7164 cache.go:82] cache image "minikube-local-cache-test:functional-20210115191024-3516" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210115191024-3516" took 1.746793s
	I0310 20:31:50.004677    7164 cache.go:66] save to tar file minikube-local-cache-test:functional-20210115191024-3516 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 succeeded
	I0310 20:31:50.012527    7164 cache.go:93] acquiring lock: {Name:mk1b277a131d0149dc1f34c6a5df09591c284c3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:50.013606    7164 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 exists
	I0310 20:31:50.013821    7164 cache.go:82] cache image "minikube-local-cache-test:functional-20210128021318-232" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210128021318-232" took 1.7562791s
	I0310 20:31:50.014065    7164 cache.go:66] save to tar file minikube-local-cache-test:functional-20210128021318-232 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 succeeded
	I0310 20:31:50.019124    7164 cache.go:93] acquiring lock: {Name:mkf6f90f079186654799fde8101b48612aa6f339 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:50.019850    7164 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 exists
	I0310 20:31:50.020240    7164 cache.go:82] cache image "minikube-local-cache-test:functional-20210212145109-352" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210212145109-352" took 1.750035s
	I0310 20:31:50.020532    7164 cache.go:66] save to tar file minikube-local-cache-test:functional-20210212145109-352 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 succeeded
	I0310 20:31:50.022137    7164 cache.go:93] acquiring lock: {Name:mk5aaf725ee95074b60d5acdb56999da11d0d967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:50.022931    7164 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 exists
	I0310 20:31:50.023493    7164 cache.go:82] cache image "minikube-local-cache-test:functional-20210213143925-7440" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210213143925-7440" took 1.7659507s
	I0310 20:31:50.023920    7164 cache.go:66] save to tar file minikube-local-cache-test:functional-20210213143925-7440 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 succeeded
	I0310 20:31:50.024672    7164 cache.go:93] acquiring lock: {Name:mkcc9db267470950a8bd1fd66660e4d7ce7fb11a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:50.025324    7164 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 exists
	I0310 20:31:50.026132    7164 cache.go:82] cache image "minikube-local-cache-test:functional-20210120175851-7432" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120175851-7432" took 1.7667599s
	I0310 20:31:50.026132    7164 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120175851-7432 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 succeeded
	I0310 20:31:50.028663    7164 cache.go:93] acquiring lock: {Name:mk3f9eb5a6922e3da2b5e642fe1460b5c7a33453 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:50.029310    7164 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 exists
	I0310 20:31:50.030243    7164 cache.go:82] cache image "minikube-local-cache-test:functional-20210107190945-8748" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210107190945-8748" took 1.769079s
	I0310 20:31:50.030243    7164 cache.go:66] save to tar file minikube-local-cache-test:functional-20210107190945-8748 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 succeeded
	I0310 20:31:50.043111    7164 cache.go:93] acquiring lock: {Name:mk6cdb668632330066d74bea74662e26e6c7633f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:50.043407    7164 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 exists
	I0310 20:31:50.044111    7164 cache.go:82] cache image "minikube-local-cache-test:functional-20210106215525-1984" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106215525-1984" took 1.7750652s
	I0310 20:31:50.044485    7164 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106215525-1984 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 succeeded
	I0310 20:31:50.047394    7164 cache.go:93] acquiring lock: {Name:mk67b81c694fa10d152b7bddece57d430edf9ebf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:50.047836    7164 cache.go:93] acquiring lock: {Name:mk634154e9c95d6e5b156154f097cbabdedf9f3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:50.047836    7164 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 exists
	I0310 20:31:50.048339    7164 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 exists
	I0310 20:31:50.048781    7164 cache.go:82] cache image "minikube-local-cache-test:functional-20210308233820-5396" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210308233820-5396" took 1.7871424s
	I0310 20:31:50.048781    7164 cache.go:66] save to tar file minikube-local-cache-test:functional-20210308233820-5396 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 succeeded
	I0310 20:31:50.048992    7164 cache.go:82] cache image "minikube-local-cache-test:functional-20210301195830-5700" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210301195830-5700" took 1.7911076s
	I0310 20:31:50.049229    7164 cache.go:66] save to tar file minikube-local-cache-test:functional-20210301195830-5700 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 succeeded
	I0310 20:31:50.058239    7164 cache.go:93] acquiring lock: {Name:mk5d79a216b121a22277fa476959e69d0268a006 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:31:50.058866    7164 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 exists
	I0310 20:31:50.058866    7164 cache.go:82] cache image "minikube-local-cache-test:functional-20210224014800-800" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210224014800-800" took 1.7967589s
	I0310 20:31:50.059573    7164 cache.go:66] save to tar file minikube-local-cache-test:functional-20210224014800-800 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 succeeded
	I0310 20:31:50.059935    7164 cache.go:73] Successfully saved all images to host disk.
	I0310 20:31:50.084372    7164 cli_runner.go:115] Run: docker container inspect docker-flags-20210310201637-6496 --format={{.State.Status}}
	I0310 20:31:50.702047    7164 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 20:31:50.722438    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:31:50.827548    7164 ssh_runner.go:189] Completed: sudo systemctl is-active --quiet service kubelet: (1.9987858s)
	I0310 20:31:50.844076    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:31:51.371660    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:31:51.491181    7164 kapi.go:59] client config for docker-flags-20210310201637-6496: &rest.Config{Host:"https://127.0.0.1:55103", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins\\.minikube\\profiles\\docker-flags-20210310201637-6496\\client.crt", KeyFile:"C:\\Users\\jenkins\\.minikube\\profiles\\docker-flags-20210310201637-6496\\client.key", CAFile:"C:\\Users\\jenkins\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2611020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)}
	I0310 20:31:51.506242    7164 kubeadm.go:479] skip waiting for components based on config.
	I0310 20:31:51.506242    7164 node_conditions.go:101] verifying NodePressure condition ...
	I0310 20:31:53.362222    7164 node_conditions.go:121] node storage ephemeral capacity is 65792556Ki
	I0310 20:31:53.362222    7164 node_conditions.go:122] node cpu capacity is 4
	I0310 20:31:53.362222    7164 node_conditions.go:104] duration metric: took 1.855984s to run NodePressure ...
	I0310 20:31:53.362222    7164 start.go:208] waiting for startup goroutines ...
	I0310 20:32:01.059954    7164 ssh_runner.go:189] Completed: docker images --format {{.Repository}}:{{.Tag}}: (10.357929s)
	I0310 20:32:01.059954    7164 docker.go:423] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.20.2
	k8s.gcr.io/kube-apiserver:v1.20.2
	k8s.gcr.io/kube-controller-manager:v1.20.2
	k8s.gcr.io/kube-scheduler:v1.20.2
	kubernetesui/dashboard:v2.1.0
	gcr.io/k8s-minikube/storage-provisioner:v4
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0310 20:32:01.059954    7164 docker.go:429] minikube-local-cache-test:functional-20210303214129-4588 wasn't preloaded
	I0310 20:32:01.059954    7164 cache_images.go:76] LoadImages start: [minikube-local-cache-test:functional-20210303214129-4588 minikube-local-cache-test:functional-20210310191609-6496 minikube-local-cache-test:functional-20210106002159-6856 minikube-local-cache-test:functional-20210106011107-6492 minikube-local-cache-test:functional-20210112045103-7160 minikube-local-cache-test:functional-20210105233232-2512 minikube-local-cache-test:functional-20210106215525-1984 minikube-local-cache-test:functional-20210308233820-5396 minikube-local-cache-test:functional-20210119220838-6552 minikube-local-cache-test:functional-20210120022529-1140 minikube-local-cache-test:functional-20210128021318-232 minikube-local-cache-test:functional-20210115191024-3516 minikube-local-cache-test:functional-20210123004019-5372 minikube-local-cache-test:functional-20210309234032-4944 minikube-local-cache-test:functional-20210220004129-7452 minikube-local-cache-test:functional-20210304184021-4052 minikube-local-cache-test:functional-20210114204234-6692 minikube-local-cache-test:functional-20210120175851-7432 minikube-local-cache-test:functional-20210213143925-7440 minikube-local-cache-test:functional-20210219220622-3920 minikube-local-cache-test:functional-20210224014800-800 minikube-local-cache-test:functional-20210225231842-5736 minikube-local-cache-test:functional-20210304002630-1156 minikube-local-cache-test:functional-20210310083645-5040 minikube-local-cache-test:functional-20210120214442-10992 minikube-local-cache-test:functional-20210120231122-7024 minikube-local-cache-test:functional-20210212145109-352 minikube-local-cache-test:functional-20210301195830-5700 minikube-local-cache-test:functional-20210306072141-12056 minikube-local-cache-test:functional-20210107002220-9088 minikube-local-cache-test:functional-20210115023213-8464 minikube-local-cache-test:functional-20210219145454-9520 minikube-local-cache-test:functional-20210107190945-8748 minikube-local-cache-test:functional-20210126212539-5172]
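The "wasn't preloaded" decision above is a set difference: minikube lists the images already in the daemon (`docker images --format {{.Repository}}:{{.Tag}}`) and compares them against the images it wants; any miss triggers `LoadImages` for the full wanted list. A sketch of that check (the helper `missingImages` is illustrative, not minikube's actual function):

```go
package main

import "fmt"

// missingImages returns the wanted images absent from the daemon's
// image list -- the check behind the "wasn't preloaded" log line.
func missingImages(present, wanted []string) []string {
	have := make(map[string]bool, len(present))
	for _, p := range present {
		have[p] = true
	}
	var missing []string
	for _, w := range wanted {
		if !have[w] {
			missing = append(missing, w)
		}
	}
	return missing
}

func main() {
	present := []string{"k8s.gcr.io/pause:3.2", "k8s.gcr.io/etcd:3.4.13-0"}
	wanted := []string{
		"k8s.gcr.io/pause:3.2",
		"minikube-local-cache-test:functional-20210303214129-4588",
	}
	fmt.Println(missingImages(present, wanted))
}
```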
	I0310 20:32:01.161881    7164 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210115023213-8464
	I0310 20:32:01.234809    7164 image.go:168] retrieving image: minikube-local-cache-test:functional-20210105233232-2512
	I0310 20:32:01.237862    7164 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210212145109-352
	I0310 20:32:01.260050    7164 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120231122-7024
	I0310 20:32:01.269246    7164 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210310083645-5040
	I0310 20:32:01.293810    7164 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210115191024-3516
	I0310 20:32:01.303363    7164 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210126212539-5172
	I0310 20:32:01.304823    7164 image.go:168] retrieving image: minikube-local-cache-test:functional-20210106011107-6492
	I0310 20:32:01.362954    7164 image.go:168] retrieving image: minikube-local-cache-test:functional-20210107190945-8748
	I0310 20:32:01.377276    7164 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210304184021-4052
	I0310 20:32:01.391518    7164 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210304002630-1156
	I0310 20:32:01.391518    7164 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210225231842-5736
	I0310 20:32:01.401483    7164 image.go:168] retrieving image: minikube-local-cache-test:functional-20210107002220-9088
	I0310 20:32:01.430116    7164 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210123004019-5372
	I0310 20:32:01.434115    7164 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210213143925-7440
	I0310 20:32:01.443353    7164 image.go:168] retrieving image: minikube-local-cache-test:functional-20210106002159-6856
	I0310 20:32:01.465539    7164 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210301195830-5700
	I0310 20:32:01.476393    7164 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210303214129-4588
	I0310 20:32:01.485061    7164 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210219220622-3920
	I0310 20:32:01.497094    7164 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210128021318-232
	I0310 20:32:01.510992    7164 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210219145454-9520
	I0310 20:32:01.523905    7164 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210310191609-6496
	I0310 20:32:01.531942    7164 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210106002159-6856: Error response from daemon: reference does not exist
	I0310 20:32:01.543854    7164 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210220004129-7452
	I0310 20:32:01.557675    7164 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210224014800-800
	I0310 20:32:01.591987    7164 image.go:168] retrieving image: minikube-local-cache-test:functional-20210112045103-7160
	I0310 20:32:01.591987    7164 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120175851-7432
	I0310 20:32:01.612530    7164 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210107002220-9088: Error response from daemon: reference does not exist
	I0310 20:32:01.619152    7164 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210112045103-7160: Error response from daemon: reference does not exist
	I0310 20:32:01.627115    7164 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210309234032-4944
	I0310 20:32:01.645134    7164 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210308233820-5396
	I0310 20:32:01.691091    7164 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210119220838-6552
	W0310 20:32:01.693355    7164 image.go:185] authn lookup for minikube-local-cache-test:functional-20210106002159-6856 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 20:32:01.711987    7164 image.go:168] retrieving image: minikube-local-cache-test:functional-20210106215525-1984
	I0310 20:32:01.714976    7164 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210114204234-6692
	I0310 20:32:01.722996    7164 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210106011107-6492: Error response from daemon: reference does not exist
	I0310 20:32:01.724990    7164 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210306072141-12056
	I0310 20:32:01.741979    7164 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210105233232-2512: Error response from daemon: reference does not exist
	I0310 20:32:01.743028    7164 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120214442-10992
	I0310 20:32:01.789636    7164 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120022529-1140
	I0310 20:32:01.796675    7164 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210107190945-8748: Error response from daemon: reference does not exist
	I0310 20:32:01.808787    7164 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210106215525-1984: Error response from daemon: reference does not exist
	W0310 20:32:01.826155    7164 image.go:185] authn lookup for minikube-local-cache-test:functional-20210107002220-9088 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0310 20:32:01.831949    7164 image.go:185] authn lookup for minikube-local-cache-test:functional-20210112045103-7160 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0310 20:32:01.937033    7164 image.go:185] authn lookup for minikube-local-cache-test:functional-20210105233232-2512 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0310 20:32:01.992564    7164 image.go:185] authn lookup for minikube-local-cache-test:functional-20210107190945-8748 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0310 20:32:01.993767    7164 image.go:185] authn lookup for minikube-local-cache-test:functional-20210106011107-6492 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0310 20:32:02.022570    7164 image.go:185] authn lookup for minikube-local-cache-test:functional-20210106215525-1984 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 20:32:02.046153    7164 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210106002159-6856 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210106002159-6856: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 20:32:02.046617    7164 cache_images.go:104] "minikube-local-cache-test:functional-20210106002159-6856" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210106002159-6856
	I0310 20:32:02.046853    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106002159-6856 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856
	I0310 20:32:02.047003    7164 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856
	I0310 20:32:02.047167    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856
	I0310 20:32:02.058500    7164 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856
	I0310 20:32:02.058500    7164 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210107002220-9088 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210107002220-9088: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 20:32:02.058500    7164 cache_images.go:104] "minikube-local-cache-test:functional-20210107002220-9088" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210107002220-9088
	I0310 20:32:02.058500    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107002220-9088 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088
	I0310 20:32:02.058500    7164 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088
	I0310 20:32:02.058500    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088
	I0310 20:32:02.075468    7164 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088
	I0310 20:32:02.076492    7164 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210112045103-7160 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210112045103-7160: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 20:32:02.076492    7164 cache_images.go:104] "minikube-local-cache-test:functional-20210112045103-7160" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210112045103-7160
	I0310 20:32:02.076492    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210112045103-7160 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160
	I0310 20:32:02.076492    7164 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160
	I0310 20:32:02.077147    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160
	I0310 20:32:02.089436    7164 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160
	I0310 20:32:02.115052    7164 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210105233232-2512 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210105233232-2512: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 20:32:02.115052    7164 cache_images.go:104] "minikube-local-cache-test:functional-20210105233232-2512" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210105233232-2512
	I0310 20:32:02.115052    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210105233232-2512 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512
	I0310 20:32:02.115052    7164 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512
	I0310 20:32:02.115301    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512
	I0310 20:32:02.126698    7164 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210107190945-8748 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210107190945-8748: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 20:32:02.126698    7164 cache_images.go:104] "minikube-local-cache-test:functional-20210107190945-8748" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210107190945-8748
	I0310 20:32:02.126956    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107190945-8748 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748
	I0310 20:32:02.126956    7164 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748
	I0310 20:32:02.127273    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748
	I0310 20:32:02.128185    7164 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512
	I0310 20:32:02.147071    7164 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748
	I0310 20:32:02.148070    7164 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210106011107-6492 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210106011107-6492: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 20:32:02.148070    7164 cache_images.go:104] "minikube-local-cache-test:functional-20210106011107-6492" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210106011107-6492
	I0310 20:32:02.148070    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106011107-6492 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492
	I0310 20:32:02.148070    7164 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492
	I0310 20:32:02.148070    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492
	I0310 20:32:02.158130    7164 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492
	I0310 20:32:02.180068    7164 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210106215525-1984 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210106215525-1984: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 20:32:02.180961    7164 cache_images.go:104] "minikube-local-cache-test:functional-20210106215525-1984" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210106215525-1984
	I0310 20:32:02.180961    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106215525-1984 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984
	I0310 20:32:02.180961    7164 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984
	I0310 20:32:02.181194    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984
	I0310 20:32:02.194006    7164 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984
	W0310 20:32:05.783648    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 20:32:05.783648    7164 cache_images.go:104] "minikube-local-cache-test:functional-20210213143925-7440" needs transfer: "minikube-local-cache-test:functional-20210213143925-7440" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 20:32:05.783648    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210213143925-7440 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440
	I0310 20:32:05.783648    7164 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440
	I0310 20:32:05.783648    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440
	I0310 20:32:05.796391    7164 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440
	I0310 20:32:05.803969    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:32:06.426749    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	W0310 20:32:07.604944    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 20:32:07.604944    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 20:32:07.604944    7164 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088: NewSession: ssh: rejected: connect failed (open failed)
	W0310 20:32:07.604944    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 20:32:07.604944    7164 cache_images.go:104] "minikube-local-cache-test:functional-20210128021318-232" needs transfer: "minikube-local-cache-test:functional-20210128021318-232" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 20:32:07.604944    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210128021318-232 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232
	I0310 20:32:07.604944    7164 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232
	W0310 20:32:07.604944    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 20:32:07.604944    7164 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984: NewSession: ssh: rejected: connect failed (open failed)
	W0310 20:32:07.604944    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 20:32:07.605881    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232
	I0310 20:32:07.605881    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984 (4096 bytes)
	W0310 20:32:07.605881    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 20:32:07.605881    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 20:32:07.605881    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 20:32:07.605881    7164 cache_images.go:104] "minikube-local-cache-test:functional-20210224014800-800" needs transfer: "minikube-local-cache-test:functional-20210224014800-800" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 20:32:07.605881    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210224014800-800 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800
	I0310 20:32:07.605881    7164 cache_images.go:104] "minikube-local-cache-test:functional-20210303214129-4588" needs transfer: "minikube-local-cache-test:functional-20210303214129-4588" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 20:32:07.605881    7164 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800
	I0310 20:32:07.604944    7164 cache_images.go:104] "minikube-local-cache-test:functional-20210301195830-5700" needs transfer: "minikube-local-cache-test:functional-20210301195830-5700" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 20:32:07.605881    7164 cache_images.go:104] "minikube-local-cache-test:functional-20210309234032-4944" needs transfer: "minikube-local-cache-test:functional-20210309234032-4944" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 20:32:07.606330    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210301195830-5700 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700
	I0310 20:32:07.606330    7164 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700
	I0310 20:32:07.605881    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088 (4096 bytes)
	I0310 20:32:07.606330    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800
	I0310 20:32:07.606819    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700
	I0310 20:32:07.606330    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210309234032-4944 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944
	I0310 20:32:07.606819    7164 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944
	I0310 20:32:07.606819    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944
	W0310 20:32:07.605881    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 20:32:07.607303    7164 cache_images.go:104] "minikube-local-cache-test:functional-20210120175851-7432" needs transfer: "minikube-local-cache-test:functional-20210120175851-7432" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	W0310 20:32:07.604944    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 20:32:07.605881    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 20:32:07.607303    7164 cache_images.go:104] "minikube-local-cache-test:functional-20210219220622-3920" needs transfer: "minikube-local-cache-test:functional-20210219220622-3920" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 20:32:07.607303    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219220622-3920 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920
	I0310 20:32:07.607303    7164 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920
	I0310 20:32:07.607303    7164 cache_images.go:104] "minikube-local-cache-test:functional-20210310191609-6496" needs transfer: "minikube-local-cache-test:functional-20210310191609-6496" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 20:32:07.607303    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310191609-6496 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496
	I0310 20:32:07.607303    7164 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496
	I0310 20:32:07.607721    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496
	I0310 20:32:07.607303    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920
	I0310 20:32:07.605881    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210303214129-4588 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588
	I0310 20:32:07.608795    7164 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588
	I0310 20:32:07.608795    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588
	W0310 20:32:07.605881    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 20:32:07.605881    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 20:32:07.605881    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 20:32:07.604944    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 20:32:07.605881    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 20:32:07.605881    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 20:32:07.605881    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 20:32:07.605881    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 20:32:07.605881    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 20:32:07.605881    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 20:32:07.605881    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 20:32:07.605881    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 20:32:07.607303    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120175851-7432 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432
	I0310 20:32:07.605881    7164 cache_images.go:104] "minikube-local-cache-test:functional-20210219145454-9520" needs transfer: "minikube-local-cache-test:functional-20210219145454-9520" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 20:32:07.609592    7164 cache_images.go:104] "minikube-local-cache-test:functional-20210306072141-12056" needs transfer: "minikube-local-cache-test:functional-20210306072141-12056" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 20:32:07.609755    7164 cache_images.go:104] "minikube-local-cache-test:functional-20210308233820-5396" needs transfer: "minikube-local-cache-test:functional-20210308233820-5396" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 20:32:07.609755    7164 cache_images.go:104] "minikube-local-cache-test:functional-20210120022529-1140" needs transfer: "minikube-local-cache-test:functional-20210120022529-1140" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 20:32:07.609755    7164 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856: NewSession: ssh: rejected: connect failed (open failed)
	I0310 20:32:07.609755    7164 cache_images.go:104] "minikube-local-cache-test:functional-20210220004129-7452" needs transfer: "minikube-local-cache-test:functional-20210220004129-7452" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 20:32:07.609755    7164 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512: NewSession: ssh: rejected: connect failed (open failed)
	I0310 20:32:07.609755    7164 cache_images.go:104] "minikube-local-cache-test:functional-20210120214442-10992" needs transfer: "minikube-local-cache-test:functional-20210120214442-10992" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 20:32:07.609755    7164 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160: NewSession: ssh: rejected: connect failed (open failed)
	I0310 20:32:07.609755    7164 cache_images.go:104] "minikube-local-cache-test:functional-20210114204234-6692" needs transfer: "minikube-local-cache-test:functional-20210114204234-6692" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 20:32:07.610902    7164 cache_images.go:104] "minikube-local-cache-test:functional-20210119220838-6552" needs transfer: "minikube-local-cache-test:functional-20210119220838-6552" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 20:32:07.610902    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210114204234-6692 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692
	I0310 20:32:07.610902    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210119220838-6552 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552
	I0310 20:32:07.610902    7164 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692
	I0310 20:32:07.611395    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160 (4096 bytes)
	I0310 20:32:07.611588    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692
	I0310 20:32:07.611588    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512 (4096 bytes)
	I0310 20:32:07.610902    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210308233820-5396 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396
	I0310 20:32:07.610902    7164 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748: NewSession: ssh: rejected: connect failed (open failed)
	I0310 20:32:07.610902    7164 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492: NewSession: ssh: rejected: connect failed (open failed)
	I0310 20:32:07.612355    7164 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396
	I0310 20:32:07.612513    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492 (4096 bytes)
	I0310 20:32:07.612513    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748 (4096 bytes)
	I0310 20:32:07.612513    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396
	I0310 20:32:07.610902    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210306072141-12056 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056
	I0310 20:32:07.612887    7164 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056
	I0310 20:32:07.613196    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056
	I0310 20:32:07.610902    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210220004129-7452 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452
	I0310 20:32:07.613569    7164 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452
	I0310 20:32:07.613569    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452
	I0310 20:32:07.610902    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120022529-1140 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140
	I0310 20:32:07.614000    7164 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140
	I0310 20:32:07.614363    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140
	I0310 20:32:07.610902    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219145454-9520 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520
	I0310 20:32:07.614363    7164 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520
	I0310 20:32:07.614711    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520
	I0310 20:32:07.611395    7164 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432
	I0310 20:32:07.611395    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856 (4096 bytes)
	I0310 20:32:07.610902    7164 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552
	I0310 20:32:07.611588    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120214442-10992 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992
	I0310 20:32:07.619154    7164 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992
	I0310 20:32:07.619498    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992
	I0310 20:32:07.619889    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432
	I0310 20:32:07.620276    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552
	I0310 20:32:07.751354    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:32:07.856154    7164 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232
	I0310 20:32:07.861434    7164 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700
	I0310 20:32:07.906427    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:32:07.916963    7164 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920
	I0310 20:32:07.968578    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:32:07.975600    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:32:07.992622    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:32:08.013127    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:32:08.014678    7164 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800
	I0310 20:32:08.021309    7164 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496
	I0310 20:32:08.026969    7164 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588
	I0310 20:32:08.028499    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:32:08.029609    7164 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944
	I0310 20:32:08.032316    7164 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140
	I0310 20:32:08.035246    7164 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552
	I0310 20:32:08.052711    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:32:08.101705    7164 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992
	I0310 20:32:08.114919    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:32:08.118123    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:32:08.124139    7164 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432
	I0310 20:32:08.129046    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:32:08.136334    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:32:08.137373    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:32:08.137373    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:32:08.141333    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:32:08.148379    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:32:08.150367    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:32:08.150367    7164 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452
	I0310 20:32:08.150367    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:32:08.154707    7164 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520
	I0310 20:32:08.154707    7164 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692
	I0310 20:32:08.156400    7164 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056
	I0310 20:32:08.156828    7164 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396
	I0310 20:32:08.175115    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:32:08.186722    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:32:08.190285    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:32:08.192257    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:32:08.198718    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:32:09.096763    7164 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496: (1.3450029s)
	I0310 20:32:09.096763    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:09.487810    7164 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496: (1.581386s)
	I0310 20:32:09.487977    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:09.608604    7164 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496: (1.5558966s)
	I0310 20:32:09.608945    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:09.690015    7164 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496: (1.6764817s)
	I0310 20:32:09.690015    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:09.719590    7164 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496: (1.5822204s)
	I0310 20:32:09.720679    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:09.810520    7164 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496: (1.6731514s)
	I0310 20:32:09.810520    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:09.812677    7164 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496: (1.6375654s)
	I0310 20:32:09.812952    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:09.822762    7164 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496: (1.6723992s)
	I0310 20:32:09.824260    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:09.828297    7164 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496: (1.7994559s)
	I0310 20:32:09.828974    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:09.861595    7164 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496: (1.6748766s)
	I0310 20:32:09.862281    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:09.875375    7164 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496: (1.9062156s)
	I0310 20:32:09.882627    7164 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496: (1.7645083s)
	I0310 20:32:09.883216    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:09.882627    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:09.925257    7164 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496: (1.7962148s)
	I0310 20:32:09.925888    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:09.961135    7164 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496: (1.8198057s)
	I0310 20:32:09.961440    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:09.981525    7164 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496: (1.9883418s)
	I0310 20:32:09.981905    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:09.982844    7164 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496: (1.8465137s)
	I0310 20:32:09.982973    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:09.998668    7164 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496: (1.8502933s)
	I0310 20:32:09.998668    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:10.029251    7164 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496: (1.878506s)
	I0310 20:32:10.029465    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:10.050055    7164 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496: (2.0744598s)
	I0310 20:32:10.050055    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:10.090745    7164 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496: (1.975831s)
	I0310 20:32:10.090745    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:10.102235    7164 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496: (1.9099822s)
	I0310 20:32:10.103311    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:10.125484    7164 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496: (1.9266091s)
	I0310 20:32:10.125619    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:10.221833    7164 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496: (2.0315526s)
	I0310 20:32:10.222486    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:11.061035    7164 cache_images.go:104] "minikube-local-cache-test:functional-20210310083645-5040" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 20:32:11.061035    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310083645-5040 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040
	I0310 20:32:11.061035    7164 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040
	I0310 20:32:11.061035    7164 cache_images.go:104] "minikube-local-cache-test:functional-20210115023213-8464" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 20:32:11.061372    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115023213-8464 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464
	I0310 20:32:11.061372    7164 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464
	I0310 20:32:11.061372    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040
	I0310 20:32:11.061690    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464
	I0310 20:32:11.067340    7164 cache_images.go:104] "minikube-local-cache-test:functional-20210304184021-4052" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 20:32:11.067597    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304184021-4052 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052
	I0310 20:32:11.067597    7164 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052
	I0310 20:32:11.067597    7164 cache_images.go:104] "minikube-local-cache-test:functional-20210225231842-5736" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 20:32:11.067915    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210225231842-5736 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736
	I0310 20:32:11.067915    7164 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736
	I0310 20:32:11.067915    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052
	I0310 20:32:11.067915    7164 cache_images.go:104] "minikube-local-cache-test:functional-20210120231122-7024" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 20:32:11.067915    7164 cache_images.go:104] "minikube-local-cache-test:functional-20210115191024-3516" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 20:32:11.067915    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115191024-3516 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516
	I0310 20:32:11.067915    7164 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516
	I0310 20:32:11.067915    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120231122-7024 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024
	I0310 20:32:11.067915    7164 cache_images.go:104] "minikube-local-cache-test:functional-20210212145109-352" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 20:32:11.067915    7164 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024
	I0310 20:32:11.067915    7164 cache_images.go:104] "minikube-local-cache-test:functional-20210304002630-1156" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 20:32:11.067915    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024
	I0310 20:32:11.067915    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304002630-1156 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156
	I0310 20:32:11.067915    7164 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156
	I0310 20:32:11.068427    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156
	I0310 20:32:11.067915    7164 cache_images.go:104] "minikube-local-cache-test:functional-20210126212539-5172" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 20:32:11.068427    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210126212539-5172 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172
	I0310 20:32:11.068427    7164 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172
	I0310 20:32:11.067915    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736
	I0310 20:32:11.069105    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172
	I0310 20:32:11.067915    7164 cache_images.go:104] "minikube-local-cache-test:functional-20210123004019-5372" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 20:32:11.067915    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210212145109-352 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352
	I0310 20:32:11.067915    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516
	I0310 20:32:11.069646    7164 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210123004019-5372 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372
	I0310 20:32:11.069646    7164 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372
	I0310 20:32:11.069646    7164 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352
	I0310 20:32:11.070094    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372
	I0310 20:32:11.070094    7164 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352
	I0310 20:32:11.126759    7164 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464
	I0310 20:32:11.130246    7164 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040
	I0310 20:32:11.188627    7164 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052
	I0310 20:32:11.249785    7164 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156
	I0310 20:32:11.250534    7164 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024
	I0310 20:32:11.254948    7164 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736
	I0310 20:32:11.264582    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:32:11.264582    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:32:11.288418    7164 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516
	I0310 20:32:11.289967    7164 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372
	I0310 20:32:11.292022    7164 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352
	I0310 20:32:11.293420    7164 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172
	I0310 20:32:11.294127    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:32:11.309601    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:32:11.316446    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:32:11.316446    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:32:11.360024    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:32:11.363865    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:32:11.369967    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:32:11.375090    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:32:12.257895    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:12.296846    7164 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496: (1.0315062s)
	I0310 20:32:12.297401    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:12.319260    7164 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496: (1.0539016s)
	I0310 20:32:12.319477    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:12.320322    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:12.348444    7164 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496: (1.032s)
	I0310 20:32:12.349152    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:12.358301    7164 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496: (1.0484414s)
	I0310 20:32:12.358666    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:12.387360    7164 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496: (1.0173952s)
	I0310 20:32:12.387913    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:12.403100    7164 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496: (1.0832749s)
	I0310 20:32:12.403320    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:12.413431    7164 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496: (1.0534096s)
	I0310 20:32:12.413844    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:12.417601    7164 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496: (1.042513s)
	I0310 20:32:12.417830    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	W0310 20:32:16.623769    7164 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 20:32:16.624180    7164 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 20:32:16.624458    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396 (4096 bytes)
	I0310 20:32:16.633439    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:32:17.267337    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	W0310 20:32:18.217849    7164 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	W0310 20:32:18.548849    7164 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	W0310 20:32:19.193356    7164 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 20:32:19.193751    7164 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 20:32:19.193751    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496 (4096 bytes)
	I0310 20:32:19.200445    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	W0310 20:32:19.448992    7164 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 20:32:19.448992    7164 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 20:32:19.448992    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056 (4096 bytes)
	I0310 20:32:19.457541    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	W0310 20:32:19.467163    7164 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 20:32:19.468183    7164 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 20:32:19.468183    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692 (4096 bytes)
	I0310 20:32:19.484003    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	W0310 20:32:19.537597    7164 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 20:32:19.537597    7164 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 20:32:19.537990    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464 (4096 bytes)
	I0310 20:32:19.548500    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	W0310 20:32:19.576767    7164 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 20:32:19.576767    7164 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 20:32:19.577420    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516 (4096 bytes)
	I0310 20:32:19.594826    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	W0310 20:32:19.821527    7164 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 20:32:19.821527    7164 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 20:32:19.822088    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156 (4096 bytes)
	I0310 20:32:19.841973    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:32:19.979792    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:20.246065    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:20.299924    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:20.299924    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:20.378686    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:20.527157    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	W0310 20:32:20.966531    7164 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 20:32:20.966531    7164 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024: NewSession: new client: new client: ssh: handshake failed: EOF
	W0310 20:32:20.966531    7164 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 20:32:20.966531    7164 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 20:32:20.966531    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024 (4096 bytes)
	I0310 20:32:20.966870    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372 (4096 bytes)
	I0310 20:32:20.977243    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:32:20.987420    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:32:21.611057    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:21.625102    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	W0310 20:32:22.228699    7164 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 20:32:22.228699    7164 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 20:32:22.228699    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352 (4096 bytes)
	I0310 20:32:22.238757    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	W0310 20:32:22.631089    7164 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 20:32:22.861267    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	W0310 20:32:23.685828    7164 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	W0310 20:32:23.810741    7164 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	W0310 20:32:25.332260    7164 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 20:33:08.530330    7164 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210115023213-8464: (1m7.3685899s)
	I0310 20:33:23.804429    7164 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088
	I0310 20:33:23.814146    7164 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088
	I0310 20:33:26.020510    7164 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210225231842-5736: (1m24.6291663s)
	I0310 20:33:26.021356    7164 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210115191024-3516: (1m24.7275232s)
	I0310 20:33:26.021356    7164 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210212145109-352: (1m24.7836673s)
	I0310 20:33:26.020510    7164 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210123004019-5372: (1m24.5905679s)
	I0310 20:33:26.698165    7164 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120231122-7024: (1m25.4382904s)
	I0310 20:33:26.698165    7164 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210304002630-1156: (1m25.3068222s)
	I0310 20:33:26.713832    7164 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210310083645-5040: (1m25.4447615s)
	I0310 20:33:28.140507    7164 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210126212539-5172: (1m26.8373215s)
	I0310 20:33:28.140507    7164 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210304184021-4052: (1m26.7634091s)
	I0310 20:33:28.140908    7164 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440: (1m22.3446847s)
	I0310 20:33:28.140908    7164 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700: (1m20.2796374s)
	I0310 20:33:28.140908    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440 (4096 bytes)
	I0310 20:33:28.140908    7164 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452: (1m19.9907041s)
	I0310 20:33:28.140908    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452 (4096 bytes)
	I0310 20:33:28.141456    7164 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944: (1m20.1120107s)
	I0310 20:33:28.141456    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944 (4096 bytes)
	I0310 20:33:28.141456    7164 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520: (1m19.9869119s)
	I0310 20:33:28.141994    7164 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800: (1m20.126941s)
	I0310 20:33:28.141994    7164 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588: (1m20.1151886s)
	I0310 20:33:28.141994    7164 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172: (1m16.8404222s)
	I0310 20:33:28.141994    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800 (4096 bytes)
	I0310 20:33:28.142486    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588 (4096 bytes)
	I0310 20:33:28.142486    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172 (4096 bytes)
	I0310 20:33:28.142486    7164 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992: (1m20.040944s)
	I0310 20:33:28.140908    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700 (4096 bytes)
	I0310 20:33:28.141994    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520 (4096 bytes)
	I0310 20:33:28.142486    7164 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552: (1m20.1072398s)
	I0310 20:33:28.142486    7164 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040: (1m17.0123963s)
	I0310 20:33:28.142486    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992 (4096 bytes)
	I0310 20:33:28.141456    7164 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140: (1m20.109303s)
	I0310 20:33:28.142486    7164 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232: (1m20.2861818s)
	I0310 20:33:28.142486    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040 (4096 bytes)
	I0310 20:33:28.142486    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552 (4096 bytes)
	I0310 20:33:28.142922    7164 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736: (1m16.8876939s)
	I0310 20:33:28.141456    7164 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920: (1m20.2246564s)
	I0310 20:33:28.142922    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140 (4096 bytes)
	I0310 20:33:28.142922    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232 (4096 bytes)
	I0310 20:33:28.141994    7164 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052: (1m16.9527654s)
	I0310 20:33:28.141994    7164 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432: (1m20.0180183s)
	I0310 20:33:28.151389    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432 (4096 bytes)
	I0310 20:33:28.142922    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920 (4096 bytes)
	I0310 20:33:28.142922    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736 (4096 bytes)
	I0310 20:33:28.142922    7164 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052 (4096 bytes)
	W0310 20:33:28.179536    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 20:33:28.179536    7164 retry.go:31] will retry after 276.165072ms: ssh: rejected: connect failed (open failed)
	W0310 20:33:28.180026    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 20:33:28.180026    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 20:33:28.180026    7164 retry.go:31] will retry after 360.127272ms: ssh: rejected: connect failed (open failed)
	I0310 20:33:28.180026    7164 retry.go:31] will retry after 291.140013ms: ssh: rejected: connect failed (open failed)
	W0310 20:33:28.180338    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 20:33:28.180338    7164 retry.go:31] will retry after 234.428547ms: ssh: rejected: connect failed (open failed)
	W0310 20:33:28.180338    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 20:33:28.180798    7164 retry.go:31] will retry after 231.159374ms: ssh: rejected: connect failed (open failed)
	W0310 20:33:28.180338    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 20:33:28.180338    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 20:33:28.181072    7164 retry.go:31] will retry after 141.409254ms: ssh: rejected: connect failed (open failed)
	W0310 20:33:28.180338    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 20:33:28.181316    7164 retry.go:31] will retry after 164.129813ms: ssh: rejected: connect failed (open failed)
	W0310 20:33:28.180338    7164 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 20:33:28.181316    7164 retry.go:31] will retry after 149.242379ms: ssh: rejected: connect failed (open failed)
	I0310 20:33:28.180798    7164 retry.go:31] will retry after 296.705768ms: ssh: rejected: connect failed (open failed)
	I0310 20:33:28.334511    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:33:28.340463    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:33:28.359901    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:33:28.423340    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:33:28.423340    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:33:28.469280    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:33:28.496050    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:33:28.520697    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:33:28.559931    7164 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20210310201637-6496
	I0310 20:33:29.257542    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:33:29.313487    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:33:29.314782    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:33:29.345908    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:33:29.405877    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:33:29.417399    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:33:29.431183    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:33:29.448450    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:33:29.504625    7164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins\.minikube\machines\docker-flags-20210310201637-6496\id_rsa Username:docker}
	I0310 20:35:20.475320    7164 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088: (1m56.6611804s)
	I0310 20:35:20.475453    7164 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 from cache
	I0310 20:35:20.475918    7164 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512
	I0310 20:35:20.486247    7164 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512
	I0310 20:35:31.914119    7164 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512: (11.427891s)
	I0310 20:35:31.915118    7164 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 from cache
	I0310 20:35:31.915118    7164 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984
	I0310 20:35:31.917220    7164 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984
	I0310 20:35:58.582875    7164 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984: (26.665698s)
	I0310 20:35:58.583305    7164 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 from cache
	I0310 20:35:58.583305    7164 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856
	I0310 20:35:58.592511    7164 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856
	I0310 20:36:31.458542    7164 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856: (32.8660831s)
	I0310 20:36:31.458542    7164 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 from cache
	I0310 20:36:31.458542    7164 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464
	I0310 20:36:31.465944    7164 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464
	I0310 20:37:39.757189    7164 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464: (1m8.2911504s)
	I0310 20:37:39.757337    7164 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 from cache
	I0310 20:37:39.757337    7164 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156
	I0310 20:37:39.767352    7164 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156
	I0310 20:38:10.778487    7164 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156: (31.0111813s)
	I0310 20:38:10.778925    7164 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 from cache
	I0310 20:38:10.778925    7164 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024
	I0310 20:38:10.788044    7164 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024
	I0310 20:38:30.987549    7164 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024: (20.1995361s)
	I0310 20:38:30.987864    7164 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 from cache
	I0310 20:38:30.987864    7164 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748
	I0310 20:38:31.008978    7164 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748
	I0310 20:38:53.021946    7164 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748: (22.0129983s)
	I0310 20:38:53.022334    7164 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 from cache
	I0310 20:38:53.022334    7164 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352
	I0310 20:38:53.031837    7164 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352
	I0310 20:39:07.822147    7164 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352: (14.7899755s)
	I0310 20:39:07.822147    7164 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 from cache
	I0310 20:39:07.822147    7164 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372
	I0310 20:39:07.830240    7164 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372
	I0310 20:39:24.092618    7164 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372: (16.262403s)
	I0310 20:39:24.092971    7164 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 from cache
	I0310 20:39:24.093238    7164 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520
	I0310 20:39:24.109641    7164 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520
	I0310 20:39:34.574202    7164 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520: (10.4640343s)
	I0310 20:39:34.574202    7164 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 from cache
	I0310 20:39:34.574202    7164 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588
	I0310 20:39:34.581946    7164 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588
	I0310 20:39:55.898729    7164 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588: (21.3166029s)
	I0310 20:39:55.898729    7164 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 from cache
	I0310 20:39:55.898966    7164 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700
	I0310 20:39:55.905917    7164 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700
	I0310 20:40:30.039752    7164 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700: (34.1338864s)
	I0310 20:40:30.040190    7164 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 from cache
	I0310 20:40:30.040190    7164 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452
	I0310 20:40:30.053070    7164 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452
	I0310 20:41:24.214911    7164 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452: (54.1610286s)
	I0310 20:41:24.214911    7164 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 from cache
	I0310 20:41:24.214911    7164 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692
	I0310 20:41:24.230173    7164 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692
	I0310 20:42:29.912205    7164 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692: (1m5.6816904s)
	I0310 20:42:29.916226    7164 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 from cache
	I0310 20:42:29.916226    7164 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944
	I0310 20:42:29.924045    7164 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944
	I0310 20:42:55.564122    7164 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944: (25.6398471s)
	I0310 20:42:55.564122    7164 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 from cache
	I0310 20:42:55.564122    7164 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172
	I0310 20:42:55.581284    7164 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172
	I0310 20:43:32.745668    7164 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172: (37.164152s)
	I0310 20:43:32.745668    7164 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 from cache
	I0310 20:43:32.745668    7164 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800
	I0310 20:43:32.756293    7164 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800
	I0310 20:44:04.083338    7164 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800: (31.326944s)
	I0310 20:44:04.083338    7164 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 from cache
	I0310 20:44:04.083338    7164 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440
	I0310 20:44:04.093859    7164 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440
	I0310 20:44:37.731465    7164 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440: (33.6370634s)
	I0310 20:44:37.731465    7164 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 from cache
	I0310 20:44:37.731465    7164 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040
	I0310 20:44:37.741884    7164 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040
	I0310 20:45:08.341903    7164 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040: (30.6000593s)
	I0310 20:45:08.341903    7164 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 from cache
	I0310 20:45:08.342156    7164 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736
	I0310 20:45:08.350044    7164 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736
	I0310 20:45:26.736715    7164 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736: (18.3866955s)
	I0310 20:45:26.737468    7164 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 from cache
	I0310 20:45:26.737468    7164 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052
	I0310 20:45:26.748904    7164 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052
	I0310 20:45:50.023275    7164 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052: (23.2729983s)
	I0310 20:45:50.023418    7164 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 from cache
	I0310 20:45:50.023418    7164 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552
	I0310 20:45:50.036621    7164 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552

** /stderr **

=== CONT  TestDockerFlags
docker_test.go:46: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p docker-flags-20210310201637-6496 --cache-images=false --memory=1800 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker" : exit status 1

=== CONT  TestDockerFlags
docker_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-20210310201637-6496 ssh "sudo systemctl show docker --property=Environment --no-pager"

=== CONT  TestDockerFlags
docker_test.go:49: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p docker-flags-20210310201637-6496 ssh "sudo systemctl show docker --property=Environment --no-pager": context deadline exceeded (2.0244ms)

=== CONT  TestDockerFlags
docker_test.go:51: failed to 'systemctl show docker' inside minikube. args "out/minikube-windows-amd64.exe -p docker-flags-20210310201637-6496 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": context deadline exceeded

=== CONT  TestDockerFlags
docker_test.go:56: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *""*.
docker_test.go:56: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *""*.
docker_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-20210310201637-6496 ssh "sudo systemctl show docker --property=ExecStart --no-pager"

=== CONT  TestDockerFlags
docker_test.go:60: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p docker-flags-20210310201637-6496 ssh "sudo systemctl show docker --property=ExecStart --no-pager": context deadline exceeded (0s)
docker_test.go:62: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-windows-amd64.exe -p docker-flags-20210310201637-6496 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": context deadline exceeded
docker_test.go:66: expected "out/minikube-windows-amd64.exe -p docker-flags-20210310201637-6496 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: ""
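For reference, the two failed assertions above check that `--docker-env` values appear in the docker unit's `Environment` property and that `--docker-opt` values appear in its `ExecStart` line. A minimal sketch of those checks, run against illustrative stand-in strings (not output captured from this run):

```shell
# Stand-in samples of what 'systemctl show docker' would print on success;
# these strings are illustrative, not captured from this test run.
env_out='Environment=FOO=BAR BAZ=BAT'
exec_out='ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd --debug --icc=true }'

# The test passes only if both substrings are present.
case "$env_out" in *'FOO=BAR'*) echo 'env check: ok';; esac
case "$exec_out" in *'--debug'*) echo 'opt check: ok';; esac
```

In this run both `ssh` invocations hit the context deadline, so both properties came back empty and the substring checks failed.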
panic.go:617: *** TestDockerFlags FAILED at 2021-03-10 20:46:40.1198287 +0000 GMT m=+6139.822035001
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:226: ======>  post-mortem[TestDockerFlags]: docker inspect <======

=== CONT  TestDockerFlags
helpers_test.go:227: (dbg) Run:  docker inspect docker-flags-20210310201637-6496

=== CONT  TestDockerFlags
helpers_test.go:231: (dbg) docker inspect docker-flags-20210310201637-6496:

-- stdout --
	[
	    {
	        "Id": "68b2d074521a54ac6c4c4eff0f25e5ba380c4a4ab432e9ee5e1cab6da4b7e584",
	        "Created": "2021-03-10T20:16:57.1445005Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 124563,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-03-10T20:17:03.7344621Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a776c544501ab7f8d55c0f9d8df39bc284df5e744ef1ab4fa59bbd753c98d5f6",
	        "ResolvConfPath": "/var/lib/docker/containers/68b2d074521a54ac6c4c4eff0f25e5ba380c4a4ab432e9ee5e1cab6da4b7e584/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/68b2d074521a54ac6c4c4eff0f25e5ba380c4a4ab432e9ee5e1cab6da4b7e584/hostname",
	        "HostsPath": "/var/lib/docker/containers/68b2d074521a54ac6c4c4eff0f25e5ba380c4a4ab432e9ee5e1cab6da4b7e584/hosts",
	        "LogPath": "/var/lib/docker/containers/68b2d074521a54ac6c4c4eff0f25e5ba380c4a4ab432e9ee5e1cab6da4b7e584/68b2d074521a54ac6c4c4eff0f25e5ba380c4a4ab432e9ee5e1cab6da4b7e584-json.log",
	        "Name": "/docker-flags-20210310201637-6496",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": [
	            "12dddc36df4d05e3bc89896e5fa4719683a5ea6c851ce12b2774893ee6faad27"
	        ],
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "docker-flags-20210310201637-6496:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 1887436800,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 1887436800,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4befb3c7c80711699a8f5e08fed121e11b254b927e90279364337287ce5bf8e9-init/diff:/var/lib/docker/overlay2/e5312d15a8a915e93a04215cc746ab0f3ee777399eec34d39c62e1159ded6475/diff:/var/lib/docker/overlay2/7a0a882052451678339ecb9d7440ec6d5af2189761df37cbde268f01c77df460/diff:/var/lib/docker/overlay2/746bde091f748e661ba6c95c88447941058b3ed556ae8093cbbb4e946b8cc8fc/diff:/var/lib/docker/overlay2/fc816087ea38e2594891ac87d2dada91401ed5005ffde006568049c9be7a9cb3/diff:/var/lib/docker/overlay2/c938fe5d14accfe7fb1cfd038f5f0c36e7c4e36df3f270be6928a759dfc0318c/diff:/var/lib/docker/overlay2/79f1d61f3af3e525b3f9a25c7bdaddec8275b7d0abab23c045e084119ae755dc/diff:/var/lib/docker/overlay2/90cf1530750ebebdcb04a136dc1eff0ae9c5f7d0f826d2942e3bc67cc0d42206/diff:/var/lib/docker/overlay2/c4edb0a62bb52174a4a17530aa58139aaca1b2f67bef36b9ac59af93c93e2c94/diff:/var/lib/docker/overlay2/684c18bb05910b09352e8708238b2fa6512de91e323e59bd78eacb12954baeb1/diff:/var/lib/docker/overlay2/2e3b944d36fea0e106f5f8885b981d040914c224a656b6480e4ccf00c624527c/diff:/var/lib/docker/overlay2/87489103e2d5dffa2fc32cae802418fe57adcbe87d86e2828e51be8620ef72d6/diff:/var/lib/docker/overlay2/1ceb557df677600f55a42e739f85c2aaa6ce2a1311fb93d5cc655d3e5c25ad62/diff:/var/lib/docker/overlay2/a49e38adf04466e822fa1b2fef7a5de41bb43b706f64d6d440f7e9f95f86634d/diff:/var/lib/docker/overlay2/9dc51530b3628075c33d53a96d8db831d206f65190ca62a4730d525c9d2433bc/diff:/var/lib/docker/overlay2/53f43d443cc953bcbb3b9fe305be45d40de2233dd6a1e94a6e0120c143df01b9/diff:/var/lib/docker/overlay2/5449e801ebacbe613538db4de658f2419f742a0327f7c0f5079ee525f64c2f45/diff:/var/lib/docker/overlay2/ea06a62429770eade2e9df8b04495201586fe6a867b2d3f827db06031f237171/diff:/var/lib/docker/overlay2/d6c5e6e0bb04112b08a45e5a42f364eaa6a1d4d0384b8833c6f268a32b51f9fd/diff:/var/lib/docker/overlay2/6ccb699e5aa7954f17536e35273361a64a06f22e3951b5adad1acd3645302d27/diff:/var/lib/docker/overlay2/004fc7885cd3ddf4f532a6047a393b87947996f738f36ee3e048130b82676264/diff:/var/lib/docker/overlay2/28ff0102771d8fb3c2957c482cc5dde1a6d041144f1be78d385fd4a305b89048/diff:/var/lib/docker/overlay2/7cc8b00a8717390d3cf217bc23ebf0a54386c4dda751f8efe67a113c5f724000/diff:/var/lib/docker/overlay2/4f20f5888dc26fc4b177fad4ffbb2c9e5094bbe36dffc15530b13ee98b5ee0de/diff:/var/lib/docker/overlay2/5070ba0e8a3df0c0cb4c34bb528c4e1aa4cf13d689f2bbbb0a4033d021c3be55/diff:/var/lib/docker/overlay2/85fb29f9d3603bf7f2eb144a096a9fa432e57ecf575b0c26cc8e06a79e791202/diff:/var/lib/docker/overlay2/27786674974d44dc21a18719c8c334da44a78f825923f5e86233f5acb0fb9a6b/diff:/var/lib/docker/overlay2/6161fd89f27732c46f547c0e38ff9f16ab0dded81cef66938df3935f21afad40/diff:/var/lib/docker/overlay2/222f85b8439a41d62b9939a4060303543930a89e81bc2fbb3f5211b3bcc73501/diff:/var/lib/docker/overlay2/0ab769dc143c949b9367c48721351c8571d87199847b23acedce0134a8cf4d0d/diff:/var/lib/docker/overlay2/a3d1b5f3c1ccf55415e08e306d69739f2d6a4346a4b20ff9273ac1417c146a72/diff:/var/lib/docker/overlay2/a6111408a4deb25c6259bdabf49b0b77a4ae38a1c9c76c9dd238ab00e4377edd/diff:/var/lib/docker/overlay2/beb22abaee6a1cc2ce6935aa31a8ea6aa14ba428d35b5c2423d1d9e4b5b1e88e/diff:/var/lib/docker/overlay2/0c228b1d0cd4cc1d3df10a7397af2a91846bfbad745d623822dc200ff7ddd4f7/diff:/var/lib/docker/overlay2/3c7f265116e8b14ac4c171583da8e68ad42c63e9faa735c10d5b01c2889c03d8/diff:/var/lib/docker/overlay2/85c9b77072d5575bcf4bc334d0e85cccec247e2ff21bc8a181ee7c1411481a92/diff:/var/lib/docker/overlay2/1b4797f63f1bf0acad8cd18715c737239cbce844810baf1378b65224c01957ff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4befb3c7c80711699a8f5e08fed121e11b254b927e90279364337287ce5bf8e9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4befb3c7c80711699a8f5e08fed121e11b254b927e90279364337287ce5bf8e9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4befb3c7c80711699a8f5e08fed121e11b254b927e90279364337287ce5bf8e9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "docker-flags-20210310201637-6496",
	                "Source": "/var/lib/docker/volumes/docker-flags-20210310201637-6496/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "docker-flags-20210310201637-6496",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "docker-flags-20210310201637-6496",
	                "name.minikube.sigs.k8s.io": "docker-flags-20210310201637-6496",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3907ac1e114a2e7ef024b56b40d0ab4a7a362fbfde728b8ea0c106d91a4b0e1e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55112"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55110"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55098"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55107"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55103"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3907ac1e114a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "ac75ca25e4bb13cab1f2f2fc939a3d69b2b65e321aa7c011e5e97e16e78a3870",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.7",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:07",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "08a9fabe5ecafdd8ee96dd0a14d1906037e82623ac7365c62552213da5f59cf7",
	                    "EndpointID": "ac75ca25e4bb13cab1f2f2fc939a3d69b2b65e321aa7c011e5e97e16e78a3870",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.7",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:07",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p docker-flags-20210310201637-6496 -n docker-flags-20210310201637-6496
helpers_test.go:235: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p docker-flags-20210310201637-6496 -n docker-flags-20210310201637-6496: (37.1311371s)
helpers_test.go:240: <<< TestDockerFlags FAILED: start of post-mortem logs <<<
helpers_test.go:241: ======>  post-mortem[TestDockerFlags]: minikube logs <======
helpers_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-20210310201637-6496 logs -n 25

=== CONT  TestDockerFlags
helpers_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-20210310201637-6496 logs -n 25: (1m45.2714034s)
helpers_test.go:248: TestDockerFlags logs: 
-- stdout --
	* ==> Docker <==
	* -- Logs begin at Wed 2021-03-10 20:17:13 UTC, end at Wed 2021-03-10 20:48:11 UTC. --
	* Mar 10 20:48:05 docker-flags-20210310201637-6496 dockerd[749]: time="2021-03-10T20:48:05.829358600Z" level=debug msg="Calling GET /v1.40/containers/json?filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D&limit=0"
	* Mar 10 20:48:05 docker-flags-20210310201637-6496 dockerd[749]: time="2021-03-10T20:48:05.869055600Z" level=debug msg="Calling GET /v1.40/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%2C%22status%22%3A%7B%22running%22%3Atrue%7D%7D&limit=0"
	* Mar 10 20:48:06 docker-flags-20210310201637-6496 dockerd[749]: time="2021-03-10T20:48:06.558489400Z" level=debug msg="Calling GET /v1.40/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D&limit=0"
	* Mar 10 20:48:06 docker-flags-20210310201637-6496 dockerd[749]: time="2021-03-10T20:48:06.676608400Z" level=debug msg="Calling GET /v1.40/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D&limit=0"
	* Mar 10 20:48:06 docker-flags-20210310201637-6496 dockerd[749]: time="2021-03-10T20:48:06.796344100Z" level=debug msg="Calling GET /v1.40/version"
	* Mar 10 20:48:06 docker-flags-20210310201637-6496 dockerd[749]: time="2021-03-10T20:48:06.839685600Z" level=debug msg="Calling HEAD /_ping"
	* Mar 10 20:48:06 docker-flags-20210310201637-6496 dockerd[749]: time="2021-03-10T20:48:06.862664500Z" level=debug msg="Calling GET /v1.41/containers/json?all=1&filters=%7B%22name%22%3A%7B%22k8s_storage-provisioner%22%3Atrue%7D%7D"
	* Mar 10 20:48:07 docker-flags-20210310201637-6496 dockerd[749]: time="2021-03-10T20:48:07.723040600Z" level=debug msg="Calling GET /v1.40/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D&limit=0"
	* Mar 10 20:48:08 docker-flags-20210310201637-6496 dockerd[749]: time="2021-03-10T20:48:08.041982900Z" level=debug msg="Calling GET /v1.40/containers/json?filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D&limit=0"
	* Mar 10 20:48:08 docker-flags-20210310201637-6496 dockerd[749]: time="2021-03-10T20:48:08.196059800Z" level=debug msg="Calling GET /v1.40/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D&limit=0"
	* Mar 10 20:48:08 docker-flags-20210310201637-6496 dockerd[749]: time="2021-03-10T20:48:08.321370500Z" level=debug msg="Calling POST /v1.40/containers/create?name=k8s_coredns_coredns-74ff55c5b-58x2b_kube-system_010fcac7-13e1-4b0e-97e9-4836952b22b6_1"
	* Mar 10 20:48:08 docker-flags-20210310201637-6496 dockerd[749]: time="2021-03-10T20:48:08.524178200Z" level=debug msg="Calling GET /v1.40/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%2C%22status%22%3A%7B%22running%22%3Atrue%7D%7D&limit=0"
	* Mar 10 20:48:08 docker-flags-20210310201637-6496 dockerd[749]: time="2021-03-10T20:48:08.814605500Z" level=debug msg="Calling GET /v1.40/containers/json?filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D&limit=0"
	* Mar 10 20:48:09 docker-flags-20210310201637-6496 dockerd[749]: time="2021-03-10T20:48:09.154985100Z" level=debug msg="Calling GET /v1.40/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%2C%22status%22%3A%7B%22running%22%3Atrue%7D%7D&limit=0"
	* Mar 10 20:48:09 docker-flags-20210310201637-6496 dockerd[749]: time="2021-03-10T20:48:09.606210300Z" level=debug msg="Calling GET /v1.40/images/json"
	* Mar 10 20:48:09 docker-flags-20210310201637-6496 dockerd[749]: time="2021-03-10T20:48:09.903519800Z" level=debug msg="Calling GET /v1.40/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D&limit=0"
	* Mar 10 20:48:10 docker-flags-20210310201637-6496 dockerd[749]: time="2021-03-10T20:48:10.187041300Z" level=debug msg="Calling GET /v1.40/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D&limit=0"
	* Mar 10 20:48:10 docker-flags-20210310201637-6496 dockerd[749]: time="2021-03-10T20:48:10.408353300Z" level=debug msg="Calling HEAD /_ping"
	* Mar 10 20:48:10 docker-flags-20210310201637-6496 dockerd[749]: time="2021-03-10T20:48:10.420768000Z" level=debug msg="Calling GET /v1.41/containers/json?all=1&filters=%7B%22name%22%3A%7B%22k8s_kube-controller-manager%22%3Atrue%7D%7D"
	* Mar 10 20:48:10 docker-flags-20210310201637-6496 dockerd[749]: time="2021-03-10T20:48:10.555091300Z" level=debug msg="Calling GET /v1.40/containers/json?filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D&limit=0"
	* Mar 10 20:48:10 docker-flags-20210310201637-6496 dockerd[749]: time="2021-03-10T20:48:10.912930200Z" level=debug msg="Calling GET /v1.40/version"
	* Mar 10 20:48:10 docker-flags-20210310201637-6496 dockerd[749]: time="2021-03-10T20:48:10.955681800Z" level=debug msg="Calling GET /v1.40/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%2C%22status%22%3A%7B%22running%22%3Atrue%7D%7D&limit=0"
	* Mar 10 20:48:11 docker-flags-20210310201637-6496 dockerd[749]: time="2021-03-10T20:48:11.241694200Z" level=debug msg="Calling GET /v1.40/containers/json?filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D&limit=0"
	* Mar 10 20:48:11 docker-flags-20210310201637-6496 dockerd[749]: time="2021-03-10T20:48:11.428327900Z" level=debug msg="Calling GET /v1.40/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D&limit=0"
	* Mar 10 20:48:11 docker-flags-20210310201637-6496 dockerd[749]: time="2021-03-10T20:48:11.567590600Z" level=debug msg="Calling GET /v1.40/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%2C%22status%22%3A%7B%22running%22%3Atrue%7D%7D&limit=0"
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	* 99c65df61d53a       bfe3a36ebd252       9 minutes ago       Running             coredns                   0                   906f411f4cfc5
	* 19359283e652a       bfe3a36ebd252       12 minutes ago      Exited              coredns                   0                   223b7b004d9f8
	* 2bb65e83dfaea       43154ddb57a83       12 minutes ago      Running             kube-proxy                0                   9754346bf5c8a
	* 14b2f2d186099       a27166429d98e       19 minutes ago      Running             kube-controller-manager   1                   4d8b92d22bc00
	* 5defdb54c61d7       a8c2fdb8bf76e       21 minutes ago      Running             kube-apiserver            0                   151b272fddec6
	* 1dd6585a2bd66       ed2c44fbdd78b       21 minutes ago      Running             kube-scheduler            0                   4ffb3dd6ff5e2
	* d15b5f6bee4df       0369cf4303ffd       21 minutes ago      Running             etcd                      0                   2a4d102454e9e
	* 
	* ==> coredns [19359283e652] <==
	* I0310 20:36:16.674498       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-03-10 20:35:55.5725873 +0000 UTC m=+0.903489501) (total time: 21.0954701s):
	* Trace[2019727887]: [21.0954701s] [21.0954701s] END
	* E0310 20:36:16.674588       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* I0310 20:36:16.675757       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-03-10 20:35:55.5807843 +0000 UTC m=+0.911686601) (total time: 21.0885644s):
	* Trace[939984059]: [21.0885644s] [21.0885644s] END
	* E0310 20:36:16.675781       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* I0310 20:36:16.675948       1 trace.go:116] Trace[1474941318]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-03-10 20:35:55.5671527 +0000 UTC m=+0.898055001) (total time: 21.1027633s):
	* Trace[1474941318]: [21.1027633s] [21.1027633s] END
	* E0310 20:36:16.676013       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	* CoreDNS-1.7.0
	* linux/amd64, go1.14.4, f59c03d
	* [INFO] SIGTERM: Shutting down servers then terminating
	* [INFO] plugin/health: Going into lameduck mode for 5s
	* 
	* ==> coredns [99c65df61d53] <==
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	* CoreDNS-1.7.0
	* linux/amd64, go1.14.4, f59c03d
	* 
	* ==> describe nodes <==
	* Name:               docker-flags-20210310201637-6496
	* Roles:              control-plane,master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=docker-flags-20210310201637-6496
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=4d52d607da107e3e4541e40b81376520ee87d4c2
	*                     minikube.k8s.io/name=docker-flags-20210310201637-6496
	*                     minikube.k8s.io/updated_at=2021_03_10T20_30_51_0700
	*                     minikube.k8s.io/version=v1.18.1
	*                     node-role.kubernetes.io/control-plane=
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Wed, 10 Mar 2021 20:28:48 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  docker-flags-20210310201637-6496
	*   AcquireTime:     <unset>
	*   RenewTime:       Wed, 10 Mar 2021 20:48:27 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Wed, 10 Mar 2021 20:47:45 +0000   Wed, 10 Mar 2021 20:43:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Wed, 10 Mar 2021 20:47:45 +0000   Wed, 10 Mar 2021 20:43:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Wed, 10 Mar 2021 20:47:45 +0000   Wed, 10 Mar 2021 20:43:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Wed, 10 Mar 2021 20:47:45 +0000   Wed, 10 Mar 2021 20:47:45 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  172.17.0.7
	*   Hostname:    docker-flags-20210310201637-6496
	* Capacity:
	*   cpu:                4
	*   ephemeral-storage:  65792556Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             20481980Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                4
	*   ephemeral-storage:  65792556Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             20481980Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 84fb46bd39d2483a97ab4430ee4a5e3a
	*   System UUID:                12e5de3c-e102-49b8-b453-7bb0a6f5e705
	*   Boot ID:                    1e43cb90-c73a-415b-9855-33dabbdc5a83
	*   Kernel Version:             4.19.121-linuxkit
	*   OS Image:                   Ubuntu 20.04.1 LTS
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://20.10.3
	*   Kubelet Version:            v1.20.2
	*   Kube-Proxy Version:         v1.20.2
	* PodCIDR:                      10.244.0.0/24
	* PodCIDRs:                     10.244.0.0/24
	* Non-terminated Pods:          (7 in total)
	*   Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	*   kube-system                 coredns-74ff55c5b-58x2b                                     100m (2%)     0 (0%)      70Mi (0%)        170Mi (0%)     16m
	*   kube-system                 coredns-74ff55c5b-l58kt                                     100m (2%)     0 (0%)      70Mi (0%)        170Mi (0%)     16m
	*   kube-system                 etcd-docker-flags-20210310201637-6496                       100m (2%)     0 (0%)      100Mi (0%)       0 (0%)         18m
	*   kube-system                 kube-apiserver-docker-flags-20210310201637-6496             250m (6%)     0 (0%)      0 (0%)           0 (0%)         18m
	*   kube-system                 kube-controller-manager-docker-flags-20210310201637-6496    200m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	*   kube-system                 kube-proxy-2chq5                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	*   kube-system                 kube-scheduler-docker-flags-20210310201637-6496             100m (2%)     0 (0%)      0 (0%)           0 (0%)         18m
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests    Limits
	*   --------           --------    ------
	*   cpu                850m (21%)  0 (0%)
	*   memory             240Mi (1%)  340Mi (1%)
	*   ephemeral-storage  100Mi (0%)  0 (0%)
	*   hugepages-1Gi      0 (0%)      0 (0%)
	*   hugepages-2Mi      0 (0%)      0 (0%)
	* Events:
	*   Type    Reason                   Age                  From        Message
	*   ----    ------                   ----                 ----        -------
	*   Normal  Starting                 16m                  kubelet     Starting kubelet.
	*   Normal  NodeAllocatableEnforced  15m                  kubelet     Updated Node Allocatable limit across pods
	*   Normal  Starting                 12m                  kube-proxy  Starting kube-proxy.
	*   Normal  NodeHasNoDiskPressure    5m22s (x2 over 16m)  kubelet     Node docker-flags-20210310201637-6496 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     5m22s (x2 over 16m)  kubelet     Node docker-flags-20210310201637-6496 status is now: NodeHasSufficientPID
	*   Normal  NodeHasSufficientMemory  5m22s (x2 over 16m)  kubelet     Node docker-flags-20210310201637-6496 status is now: NodeHasSufficientMemory
	*   Normal  NodeNotReady             86s (x2 over 16m)    kubelet     Node docker-flags-20210310201637-6496 status is now: NodeNotReady
	*   Normal  NodeReady                49s (x3 over 15m)    kubelet     Node docker-flags-20210310201637-6496 status is now: NodeReady
	* 
	* ==> dmesg <==
	* [  +0.000006]  __hrtimer_run_queues+0x117/0x1c4
	* [  +0.000004]  ? ktime_get_update_offsets_now+0x36/0x95
	* [  +0.000002]  hrtimer_interrupt+0x92/0x165
	* [  +0.000004]  hv_stimer0_isr+0x20/0x2d
	* [  +0.000008]  hv_stimer0_vector_handler+0x3b/0x57
	* [  +0.000010]  hv_stimer0_callback_vector+0xf/0x20
	* [  +0.000001]  </IRQ>
	* [  +0.000002] RIP: 0010:native_safe_halt+0x7/0x8
	* [  +0.000002] Code: 60 02 df f0 83 44 24 fc 00 48 8b 00 a8 08 74 0b 65 81 25 dd ce 6f 71 ff ff ff 7f c3 e8 ce e6 72 ff f4 c3 e8 c7 e6 72 ff fb f4 <c3> 0f 1f 44 00 00 53 e8 69 0e 82 ff 65 8b 35 83 64 6f 71 31 ff e8
	* [  +0.000001] RSP: 0018:ffffffff8f203eb0 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff12
	* [  +0.000002] RAX: ffffffff8e918b30 RBX: 0000000000000000 RCX: ffffffff8f253150
	* [  +0.000001] RDX: 000000000012167e RSI: 0000000000000000 RDI: 0000000000000001
	* [  +0.000001] RBP: 0000000000000000 R08: 00000066a1710248 R09: 0000006be2541d3e
	* [  +0.000001] R10: ffff9130ad802288 R11: 0000000000000000 R12: 0000000000000000
	* [  +0.000001] R13: ffffffff8f215780 R14: 00000000f6d76244 R15: 0000000000000000
	* [  +0.000002]  ? __sched_text_end+0x1/0x1
	* [  +0.000011]  default_idle+0x1b/0x2c
	* [  +0.000001]  do_idle+0xe5/0x216
	* [  +0.000003]  cpu_startup_entry+0x6f/0x71
	* [  +0.000003]  start_kernel+0x4f6/0x514
	* [  +0.000006]  secondary_startup_64+0xa4/0xb0
	* [  +0.000006] ---[ end trace 8aa9ce4b885e8e86 ]---
	* [ +25.977799] hrtimer: interrupt took 3356400 ns
	* [Mar10 19:08] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	* [Mar10 19:49] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	* 
	* ==> etcd [d15b5f6bee4d] <==
	* 2021-03-10 20:47:08.854153 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (887.839ms) to execute
	* 2021-03-10 20:47:09.503038 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-74ff55c5b-l58kt\" " with result "range_response_count:1 size:4643" took too long (488.9835ms) to execute
	* 2021-03-10 20:47:09.602608 W | etcdserver: read-only range request "key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" count_only:true " with result "range_response_count:0 size:5" took too long (117.3ms) to execute
	* 2021-03-10 20:47:10.356390 W | etcdserver: read-only range request "key:\"/registry/masterleases/172.17.0.7\" " with result "range_response_count:0 size:5" took too long (103.9871ms) to execute
	* 2021-03-10 20:47:13.502376 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 20:47:20.606352 I | mvcc: store.index: compact 631
	* 2021-03-10 20:47:20.621508 I | mvcc: finished scheduled compaction at 631 (took 14.594ms)
	* 2021-03-10 20:47:20.965234 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 20:47:25.795335 W | etcdserver: request "header:<ID:3266086224249882857 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.17.0.7\" mod_revision:812 > success:<request_put:<key:\"/registry/masterleases/172.17.0.7\" value_size:65 lease:3266086224249882855 >> failure:<request_range:<key:\"/registry/masterleases/172.17.0.7\" > >>" with result "size:16" took too long (542.8872ms) to execute
	* 2021-03-10 20:47:25.898412 W | etcdserver: read-only range request "key:\"/registry/deployments/\" range_end:\"/registry/deployments0\" count_only:true " with result "range_response_count:0 size:7" took too long (168.3134ms) to execute
	* 2021-03-10 20:47:26.651776 W | etcdserver: read-only range request "key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" " with result "range_response_count:1 size:129" took too long (119.3804ms) to execute
	* 2021-03-10 20:47:28.020853 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 20:47:32.135416 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/kube-apiserver-docker-flags-20210310201637-6496.166b1570321b7ef0\" " with result "range_response_count:1 size:965" took too long (258.3502ms) to execute
	* 2021-03-10 20:47:32.152316 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (264.8365ms) to execute
	* 2021-03-10 20:47:32.354956 W | etcdserver: request "header:<ID:3266086224249882894 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-docker-flags-20210310201637-6496.166b1570321b7ef0\" mod_revision:823 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-docker-flags-20210310201637-6496.166b1570321b7ef0\" value_size:839 lease:3266086224249882787 >> failure:<request_range:<key:\"/registry/events/kube-system/kube-apiserver-docker-flags-20210310201637-6496.166b1570321b7ef0\" > >>" with result "size:16" took too long (159.287ms) to execute
	* 2021-03-10 20:47:41.034539 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 20:47:42.082426 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (351.2862ms) to execute
	* 2021-03-10 20:47:42.083430 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (647.4863ms) to execute
	* 2021-03-10 20:47:46.459972 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 20:47:58.070581 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 20:48:06.248287 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 20:48:16.786051 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 20:48:26.499112 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 20:48:36.564926 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 20:48:40.866758 W | etcdserver: read-only range request "key:\"/registry/resourcequotas/\" range_end:\"/registry/resourcequotas0\" count_only:true " with result "range_response_count:0 size:5" took too long (166.2513ms) to execute
	* 
	* ==> kernel <==
	*  20:48:44 up  1:49,  0 users,  load average: 182.94, 172.58, 136.43
	* Linux docker-flags-20210310201637-6496 4.19.121-linuxkit #1 SMP Tue Dec 1 17:50:32 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [5defdb54c61d] <==
	* Trace[1063418607]: ---"Transaction prepared" 154ms (20:47:00.200)
	* Trace[1063418607]: ---"Transaction committed" 656ms (20:47:00.857)
	* Trace[1063418607]: [948.6871ms] [948.6871ms] END
	* I0310 20:47:32.397361       1 trace.go:205] Trace[51877013]: "GuaranteedUpdate etcd3" type:*core.Event (10-Mar-2021 20:47:31.867) (total time: 530ms):
	* Trace[51877013]: ---"initial value restored" 294ms (20:47:00.161)
	* Trace[51877013]: ---"Transaction committed" 219ms (20:47:00.397)
	* Trace[51877013]: [530.1175ms] [530.1175ms] END
	* I0310 20:47:32.403339       1 trace.go:205] Trace[807499374]: "Patch" url:/api/v1/namespaces/kube-system/events/kube-apiserver-docker-flags-20210310201637-6496.166b1570321b7ef0,user-agent:kubelet/v1.20.2 (linux/amd64) kubernetes/faecb19,client:172.17.0.7 (10-Mar-2021 20:47:31.866) (total time: 531ms):
	* Trace[807499374]: ---"About to apply patch" 295ms (20:47:00.161)
	* Trace[807499374]: ---"Object stored in database" 219ms (20:47:00.397)
	* Trace[807499374]: [531.0314ms] [531.0314ms] END
	* I0310 20:47:32.415036       1 trace.go:205] Trace[1552082745]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.20.2 (linux/amd64) kubernetes/faecb19,client:127.0.0.1 (10-Mar-2021 20:47:31.868) (total time: 523ms):
	* Trace[1552082745]: ---"About to write a response" 523ms (20:47:00.391)
	* Trace[1552082745]: [523.3536ms] [523.3536ms] END
	* I0310 20:47:42.124427       1 trace.go:205] Trace[1267897388]: "List etcd3" key:/jobs,resourceVersion:,resourceVersionMatch:,limit:500,continue: (10-Mar-2021 20:47:41.415) (total time: 708ms):
	* Trace[1267897388]: [708.8607ms] [708.8607ms] END
	* I0310 20:47:42.124627       1 trace.go:205] Trace[2067175325]: "List" url:/apis/batch/v1/jobs,user-agent:kube-controller-manager/v1.20.2 (linux/amd64) kubernetes/faecb19/system:serviceaccount:kube-system:cronjob-controller,client:172.17.0.7 (10-Mar-2021 20:47:41.415) (total time: 709ms):
	* Trace[2067175325]: ---"Listing from storage done" 708ms (20:47:00.124)
	* Trace[2067175325]: [709.1763ms] [709.1763ms] END
	* I0310 20:47:50.732182       1 client.go:360] parsed scheme: "passthrough"
	* I0310 20:47:50.732348       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 20:47:50.732372       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* I0310 20:48:36.006237       1 client.go:360] parsed scheme: "passthrough"
	* I0310 20:48:36.006390       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 20:48:36.006590       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* 
	* ==> kube-controller-manager [14b2f2d18609] <==
	* I0310 20:31:43.521115       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I0310 20:31:43.521150       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* I0310 20:31:43.580211       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I0310 20:31:47.085731       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	* E0310 20:31:48.399135       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	* E0310 20:31:48.426937       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	* I0310 20:31:49.271840       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-2chq5"
	* I0310 20:31:51.012949       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-l58kt"
	* I0310 20:31:53.024045       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-58x2b"
	* I0310 20:32:35.081453       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	* I0310 20:33:25.400991       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	* I0310 20:42:18.383212       1 event.go:291] "Event occurred" object="docker-flags-20210310201637-6496" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node docker-flags-20210310201637-6496 status is now: NodeNotReady"
	* I0310 20:42:21.692765       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-docker-flags-20210310201637-6496" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	* I0310 20:42:23.168519       1 event.go:291] "Event occurred" object="kube-system/kube-scheduler-docker-flags-20210310201637-6496" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	* I0310 20:42:24.624711       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager-docker-flags-20210310201637-6496" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	* I0310 20:42:25.588159       1 event.go:291] "Event occurred" object="kube-system/kube-proxy-2chq5" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	* I0310 20:42:26.210516       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b-l58kt" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	* I0310 20:42:27.827611       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b-58x2b" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	* I0310 20:42:31.327379       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	* I0310 20:42:31.336473       1 event.go:291] "Event occurred" object="kube-system/etcd-docker-flags-20210310201637-6496" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	* I0310 20:43:19.959108       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b-58x2b" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-74ff55c5b-58x2b"
	* I0310 20:43:19.959186       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b-l58kt" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-74ff55c5b-l58kt"
	* I0310 20:43:20.024901       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	* I0310 20:47:15.650877       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	* I0310 20:47:50.778360       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	* 
	* ==> kube-proxy [2bb65e83dfae] <==
	* I0310 20:36:06.332678       1 server_others.go:142] kube-proxy node IP is an IPv4 address (172.17.0.7), assume IPv4 operation
	* W0310 20:36:07.695906       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	* I0310 20:36:07.844898       1 server_others.go:185] Using iptables Proxier.
	* I0310 20:36:07.902161       1 server.go:650] Version: v1.20.2
	* I0310 20:36:07.941097       1 conntrack.go:52] Setting nf_conntrack_max to 131072
	* I0310 20:36:07.941513       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I0310 20:36:07.941625       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I0310 20:36:07.966434       1 config.go:315] Starting service config controller
	* I0310 20:36:07.966487       1 shared_informer.go:240] Waiting for caches to sync for service config
	* I0310 20:36:07.966547       1 config.go:224] Starting endpoint slice config controller
	* I0310 20:36:07.966558       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	* I0310 20:36:08.367611       1 shared_informer.go:247] Caches are synced for service config 
	* I0310 20:36:08.777722       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	* I0310 20:36:16.260599       1 trace.go:205] Trace[2120078758]: "iptables restore" (10-Mar-2021 20:36:13.125) (total time: 3134ms):
	* Trace[2120078758]: [3.1349904s] [3.1349904s] END
	* I0310 20:41:43.119658       1 trace.go:205] Trace[507993491]: "iptables Monitor CANARY check" (10-Mar-2021 20:41:40.663) (total time: 2351ms):
	* Trace[507993491]: [2.3519754s] [2.3519754s] END
	* I0310 20:45:42.843954       1 trace.go:205] Trace[752790286]: "iptables Monitor CANARY check" (10-Mar-2021 20:45:40.807) (total time: 2036ms):
	* Trace[752790286]: [2.0363819s] [2.0363819s] END
	* I0310 20:46:51.285042       1 trace.go:205] Trace[514022387]: "iptables save" (10-Mar-2021 20:46:45.545) (total time: 5182ms):
	* Trace[514022387]: [5.1824213s] [5.1824213s] END
	* I0310 20:46:53.617960       1 trace.go:205] Trace[1145862558]: "iptables restore" (10-Mar-2021 20:46:51.386) (total time: 2231ms):
	* Trace[1145862558]: [2.2310929s] [2.2310929s] END
	* I0310 20:47:54.866115       1 trace.go:205] Trace[1468481860]: "iptables restore" (10-Mar-2021 20:47:49.873) (total time: 4992ms):
	* Trace[1468481860]: [4.9929414s] [4.9929414s] END
	* 
	* ==> kube-scheduler [1dd6585a2bd6] <==
	* E0310 20:28:51.357499       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E0310 20:28:51.441651       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E0310 20:28:51.615687       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E0310 20:28:52.333549       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E0310 20:28:52.417522       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E0310 20:28:53.010998       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E0310 20:28:53.121413       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E0310 20:28:53.281035       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E0310 20:28:53.962806       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E0310 20:28:54.037078       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E0310 20:28:54.230689       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E0310 20:29:00.756913       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E0310 20:29:00.803627       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E0310 20:29:01.260395       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E0310 20:29:01.462877       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E0310 20:29:02.019831       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E0310 20:29:02.699815       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E0310 20:29:03.015097       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E0310 20:29:03.621072       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E0310 20:29:03.797602       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E0310 20:29:04.713955       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E0310 20:29:05.093622       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E0310 20:29:05.336783       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E0310 20:29:15.991692       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* I0310 20:29:57.664352       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-03-10 20:17:13 UTC, end at Wed 2021-03-10 20:48:57 UTC. --
	* Mar 10 20:42:44 docker-flags-20210310201637-6496 kubelet[3723]: Trace[79452245]: ---"Objects listed" 12732ms (20:42:00.996)
	* Mar 10 20:42:44 docker-flags-20210310201637-6496 kubelet[3723]: Trace[79452245]: [12.7324188s] [12.7324188s] END
	* Mar 10 20:42:45 docker-flags-20210310201637-6496 kubelet[3723]: I0310 20:42:45.531534    3723 trace.go:205] Trace[993518993]: "Reflector ListAndWatch" name:k8s.io/kubernetes/pkg/kubelet/kubelet.go:438 (10-Mar-2021 20:42:34.355) (total time: 11175ms):
	* Mar 10 20:42:45 docker-flags-20210310201637-6496 kubelet[3723]: Trace[993518993]: ---"Objects listed" 11175ms (20:42:00.531)
	* Mar 10 20:42:45 docker-flags-20210310201637-6496 kubelet[3723]: Trace[993518993]: [11.1752977s] [11.1752977s] END
	* Mar 10 20:42:47 docker-flags-20210310201637-6496 kubelet[3723]: I0310 20:42:47.164517    3723 trace.go:205] Trace[1534937769]: "Reflector ListAndWatch" name:k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46 (10-Mar-2021 20:42:37.131) (total time: 10031ms):
	* Mar 10 20:42:47 docker-flags-20210310201637-6496 kubelet[3723]: Trace[1534937769]: ---"Objects listed" 10031ms (20:42:00.164)
	* Mar 10 20:42:47 docker-flags-20210310201637-6496 kubelet[3723]: Trace[1534937769]: [10.0319973s] [10.0319973s] END
	* Mar 10 20:43:17 docker-flags-20210310201637-6496 kubelet[3723]: W0310 20:43:17.721145    3723 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	* Mar 10 20:43:19 docker-flags-20210310201637-6496 kubelet[3723]: W0310 20:43:19.303942    3723 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
	* Mar 10 20:43:19 docker-flags-20210310201637-6496 kubelet[3723]: I0310 20:43:19.545977    3723 trace.go:205] Trace[2100844500]: "iptables Monitor CANARY check" (10-Mar-2021 20:43:17.144) (total time: 2401ms):
	* Mar 10 20:43:19 docker-flags-20210310201637-6496 kubelet[3723]: Trace[2100844500]: [2.4010313s] [2.4010313s] END
	* Mar 10 20:43:43 docker-flags-20210310201637-6496 kubelet[3723]: E0310 20:43:43.920372    3723 kubelet_node_status.go:447] Error updating node status, will retry: error getting node "docker-flags-20210310201637-6496": Get "https://control-plane.minikube.internal:8443/api/v1/nodes/docker-flags-20210310201637-6496?resourceVersion=0&timeout=10s": context deadline exceeded
	* Mar 10 20:45:19 docker-flags-20210310201637-6496 kubelet[3723]: I0310 20:45:19.178389    3723 trace.go:205] Trace[575096367]: "iptables Monitor CANARY check" (10-Mar-2021 20:45:17.048) (total time: 2129ms):
	* Mar 10 20:45:19 docker-flags-20210310201637-6496 kubelet[3723]: Trace[575096367]: [2.1298881s] [2.1298881s] END
	* Mar 10 20:46:24 docker-flags-20210310201637-6496 kubelet[3723]: I0310 20:46:24.724785    3723 trace.go:205] Trace[99373887]: "iptables Monitor CANARY check" (10-Mar-2021 20:46:17.568) (total time: 7156ms):
	* Mar 10 20:46:24 docker-flags-20210310201637-6496 kubelet[3723]: Trace[99373887]: [7.1563276s] [7.1563276s] END
	* Mar 10 20:47:08 docker-flags-20210310201637-6496 kubelet[3723]: I0310 20:47:08.488730    3723 setters.go:577] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2021-03-10 20:47:08.4886157 +0000 UTC m=+1007.658599301 LastTransitionTime:2021-03-10 20:47:08.4886157 +0000 UTC m=+1007.658599301 Reason:KubeletNotReady Message:container runtime is down}
	* Mar 10 20:47:23 docker-flags-20210310201637-6496 kubelet[3723]: I0310 20:47:23.756470    3723 trace.go:205] Trace[98204434]: "iptables Monitor CANARY check" (10-Mar-2021 20:47:20.891) (total time: 2864ms):
	* Mar 10 20:47:23 docker-flags-20210310201637-6496 kubelet[3723]: Trace[98204434]: [2.8649434s] [2.8649434s] END
	* Mar 10 20:48:05 docker-flags-20210310201637-6496 kubelet[3723]: W0310 20:48:04.969218    3723 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-74ff55c5b-58x2b through plugin: invalid network status for
	* Mar 10 20:48:12 docker-flags-20210310201637-6496 kubelet[3723]: W0310 20:48:12.803000    3723 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	* Mar 10 20:48:12 docker-flags-20210310201637-6496 kubelet[3723]: W0310 20:48:12.863010    3723 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
	* Mar 10 20:48:18 docker-flags-20210310201637-6496 kubelet[3723]: W0310 20:48:18.970338    3723 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-74ff55c5b-58x2b through plugin: invalid network status for
	* Mar 10 20:48:29 docker-flags-20210310201637-6496 kubelet[3723]: I0310 20:48:29.006597    3723 scope.go:95] [topologymanager] RemoveContainer - Container ID: 19359283e652a4b41ba05126216d5e625a6ad0e222ce4b6dbc2bd62fde15ebec
	* 
	* ==> Audit <==
	* |---------|------------------------------------------|------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| Command |                   Args                   |                 Profile                  |          User           | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------|------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| start   | -p                                       | multinode-20210310194323-6496-m03        | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:59:27 GMT | Wed, 10 Mar 2021 20:02:27 GMT |
	|         | multinode-20210310194323-6496-m03        |                                          |                         |         |                               |                               |
	|         | --driver=docker                          |                                          |                         |         |                               |                               |
	| delete  | -p                                       | multinode-20210310194323-6496-m03        | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:02:30 GMT | Wed, 10 Mar 2021 20:02:41 GMT |
	|         | multinode-20210310194323-6496-m03        |                                          |                         |         |                               |                               |
	| -p      | multinode-20210310194323-6496            | multinode-20210310194323-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:02:45 GMT | Wed, 10 Mar 2021 20:02:59 GMT |
	|         | logs -n 25                               |                                          |                         |         |                               |                               |
	| delete  | -p                                       | multinode-20210310194323-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:03:05 GMT | Wed, 10 Mar 2021 20:03:22 GMT |
	|         | multinode-20210310194323-6496            |                                          |                         |         |                               |                               |
	| start   | -p                                       | test-preload-20210310200323-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:03:23 GMT | Wed, 10 Mar 2021 20:06:49 GMT |
	|         | test-preload-20210310200323-6496         |                                          |                         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr          |                                          |                         |         |                               |                               |
	|         | --wait=true --preload=false              |                                          |                         |         |                               |                               |
	|         | --driver=docker                          |                                          |                         |         |                               |                               |
	|         | --kubernetes-version=v1.17.0             |                                          |                         |         |                               |                               |
	| ssh     | -p                                       | test-preload-20210310200323-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:06:50 GMT | Wed, 10 Mar 2021 20:06:54 GMT |
	|         | test-preload-20210310200323-6496         |                                          |                         |         |                               |                               |
	|         | -- docker pull busybox                   |                                          |                         |         |                               |                               |
	| start   | -p                                       | test-preload-20210310200323-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:06:54 GMT | Wed, 10 Mar 2021 20:08:51 GMT |
	|         | test-preload-20210310200323-6496         |                                          |                         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr          |                                          |                         |         |                               |                               |
	|         | -v=1 --wait=true --driver=docker         |                                          |                         |         |                               |                               |
	|         | --kubernetes-version=v1.17.3             |                                          |                         |         |                               |                               |
	| ssh     | -p                                       | test-preload-20210310200323-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:08:51 GMT | Wed, 10 Mar 2021 20:08:54 GMT |
	|         | test-preload-20210310200323-6496         |                                          |                         |         |                               |                               |
	|         | -- docker images                         |                                          |                         |         |                               |                               |
	| delete  | -p                                       | test-preload-20210310200323-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:08:54 GMT | Wed, 10 Mar 2021 20:09:05 GMT |
	|         | test-preload-20210310200323-6496         |                                          |                         |         |                               |                               |
	| start   | -p                                       | scheduled-stop-20210310200905-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:09:06 GMT | Wed, 10 Mar 2021 20:11:51 GMT |
	|         | scheduled-stop-20210310200905-6496       |                                          |                         |         |                               |                               |
	|         | --memory=1900 --driver=docker            |                                          |                         |         |                               |                               |
	| stop    | -p                                       | scheduled-stop-20210310200905-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:11:52 GMT | Wed, 10 Mar 2021 20:11:54 GMT |
	|         | scheduled-stop-20210310200905-6496       |                                          |                         |         |                               |                               |
	|         | --schedule 5m                            |                                          |                         |         |                               |                               |
	| ssh     | -p                                       | scheduled-stop-20210310200905-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:11:57 GMT | Wed, 10 Mar 2021 20:11:59 GMT |
	|         | scheduled-stop-20210310200905-6496       |                                          |                         |         |                               |                               |
	|         | -- sudo systemctl show                   |                                          |                         |         |                               |                               |
	|         | minikube-scheduled-stop --no-page        |                                          |                         |         |                               |                               |
	| stop    | -p                                       | scheduled-stop-20210310200905-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:12:00 GMT | Wed, 10 Mar 2021 20:12:02 GMT |
	|         | scheduled-stop-20210310200905-6496       |                                          |                         |         |                               |                               |
	|         | --schedule 5s                            |                                          |                         |         |                               |                               |
	| delete  | -p                                       | scheduled-stop-20210310200905-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:12:26 GMT | Wed, 10 Mar 2021 20:12:35 GMT |
	|         | scheduled-stop-20210310200905-6496       |                                          |                         |         |                               |                               |
	| start   | -p                                       | skaffold-20210310201235-6496             | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:12:37 GMT | Wed, 10 Mar 2021 20:15:24 GMT |
	|         | skaffold-20210310201235-6496             |                                          |                         |         |                               |                               |
	|         | --memory=2600 --driver=docker            |                                          |                         |         |                               |                               |
	| -p      | skaffold-20210310201235-6496             | skaffold-20210310201235-6496             | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:15:28 GMT | Wed, 10 Mar 2021 20:15:41 GMT |
	|         | logs -n 25                               |                                          |                         |         |                               |                               |
	| delete  | -p                                       | skaffold-20210310201235-6496             | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:15:46 GMT | Wed, 10 Mar 2021 20:15:57 GMT |
	|         | skaffold-20210310201235-6496             |                                          |                         |         |                               |                               |
	| delete  | -p                                       | insufficient-storage-20210310201557-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:16:29 GMT | Wed, 10 Mar 2021 20:16:37 GMT |
	|         | insufficient-storage-20210310201557-6496 |                                          |                         |         |                               |                               |
	| delete  | -p pause-20210310201637-6496             | pause-20210310201637-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:32:24 GMT | Wed, 10 Mar 2021 20:32:49 GMT |
	| -p      | offline-docker-20210310201637-6496       | offline-docker-20210310201637-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:32:04 GMT | Wed, 10 Mar 2021 20:33:57 GMT |
	|         | logs -n 25                               |                                          |                         |         |                               |                               |
	| delete  | -p                                       | offline-docker-20210310201637-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:34:20 GMT | Wed, 10 Mar 2021 20:34:47 GMT |
	|         | offline-docker-20210310201637-6496       |                                          |                         |         |                               |                               |
	| stop    | -p                                       | kubernetes-upgrade-20210310201637-6496   | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:39:52 GMT | Wed, 10 Mar 2021 20:40:10 GMT |
	|         | kubernetes-upgrade-20210310201637-6496   |                                          |                         |         |                               |                               |
	| start   | -p nospam-20210310201637-6496            | nospam-20210310201637-6496               | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:16:38 GMT | Wed, 10 Mar 2021 20:40:39 GMT |
	|         | -n=1 --memory=2250                       |                                          |                         |         |                               |                               |
	|         | --wait=false --driver=docker             |                                          |                         |         |                               |                               |
	| -p      | nospam-20210310201637-6496               | nospam-20210310201637-6496               | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:41:42 GMT | Wed, 10 Mar 2021 20:44:25 GMT |
	|         | logs -n 25                               |                                          |                         |         |                               |                               |
	| delete  | -p nospam-20210310201637-6496            | nospam-20210310201637-6496               | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:44:37 GMT | Wed, 10 Mar 2021 20:44:59 GMT |
	|---------|------------------------------------------|------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/03/10 20:45:00
	* Running on machine: windows-server-1
	* Binary: Built with gc go1.16 for windows/amd64
	* Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	* I0310 20:45:00.255205   12928 out.go:239] Setting OutFile to fd 1756 ...
	* I0310 20:45:00.257201   12928 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 20:45:00.257201   12928 out.go:252] Setting ErrFile to fd 1704...
	* I0310 20:45:00.257201   12928 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 20:45:00.274317   12928 out.go:246] Setting JSON to false
	* I0310 20:45:00.277206   12928 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":34566,"bootTime":1615374534,"procs":122,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	* W0310 20:45:00.277206   12928 start.go:116] gopshost.Virtualization returned error: not implemented yet
	* I0310 20:45:00.282208   12928 out.go:129] * [old-k8s-version-20210310204459-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	* I0310 20:44:56.695722    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:44:57.216907    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:44:57.695019    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:44:58.201057    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:44:58.705627    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:44:59.202503    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:44:59.691383    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:00.205977    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:00.706532    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:01.201838    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:00.284207   12928 out.go:129]   - MINIKUBE_LOCATION=10722
	* I0310 20:45:00.289221   12928 driver.go:323] Setting default libvirt URI to qemu:///system
	* I0310 20:45:00.824356   12928 docker.go:119] docker version: linux-20.10.2
	* I0310 20:45:00.836917   12928 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 20:45:01.866558   12928 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0296417s)
	* I0310 20:45:01.869067   12928 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:105 OomKillDisable:true NGoroutines:91 SystemTime:2021-03-10 20:45:01.3861684 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 20:45:01.873618   12928 out.go:129] * Using the docker driver based on user configuration
	* I0310 20:45:01.873897   12928 start.go:276] selected driver: docker
	* I0310 20:45:01.873897   12928 start.go:718] validating driver "docker" against <nil>
	* I0310 20:45:01.873897   12928 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	* I0310 20:45:03.020868   12928 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 20:45:04.040267   12928 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0194001s)
	* I0310 20:45:04.040939   12928 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:107 OomKillDisable:true NGoroutines:91 SystemTime:2021-03-10 20:45:03.5743498 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 20:45:04.041630   12928 start_flags.go:253] no existing cluster config was found, will generate one from the flags 
	* I0310 20:45:04.041988   12928 start_flags.go:717] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	* I0310 20:45:04.042380   12928 cni.go:74] Creating CNI manager for ""
	* I0310 20:45:04.042380   12928 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	* I0310 20:45:04.042380   12928 start_flags.go:398] config:
	* {Name:old-k8s-version-20210310204459-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210310204459-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 20:45:04.046348   12928 out.go:129] * Starting control plane node old-k8s-version-20210310204459-6496 in cluster old-k8s-version-20210310204459-6496
	* I0310 20:45:04.711858   12928 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	* I0310 20:45:04.711858   12928 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	* I0310 20:45:04.712630   12928 preload.go:97] Checking if preload exists for k8s version v1.14.0 and runtime docker
	* I0310 20:45:04.712910   12928 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4
	* I0310 20:45:04.713093   12928 cache.go:54] Caching tarball of preloaded images
	* I0310 20:45:04.713093   12928 preload.go:131] Found C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	* I0310 20:45:04.713312   12928 cache.go:57] Finished verifying existence of preloaded tar for  v1.14.0 on docker
	* I0310 20:45:04.713312   12928 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\config.json ...
	* I0310 20:45:04.713879   12928 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\config.json: {Name:mkb0c21784bf43313016b1fffce280513139bf15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 20:45:04.728555   12928 cache.go:185] Successfully downloaded all kic artifacts
	* I0310 20:45:04.729352   12928 start.go:313] acquiring machines lock for old-k8s-version-20210310204459-6496: {Name:mk75b6b2b8c7e9551ee9b4fdfdcee0e639bfef0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:45:04.729624   12928 start.go:317] acquired machines lock for "old-k8s-version-20210310204459-6496" in 271.7µs
	* I0310 20:45:04.730175   12928 start.go:89] Provisioning new machine with config: &{Name:old-k8s-version-20210310204459-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210310204459-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	* I0310 20:45:04.730381   12928 start.go:126] createHost starting for "" (driver="docker")
	* I0310 20:45:04.732579   12928 out.go:150] * Creating docker container (CPUs=2, Memory=2200MB) ...
	* I0310 20:45:04.733599   12928 start.go:160] libmachine.API.Create for "old-k8s-version-20210310204459-6496" (driver="docker")
	* I0310 20:45:04.733599   12928 client.go:168] LocalClient.Create starting
	* I0310 20:45:04.734591   12928 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\ca.pem
	* I0310 20:45:04.734591   12928 main.go:121] libmachine: Decoding PEM data...
	* I0310 20:45:04.734591   12928 main.go:121] libmachine: Parsing certificate...
	* I0310 20:45:04.734591   12928 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\cert.pem
	* I0310 20:45:04.735594   12928 main.go:121] libmachine: Decoding PEM data...
	* I0310 20:45:04.735594   12928 main.go:121] libmachine: Parsing certificate...
	* I0310 20:45:04.764196   12928 cli_runner.go:115] Run: docker network inspect old-k8s-version-20210310204459-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	* I0310 20:45:01.684381    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:02.186781    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:02.696042    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:03.194415    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:03.700992    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:04.190158    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:04.695427    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:05.198442    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:05.714591    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* W0310 20:45:05.343119   12928 cli_runner.go:162] docker network inspect old-k8s-version-20210310204459-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	* I0310 20:45:05.347432   12928 network_create.go:240] running [docker network inspect old-k8s-version-20210310204459-6496] to gather additional debugging logs...
	* I0310 20:45:05.347432   12928 cli_runner.go:115] Run: docker network inspect old-k8s-version-20210310204459-6496
	* W0310 20:45:05.940304   12928 cli_runner.go:162] docker network inspect old-k8s-version-20210310204459-6496 returned with exit code 1
	* I0310 20:45:05.940304   12928 network_create.go:243] error running [docker network inspect old-k8s-version-20210310204459-6496]: docker network inspect old-k8s-version-20210310204459-6496: exit status 1
	* stdout:
	* []
	* 
	* stderr:
	* Error: No such network: old-k8s-version-20210310204459-6496
	* I0310 20:45:05.940304   12928 network_create.go:245] output of [docker network inspect old-k8s-version-20210310204459-6496]: -- stdout --
	* []
	* 
	* -- /stdout --
	* ** stderr ** 
	* Error: No such network: old-k8s-version-20210310204459-6496
	* 
	* ** /stderr **
	* I0310 20:45:05.948873   12928 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	* I0310 20:45:06.646529   12928 network.go:193] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	* I0310 20:45:06.647278   12928 network_create.go:91] attempt to create network 192.168.49.0/24 with subnet: old-k8s-version-20210310204459-6496 and gateway 192.168.49.1 and MTU of 1500 ...
	* I0310 20:45:06.654963   12928 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20210310204459-6496
	* W0310 20:45:07.277860   12928 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20210310204459-6496 returned with exit code 1
	* W0310 20:45:07.278667   12928 out.go:191] ! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts
	* I0310 20:45:07.298079   12928 cli_runner.go:115] Run: docker ps -a --format 
	* I0310 20:45:07.911197   12928 cli_runner.go:115] Run: docker volume create old-k8s-version-20210310204459-6496 --label name.minikube.sigs.k8s.io=old-k8s-version-20210310204459-6496 --label created_by.minikube.sigs.k8s.io=true
	* I0310 20:45:08.528283   12928 oci.go:102] Successfully created a docker volume old-k8s-version-20210310204459-6496
	* I0310 20:45:08.536913   12928 cli_runner.go:115] Run: docker run --rm --name old-k8s-version-20210310204459-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20210310204459-6496 --entrypoint /usr/bin/test -v old-k8s-version-20210310204459-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib
	* I0310 20:45:06.694675    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:07.204731    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:07.705665    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:08.205101    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:08.700491    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:09.192521    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:09.705780    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:10.192570    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:10.702418    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:11.196371    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:08.341903    7164 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040: (30.6000593s)
	* I0310 20:45:08.341903    7164 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 from cache
	* I0310 20:45:08.342156    7164 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736
	* I0310 20:45:08.350044    7164 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736
	* I0310 20:45:10.988019   21276 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force": (1m42.7021393s)
	* I0310 20:45:11.013289   21276 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	* I0310 20:45:11.130763   21276 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format=
	* I0310 20:45:11.813937   21276 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	* I0310 20:45:11.837929   21276 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	* I0310 20:45:11.953618   21276 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	* stdout:
	* 
	* stderr:
	* ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	* I0310 20:45:11.953618   21276 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	* I0310 20:45:13.412101   12928 cli_runner.go:168] Completed: docker run --rm --name old-k8s-version-20210310204459-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20210310204459-6496 --entrypoint /usr/bin/test -v old-k8s-version-20210310204459-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib: (4.8748804s)
	* I0310 20:45:13.412101   12928 oci.go:106] Successfully prepared a docker volume old-k8s-version-20210310204459-6496
	* I0310 20:45:13.412101   12928 preload.go:97] Checking if preload exists for k8s version v1.14.0 and runtime docker
	* I0310 20:45:13.412101   12928 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4
	* I0310 20:45:13.412101   12928 kic.go:175] Starting extracting preloaded images to volume ...
	* I0310 20:45:13.420956   12928 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 20:45:13.420956   12928 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20210310204459-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir
	* W0310 20:45:14.051843   12928 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20210310204459-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	* I0310 20:45:14.052427   12928 kic.go:182] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20210310204459-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	* stdout:
	* 
	* stderr:
	* docker: Error response from daemon: status code not OK but 500: [binary-serialized System.Exception payload omitted]
	* 
	* The notification platform is unavailable.
	* 
	* ���?   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)
	*    at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.WPF\PromptShareDirectory.cs:line 53
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__8.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 95
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 55
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.HttpApi\Controllers\FilesharingController.cs:line 21
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()
	* CreateToastNotifier
	* Windows.UI, Version=255.255.255.255, Culture=neutral, PublicKeyToken=null, ContentType=WindowsRuntime
	* Windows.UI.Notifications.ToastNotificationManager
	* Windows.UI.Notifications.ToastNotifier CreateToastNotifier(System.String)
	* RestrictedDescription: The notification platform is unavailable.
	* [RestrictedErrorReference / RestrictedCapabilitySid / __RestrictedErrorObject: binary-serialized data omitted]
	* See 'docker run --help'.
	* I0310 20:45:14.432368   12928 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0103571s)
	* I0310 20:45:14.432732   12928 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:109 OomKillDisable:true NGoroutines:94 SystemTime:2021-03-10 20:45:13.9415889 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 20:45:14.442693   12928 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	* I0310 20:45:11.695535    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:12.194937    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:12.703783    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:13.192468    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:13.698542    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:14.199015    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:14.714365    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:15.196816    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:15.690427    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:16.194198    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:14.587574    6776 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052: (26.6305168s)
	* I0310 20:45:14.587574    6776 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 from cache
	* I0310 20:45:14.587937    6776 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984
	* I0310 20:45:14.596952    6776 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984
	* I0310 20:45:15.468738   12928 cli_runner.go:168] Completed: docker info --format "'{{json .SecurityOptions}}'": (1.0260466s)
	* I0310 20:45:15.479860   12928 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20210310204459-6496 --name old-k8s-version-20210310204459-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20210310204459-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20210310204459-6496 --volume old-k8s-version-20210310204459-6496:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
	* I0310 20:45:18.941811   12928 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20210310204459-6496 --name old-k8s-version-20210310204459-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20210310204459-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20210310204459-6496 --volume old-k8s-version-20210310204459-6496:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e: (3.4619553s)
	* I0310 20:45:18.951318   12928 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210310204459-6496 --format=
	* I0310 20:45:19.524049   12928 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210310204459-6496 --format=
	* I0310 20:45:16.695445    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:17.197269    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:17.708055    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:18.195592    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:18.701222    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:19.193439    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format=
	* I0310 20:45:20.600504    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-apiserver --format=: (1.4070667s)
	* I0310 20:45:20.600775    9740 logs.go:255] 1 containers: [cc5bf7d7971c]
	* I0310 20:45:20.615980    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format=
	* I0310 20:45:20.157679   12928 cli_runner.go:115] Run: docker exec old-k8s-version-20210310204459-6496 stat /var/lib/dpkg/alternatives/iptables
	* I0310 20:45:21.963622   12928 cli_runner.go:168] Completed: docker exec old-k8s-version-20210310204459-6496 stat /var/lib/dpkg/alternatives/iptables: (1.8059453s)
	* I0310 20:45:21.964004   12928 oci.go:278] the created container "old-k8s-version-20210310204459-6496" has a running status.
	* I0310 20:45:21.964004   12928 kic.go:206] Creating ssh key for kic: C:\Users\jenkins\.minikube\machines\old-k8s-version-20210310204459-6496\id_rsa...
	* I0310 20:45:22.361837   12928 kic_runner.go:188] docker (temp): C:\Users\jenkins\.minikube\machines\old-k8s-version-20210310204459-6496\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	* I0310 20:45:23.348600   12928 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210310204459-6496 --format=
	* I0310 20:45:23.958196   12928 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	* I0310 20:45:23.958196   12928 kic_runner.go:115] Args: [docker exec --privileged old-k8s-version-20210310204459-6496 chown docker:docker /home/docker/.ssh/authorized_keys]
	* I0310 20:45:21.664066    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_etcd --format=: (1.0479358s)
	* I0310 20:45:21.664225    9740 logs.go:255] 1 containers: [d04b7875ec72]
	* I0310 20:45:21.676123    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format=
	* I0310 20:45:23.237959    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_coredns --format=: (1.5618383s)
	* I0310 20:45:23.238104    9740 logs.go:255] 0 containers: []
	* W0310 20:45:23.238104    9740 logs.go:257] No container was found matching "coredns"
	* I0310 20:45:23.248892    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format=
	* I0310 20:45:25.435301    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-scheduler --format=: (2.1864127s)
	* I0310 20:45:25.435301    9740 logs.go:255] 1 containers: [adb946d74113]
	* I0310 20:45:25.454068    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format=
	* I0310 20:45:26.736715    7164 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736: (18.3866955s)
	* I0310 20:45:26.737468    7164 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 from cache
	* I0310 20:45:26.737468    7164 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052
	* I0310 20:45:26.748904    7164 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052
	* I0310 20:45:24.917870   12928 kic.go:240] ensuring only current user has permissions to key file located at : C:\Users\jenkins\.minikube\machines\old-k8s-version-20210310204459-6496\id_rsa...
	* I0310 20:45:25.696877   12928 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210310204459-6496 --format=
	* I0310 20:45:26.334097   12928 machine.go:88] provisioning docker machine ...
	* I0310 20:45:26.334097   12928 ubuntu.go:169] provisioning hostname "old-k8s-version-20210310204459-6496"
	* I0310 20:45:26.345166   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	* I0310 20:45:26.954556   12928 main.go:121] libmachine: Using SSH client type: native
	* I0310 20:45:26.971298   12928 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55138 <nil> <nil>}
	* I0310 20:45:26.971298   12928 main.go:121] libmachine: About to run SSH command:
	* sudo hostname old-k8s-version-20210310204459-6496 && echo "old-k8s-version-20210310204459-6496" | sudo tee /etc/hostname
	* I0310 20:45:26.980262   12928 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	* I0310 20:45:27.207756    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-proxy --format=: (1.7536894s)
	* I0310 20:45:27.208044    9740 logs.go:255] 0 containers: []
	* W0310 20:45:27.208044    9740 logs.go:257] No container was found matching "kube-proxy"
	* I0310 20:45:27.217871    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format=
	* I0310 20:45:28.137842    9740 logs.go:255] 0 containers: []
	* W0310 20:45:28.138054    9740 logs.go:257] No container was found matching "kubernetes-dashboard"
	* I0310 20:45:28.147172    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format=
	* I0310 20:45:29.350127    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_storage-provisioner --format=: (1.2029559s)
	* I0310 20:45:29.350303    9740 logs.go:255] 0 containers: []
	* W0310 20:45:29.350303    9740 logs.go:257] No container was found matching "storage-provisioner"
	* I0310 20:45:29.359572    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format=
	* I0310 20:45:30.463150    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-controller-manager --format=: (1.1025668s)
	* I0310 20:45:30.463288    9740 logs.go:255] 1 containers: [66d44e1d7560]
	* I0310 20:45:30.463288    9740 logs.go:122] Gathering logs for container status ...
	* I0310 20:45:30.463429    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	* I0310 20:45:30.942198   12928 main.go:121] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20210310204459-6496
	* 
	* I0310 20:45:30.949618   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	* I0310 20:45:31.551362   12928 main.go:121] libmachine: Using SSH client type: native
	* I0310 20:45:31.551704   12928 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55138 <nil> <nil>}
	* I0310 20:45:31.551979   12928 main.go:121] libmachine: About to run SSH command:
	* 
	* 		if ! grep -xq '.*\sold-k8s-version-20210310204459-6496' /etc/hosts; then
	* 			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	* 				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20210310204459-6496/g' /etc/hosts;
	* 			else 
	* 				echo '127.0.1.1 old-k8s-version-20210310204459-6496' | sudo tee -a /etc/hosts; 
	* 			fi
	* 		fi
	* I0310 20:45:32.358914   12928 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	* I0310 20:45:32.358914   12928 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	* I0310 20:45:32.358914   12928 ubuntu.go:177] setting up certificates
	* I0310 20:45:32.358914   12928 provision.go:83] configureAuth start
	* I0310 20:45:32.370381   12928 cli_runner.go:115] Run: docker container inspect -f "" old-k8s-version-20210310204459-6496
	* I0310 20:45:32.987615   12928 provision.go:137] copyHostCerts
	* I0310 20:45:32.988467   12928 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	* I0310 20:45:32.988617   12928 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	* I0310 20:45:32.988818   12928 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	* I0310 20:45:32.994199   12928 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	* I0310 20:45:32.994320   12928 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	* I0310 20:45:32.994911   12928 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	* I0310 20:45:33.002984   12928 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	* I0310 20:45:33.003152   12928 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	* I0310 20:45:33.003729   12928 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	* I0310 20:45:33.006728   12928 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.old-k8s-version-20210310204459-6496 san=[172.17.0.3 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20210310204459-6496]
	* I0310 20:45:33.248434   12928 provision.go:165] copyRemoteCerts
	* I0310 20:45:33.258428   12928 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	* I0310 20:45:33.266434   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	* I0310 20:45:33.881631   12928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55138 SSHKeyPath:C:\Users\jenkins\.minikube\machines\old-k8s-version-20210310204459-6496\id_rsa Username:docker}
	* I0310 20:45:34.527804   12928 ssh_runner.go:189] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.2693782s)
	* I0310 20:45:34.528542   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	* I0310 20:45:31.791639    9740 ssh_runner.go:189] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (1.328072s)
	* I0310 20:45:31.793003    9740 logs.go:122] Gathering logs for dmesg ...
	* I0310 20:45:31.794771    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	* I0310 20:45:32.152567    9740 logs.go:122] Gathering logs for describe nodes ...
	* I0310 20:45:32.152567    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	* I0310 20:45:35.208075    9740 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (3.055512s)
	* W0310 20:45:35.208193    9740 logs.go:129] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	* stdout:
	* 
	* stderr:
	* The connection to the server localhost:8443 was refused - did you specify the right host or port?
	*  output: 
	* ** stderr ** 
	* The connection to the server localhost:8443 was refused - did you specify the right host or port?
	* 
	* ** /stderr **
	* I0310 20:45:35.208568    9740 logs.go:122] Gathering logs for kube-apiserver [cc5bf7d7971c] ...
	* I0310 20:45:35.208568    9740 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 cc5bf7d7971c"
	* I0310 20:45:34.862013   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	* I0310 20:45:35.103719   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1277 bytes)
	* I0310 20:45:35.279505   12928 provision.go:86] duration metric: configureAuth took 2.9205948s
	* I0310 20:45:35.279505   12928 ubuntu.go:193] setting minikube options for container-runtime
	* I0310 20:45:35.296522   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	* I0310 20:45:35.956818   12928 main.go:121] libmachine: Using SSH client type: native
	* I0310 20:45:35.957808   12928 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55138 <nil> <nil>}
	* I0310 20:45:35.958075   12928 main.go:121] libmachine: About to run SSH command:
	* df --output=fstype / | tail -n 1
	* I0310 20:45:36.590941   12928 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	* 
	* I0310 20:45:36.590941   12928 ubuntu.go:71] root file system type: overlay
	* I0310 20:45:36.600870   12928 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	* I0310 20:45:36.620128   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	* I0310 20:45:37.213213   12928 main.go:121] libmachine: Using SSH client type: native
	* I0310 20:45:37.214230   12928 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55138 <nil> <nil>}
	* I0310 20:45:37.214230   12928 main.go:121] libmachine: About to run SSH command:
	* sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP \$MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* " | sudo tee /lib/systemd/system/docker.service.new
	* I0310 20:45:38.141733   12928 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP $MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* 
	* I0310 20:45:38.155124   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	* I0310 20:45:38.755494   12928 main.go:121] libmachine: Using SSH client type: native
	* I0310 20:45:38.756832   12928 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55138 <nil> <nil>}
	* I0310 20:45:38.756832   12928 main.go:121] libmachine: About to run SSH command:
	* sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	* I0310 20:45:37.880592    9740 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 cc5bf7d7971c": (2.6720271s)
	* I0310 20:45:37.917053    9740 logs.go:122] Gathering logs for kube-controller-manager [66d44e1d7560] ...
	* I0310 20:45:37.918006    9740 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 66d44e1d7560"
	* I0310 20:45:40.285234    9740 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 66d44e1d7560": (2.3672309s)
	* I0310 20:45:40.286872    9740 logs.go:122] Gathering logs for Docker ...
	* I0310 20:45:40.287037    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	* I0310 20:45:40.517277    9740 logs.go:122] Gathering logs for kubelet ...
	* I0310 20:45:40.517456    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	* I0310 20:45:39.199048    6776 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984: (24.6021279s)
	* I0310 20:45:39.199048    6776 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 from cache
	* I0310 20:45:39.199048    6776 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088
	* I0310 20:45:39.210230    6776 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088
	* I0310 20:45:41.422297    9740 logs.go:122] Gathering logs for etcd [d04b7875ec72] ...
	* I0310 20:45:41.422297    9740 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 d04b7875ec72"
	* I0310 20:45:45.676953    9740 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 d04b7875ec72": (4.2546607s)
	* I0310 20:45:45.709061    9740 logs.go:122] Gathering logs for kube-scheduler [adb946d74113] ...
	* I0310 20:45:45.709061    9740 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 adb946d74113"
	* I0310 20:45:49.748208   12928 main.go:121] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-01-29 14:31:32.000000000 +0000
	* +++ /lib/systemd/system/docker.service.new	2021-03-10 20:45:38.114142000 +0000
	* @@ -1,30 +1,32 @@
	*  [Unit]
	*  Description=Docker Application Container Engine
	*  Documentation=https://docs.docker.com
	* +BindsTo=containerd.service
	*  After=network-online.target firewalld.service containerd.service
	*  Wants=network-online.target
	* -Requires=docker.socket containerd.service
	* +Requires=docker.socket
	* +StartLimitBurst=3
	* +StartLimitIntervalSec=60
	*  
	*  [Service]
	*  Type=notify
	* -# the default is not to use systemd for cgroups because the delegate issues still
	* -# exists and systemd currently does not support the cgroup feature set required
	* -# for containers run by docker
	* -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	* -ExecReload=/bin/kill -s HUP $MAINPID
	* -TimeoutSec=0
	* -RestartSec=2
	* -Restart=always
	* -
	* -# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	* -# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	* -# to make them work for either version of systemd.
	* -StartLimitBurst=3
	* +Restart=on-failure
	*  
	* -# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	* -# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	* -# this option work for either version of systemd.
	* -StartLimitInterval=60s
	* +
	* +
	* +# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* +# The base configuration already specifies an 'ExecStart=...' command. The first directive
	* +# here is to clear out that command inherited from the base configuration. Without this,
	* +# the command from the base configuration and the command specified here are treated as
	* +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* +# will catch this invalid input and refuse to start the service with an error like:
	* +#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* +
	* +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* +# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* +ExecStart=
	* +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* +ExecReload=/bin/kill -s HUP $MAINPID
	*  
	*  # Having non-zero Limit*s causes performance problems due to accounting overhead
	*  # in the kernel. We recommend using cgroups to do container-local accounting.
	* @@ -32,16 +34,16 @@
	*  LimitNPROC=infinity
	*  LimitCORE=infinity
	*  
	* -# Comment TasksMax if your systemd version does not support it.
	* -# Only systemd 226 and above support this option.
	* +# Uncomment TasksMax if your systemd version supports it.
	* +# Only systemd 226 and above support this version.
	*  TasksMax=infinity
	* +TimeoutStartSec=0
	*  
	*  # set delegate yes so that systemd does not reset the cgroups of docker containers
	*  Delegate=yes
	*  
	*  # kill only the docker process, not all processes in the cgroup
	*  KillMode=process
	* -OOMScoreAdjust=-500
	*  
	*  [Install]
	*  WantedBy=multi-user.target
	* Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	* Executing: /lib/systemd/systemd-sysv-install enable docker
	* 
	* I0310 20:45:49.748208   12928 machine.go:91] provisioned docker machine in 23.4141415s
	* I0310 20:45:49.748208   12928 client.go:171] LocalClient.Create took 45.0146683s
	* I0310 20:45:49.748208   12928 start.go:168] duration metric: libmachine.API.Create for "old-k8s-version-20210310204459-6496" took 45.0146683s
	* I0310 20:45:49.748208   12928 start.go:267] post-start starting for "old-k8s-version-20210310204459-6496" (driver="docker")
	* I0310 20:45:49.748208   12928 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	* I0310 20:45:49.758614   12928 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	* I0310 20:45:49.766365   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	* I0310 20:45:50.110957    9740 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 adb946d74113": (4.4019021s)
	* I0310 20:45:50.023275    7164 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052: (23.2729983s)
	* I0310 20:45:50.023418    7164 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 from cache
	* I0310 20:45:50.023418    7164 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552
	* I0310 20:45:50.036621    7164 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552
	* I0310 20:45:50.422276   12928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55138 SSHKeyPath:C:\Users\jenkins\.minikube\machines\old-k8s-version-20210310204459-6496\id_rsa Username:docker}
	* I0310 20:45:50.905494   12928 ssh_runner.go:189] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.1466426s)
	* I0310 20:45:50.914491   12928 ssh_runner.go:149] Run: cat /etc/os-release
	* I0310 20:45:50.966715   12928 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	* I0310 20:45:50.966715   12928 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	* I0310 20:45:50.966715   12928 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	* I0310 20:45:50.966715   12928 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	* I0310 20:45:50.967000   12928 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	* I0310 20:45:50.967712   12928 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	* I0310 20:45:50.969334   12928 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	* I0310 20:45:50.969334   12928 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	* I0310 20:45:50.990144   12928 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	* I0310 20:45:51.083698   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	* I0310 20:45:51.268611   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	* I0310 20:45:51.511504   12928 start.go:270] post-start completed in 1.7632983s
	* I0310 20:45:51.550746   12928 cli_runner.go:115] Run: docker container inspect -f "" old-k8s-version-20210310204459-6496
	* I0310 20:45:52.155265   12928 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\config.json ...
	* I0310 20:45:52.188530   12928 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	* I0310 20:45:52.196673   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	* I0310 20:45:52.867889   12928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55138 SSHKeyPath:C:\Users\jenkins\.minikube\machines\old-k8s-version-20210310204459-6496\id_rsa Username:docker}
	* I0310 20:45:53.353463   12928 ssh_runner.go:189] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.1647767s)
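The `df -h /var | awk 'NR==2{print $5}'` probe completed above extracts the "Use%" column from the data row of `df` output for the filesystem holding a path. A standalone sketch (adding `-P` for POSIX single-line output, an assumption for portability rather than something this run used):

```shell
# Capacity probe: df reports usage for the filesystem holding /tmp;
# awk picks column 5 ("Use%") from the second (data) line.
# -P keeps each filesystem on one line so NR==2 is reliable.
usage=$(df -P -h /tmp | awk 'NR==2{print $5}')
echo "$usage"
```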
	* I0310 20:45:53.353463   12928 start.go:129] duration metric: createHost completed in 48.6231448s
	* I0310 20:45:53.354459   12928 start.go:80] releasing machines lock for "old-k8s-version-20210310204459-6496", held for 48.6235465s
	* I0310 20:45:53.362679   12928 cli_runner.go:115] Run: docker container inspect -f "" old-k8s-version-20210310204459-6496
	* I0310 20:45:53.939528   12928 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	* I0310 20:45:53.949438   12928 ssh_runner.go:149] Run: systemctl --version
	* I0310 20:45:53.953944   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	* I0310 20:45:53.956958   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	* I0310 20:45:54.580367   12928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55138 SSHKeyPath:C:\Users\jenkins\.minikube\machines\old-k8s-version-20210310204459-6496\id_rsa Username:docker}
	* I0310 20:45:54.615840   12928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55138 SSHKeyPath:C:\Users\jenkins\.minikube\machines\old-k8s-version-20210310204459-6496\id_rsa Username:docker}
	* I0310 20:45:52.682538    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:55.371189    9740 ssh_runner.go:189] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.6886539s)
	* I0310 20:45:55.371189    9740 api_server.go:68] duration metric: took 1m36.1976237s to wait for apiserver process to appear ...
	* I0310 20:45:55.371189    9740 api_server.go:84] waiting for apiserver healthz status ...
	* I0310 20:45:55.371189    9740 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55130/healthz ...
	* I0310 20:45:54.865625   12928 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	* I0310 20:45:55.158095   12928 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.2181659s)
	* I0310 20:45:55.168864   12928 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	* I0310 20:45:55.262746   12928 cruntime.go:206] skipping containerd shutdown because we are bound to it
	* I0310 20:45:55.271819   12928 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	* I0310 20:45:55.396624   12928 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	* image-endpoint: unix:///var/run/dockershim.sock
	* " | sudo tee /etc/crictl.yaml"
	* I0310 20:45:55.604086   12928 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	* I0310 20:45:55.693004   12928 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	* I0310 20:45:56.722346   12928 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.0283425s)
	* I0310 20:45:56.732654   12928 ssh_runner.go:149] Run: sudo systemctl start docker
	* I0310 20:45:56.839967   12928 ssh_runner.go:149] Run: docker version --format 
	* I0310 20:45:57.520716   12928 out.go:150] * Preparing Kubernetes v1.14.0 on Docker 20.10.3 ...
	* I0310 20:45:57.530110   12928 cli_runner.go:115] Run: docker exec -t old-k8s-version-20210310204459-6496 dig +short host.docker.internal
	* I0310 20:45:58.589310   12928 cli_runner.go:168] Completed: docker exec -t old-k8s-version-20210310204459-6496 dig +short host.docker.internal: (1.0590077s)
	* I0310 20:45:58.589761   12928 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	* I0310 20:45:58.599975   12928 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	* I0310 20:45:58.629255   12928 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
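The `/etc/hosts` rewrite above uses a replace-then-append pattern: strip any stale entry for the name, append the fresh one, and copy the result back over the original. A sketch of the same pattern on a demo file (no sudo; the IPs mirror the log but the file path is hypothetical):

```shell
# Hosts-entry refresh: remove any old line for the name, append the new
# mapping, then install the rebuilt file in one copy.
hosts=/tmp/demo-hosts
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"
{ grep -v $'\thost.minikube.internal$' "$hosts"
  printf '192.168.65.2\thost.minikube.internal\n'; } > /tmp/h.$$
cp /tmp/h.$$ "$hosts"
grep 'host.minikube.internal' "$hosts"   # only the fresh 192.168.65.2 entry remains
```

Writing to a temp file first and copying back (rather than redirecting onto `$hosts` directly) matters because the `grep` inside the group is still reading the original file.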
	* I0310 20:45:58.790284   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	* I0310 20:45:59.375638   12928 localpath.go:92] copying C:\Users\jenkins\.minikube\client.crt -> C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\client.crt
	* I0310 20:45:59.381369   12928 localpath.go:117] copying C:\Users\jenkins\.minikube\client.key -> C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\client.key
	* I0310 20:45:59.385304   12928 preload.go:97] Checking if preload exists for k8s version v1.14.0 and runtime docker
	* I0310 20:45:59.385766   12928 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4
	* I0310 20:45:59.394606   12928 ssh_runner.go:149] Run: docker images --format :
	* I0310 20:45:59.071883    6776 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088: (19.8616798s)
	* I0310 20:45:59.071883    6776 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 from cache
	* I0310 20:45:59.071883    6776 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056
	* I0310 20:45:59.079230    6776 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056
	* I0310 20:45:59.844362   12928 docker.go:423] Got preloaded images: 
	* I0310 20:45:59.844362   12928 docker.go:429] k8s.gcr.io/kube-proxy:v1.14.0 wasn't preloaded
	* I0310 20:45:59.855645   12928 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	* I0310 20:45:59.949427   12928 ssh_runner.go:149] Run: which lz4
	* I0310 20:46:00.005687   12928 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	* I0310 20:46:00.075060   12928 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	* I0310 20:46:00.075364   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (488333642 bytes)
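The failed `stat -c "%s %y"` call above is how the runner probes for an existing preload tarball before deciding to scp it: a single `stat` both tests existence and returns size/mtime, and a missing file exits non-zero. A sketch of that probe (file path is a made-up demo):

```shell
# Existence-and-metadata probe: stat exits non-zero for a missing file,
# and reports size (%s) once the file exists.
f=/tmp/probe-file
rm -f "$f"
stat -c "%s %y" "$f" 2>/dev/null || echo absent   # prints "absent"
echo hi > "$f"
stat -c "%s" "$f"   # prints "3" ("hi" plus trailing newline)
```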
	* I0310 20:47:10.617697   12928 docker.go:388] Took 70.619571 seconds to copy over tarball
	* I0310 20:47:10.639143   12928 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
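The extraction command above uses tar's `-I` flag, which names an external (de)compressor, here `lz4`, to unpack the preload tarball directly into `/var`. A self-contained sketch of the same flag, substituting `gzip` since `lz4` may not be installed everywhere (paths are demo paths):

```shell
# tar -I names the compression filter; the same flag works for both
# creating and extracting. gzip stands in for lz4 here.
mkdir -p /tmp/preload-src /tmp/preload-dst
echo hello > /tmp/preload-src/file.txt
tar -I gzip -C /tmp/preload-src -cf /tmp/preload.tar.gz file.txt
tar -I gzip -C /tmp/preload-dst -xf /tmp/preload.tar.gz
cat /tmp/preload-dst/file.txt   # prints "hello"
```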
	* I0310 20:47:34.237148   10404 out.go:150]   - Generating certificates and keys ...
	* I0310 20:47:34.242767   10404 out.go:150]   - Booting up control plane ...
	* I0310 20:47:34.247785   10404 kubeadm.go:387] StartCluster complete in 10m58.6719909s
	* I0310 20:47:34.257795   10404 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format=
	* I0310 20:47:55.383138   12928 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (44.7436307s)
	* I0310 20:47:55.383138   12928 ssh_runner.go:100] rm: /preloaded.tar.lz4
	* I0310 20:47:57.352441   12928 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	* I0310 20:47:57.410825   12928 ssh_runner.go:316] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3123 bytes)
	* I0310 20:47:57.642121   12928 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	* I0310 20:47:59.081806   12928 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.4396865s)
	* I0310 20:47:59.084048   12928 ssh_runner.go:149] Run: sudo systemctl restart docker
	* I0310 20:48:13.716793   10404 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-apiserver --format=: (39.4587585s)
	* I0310 20:48:13.716793   10404 logs.go:255] 1 containers: [9b71c60e312f]
	* I0310 20:48:13.722356   10404 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format=
	* I0310 20:48:16.306081   12928 ssh_runner.go:189] Completed: sudo systemctl restart docker: (17.2218131s)
	* I0310 20:48:16.318733   12928 ssh_runner.go:149] Run: docker images --format :
	* I0310 20:48:17.207848   12928 docker.go:423] Got preloaded images: -- stdout --
	* kubernetesui/dashboard:v2.1.0
	* gcr.io/k8s-minikube/storage-provisioner:v4
	* kubernetesui/metrics-scraper:v1.0.4
	* k8s.gcr.io/kube-proxy:v1.14.0
	* k8s.gcr.io/kube-controller-manager:v1.14.0
	* k8s.gcr.io/kube-apiserver:v1.14.0
	* k8s.gcr.io/kube-scheduler:v1.14.0
	* k8s.gcr.io/coredns:1.3.1
	* k8s.gcr.io/etcd:3.3.10
	* k8s.gcr.io/pause:3.1
	* 
	* -- /stdout --
	* I0310 20:48:17.208570   12928 cache_images.go:73] Images are preloaded, skipping loading
	* I0310 20:48:17.241235   12928 ssh_runner.go:149] Run: docker info --format 
	* I0310 20:48:19.442860   12928 ssh_runner.go:189] Completed: docker info --format : (2.2016277s)
	* I0310 20:48:19.443320   12928 cni.go:74] Creating CNI manager for ""
	* I0310 20:48:19.443320   12928 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	* I0310 20:48:19.443320   12928 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	* I0310 20:48:19.443567   12928 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.3 APIServerPort:8443 KubernetesVersion:v1.14.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20210310204459-6496 NodeName:old-k8s-version-20210310204459-6496 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	* I0310 20:48:19.443998   12928 kubeadm.go:154] kubeadm config:
	* apiVersion: kubeadm.k8s.io/v1beta1
	* kind: InitConfiguration
	* localAPIEndpoint:
	*   advertiseAddress: 172.17.0.3
	*   bindPort: 8443
	* bootstrapTokens:
	*   - groups:
	*       - system:bootstrappers:kubeadm:default-node-token
	*     ttl: 24h0m0s
	*     usages:
	*       - signing
	*       - authentication
	* nodeRegistration:
	*   criSocket: /var/run/dockershim.sock
	*   name: "old-k8s-version-20210310204459-6496"
	*   kubeletExtraArgs:
	*     node-ip: 172.17.0.3
	*   taints: []
	* ---
	* apiVersion: kubeadm.k8s.io/v1beta1
	* kind: ClusterConfiguration
	* apiServer:
	*   certSANs: ["127.0.0.1", "localhost", "172.17.0.3"]
	*   extraArgs:
	*     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	* controllerManager:
	*   extraArgs:
	*     allocate-node-cidrs: "true"
	*     leader-elect: "false"
	* scheduler:
	*   extraArgs:
	*     leader-elect: "false"
	* certificatesDir: /var/lib/minikube/certs
	* clusterName: old-k8s-version-20210310204459-6496
	* controlPlaneEndpoint: control-plane.minikube.internal:8443
	* dns:
	*   type: CoreDNS
	* etcd:
	*   local:
	*     dataDir: /var/lib/minikube/etcd
	*     extraArgs:
	*       listen-metrics-urls: http://127.0.0.1:2381,http://172.17.0.3:2381
	* kubernetesVersion: v1.14.0
	* networking:
	*   dnsDomain: cluster.local
	*   podSubnet: "10.244.0.0/16"
	*   serviceSubnet: 10.96.0.0/12
	* ---
	* apiVersion: kubelet.config.k8s.io/v1beta1
	* kind: KubeletConfiguration
	* authentication:
	*   x509:
	*     clientCAFile: /var/lib/minikube/certs/ca.crt
	* cgroupDriver: cgroupfs
	* clusterDomain: "cluster.local"
	* # disable disk resource management by default
	* imageGCHighThresholdPercent: 100
	* evictionHard:
	*   nodefs.available: "0%"
	*   nodefs.inodesFree: "0%"
	*   imagefs.available: "0%"
	* failSwapOn: false
	* staticPodPath: /etc/kubernetes/manifests
	* ---
	* apiVersion: kubeproxy.config.k8s.io/v1alpha1
	* kind: KubeProxyConfiguration
	* clusterCIDR: "10.244.0.0/16"
	* metricsBindAddress: 0.0.0.0:10249
	* 
	* I0310 20:48:19.444438   12928 kubeadm.go:919] kubelet [Unit]
	* Wants=docker.socket
	* 
	* [Service]
	* ExecStart=
	* ExecStart=/var/lib/minikube/binaries/v1.14.0/kubelet --allow-privileged=true --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --client-ca-file=/var/lib/minikube/certs/ca.crt --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20210310204459-6496 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.3
	* 
	* [Install]
	*  config:
	* {KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210310204459-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	* I0310 20:48:19.454962   12928 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.14.0
	* I0310 20:48:19.534133   12928 binaries.go:44] Found k8s binaries, skipping transfer
	* I0310 20:48:19.543848   12928 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	* I0310 20:48:19.611003   12928 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (431 bytes)
	* I0310 20:48:23.203459   10404 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_etcd --format=: (9.4808233s)
	* I0310 20:48:23.203459   10404 logs.go:255] 1 containers: [11556200fc81]
	* I0310 20:48:23.218777   10404 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format=
	* I0310 20:48:19.972107   12928 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	* I0310 20:48:20.275076   12928 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1928 bytes)
	* I0310 20:48:20.500483   12928 ssh_runner.go:149] Run: grep 172.17.0.3	control-plane.minikube.internal$ /etc/hosts
	* I0310 20:48:20.541378   12928 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "172.17.0.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	* I0310 20:48:20.668643   12928 certs.go:52] Setting up C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496 for IP: 172.17.0.3
	* I0310 20:48:20.668643   12928 certs.go:171] skipping minikubeCA CA generation: C:\Users\jenkins\.minikube\ca.key
	* I0310 20:48:20.668643   12928 certs.go:171] skipping proxyClientCA CA generation: C:\Users\jenkins\.minikube\proxy-client-ca.key
	* I0310 20:48:20.668643   12928 certs.go:275] skipping minikube-user signed cert generation: C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\client.key
	* I0310 20:48:20.668643   12928 certs.go:279] generating minikube signed cert: C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.key.0f3e66d0
	* I0310 20:48:20.668643   12928 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.crt.0f3e66d0 with IP's: [172.17.0.3 10.96.0.1 127.0.0.1 10.0.0.1]
	* I0310 20:48:20.862651   12928 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.crt.0f3e66d0 ...
	* I0310 20:48:20.862651   12928 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.crt.0f3e66d0: {Name:mk4d990127210c9e93f70bb2fa83fed3ed7d8272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 20:48:20.886181   12928 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.key.0f3e66d0 ...
	* I0310 20:48:20.886181   12928 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.key.0f3e66d0: {Name:mk3b112be41963d8a84df37233731d1e05b06ba0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 20:48:20.895703   12928 certs.go:290] copying C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.crt.0f3e66d0 -> C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.crt
	* I0310 20:48:20.899122   12928 certs.go:294] copying C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.key.0f3e66d0 -> C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.key
	* I0310 20:48:20.906198   12928 certs.go:279] generating aggregator signed cert: C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\proxy-client.key
	* I0310 20:48:20.906198   12928 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\proxy-client.crt with IP's: []
	* I0310 20:48:21.063572   12928 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\proxy-client.crt ...
	* I0310 20:48:21.064582   12928 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\proxy-client.crt: {Name:mkc85b22c9bece2080565bade554ebf8aae7c395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 20:48:21.073606   12928 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\proxy-client.key ...
	* I0310 20:48:21.073606   12928 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\proxy-client.key: {Name:mkc400cbeb274a69f5d3aa3f494371d783186217 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 20:48:21.085586   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992.pem (1338 bytes)
	* W0310 20:48:21.085586   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.085586   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140.pem (1338 bytes)
	* W0310 20:48:21.086578   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.086578   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156.pem (1338 bytes)
	* W0310 20:48:21.086578   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.086578   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056.pem (1338 bytes)
	* W0310 20:48:21.087590   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.087590   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476.pem (1338 bytes)
	* W0310 20:48:21.087590   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.087590   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728.pem (1338 bytes)
	* W0310 20:48:21.088583   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.088583   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984.pem (1338 bytes)
	* W0310 20:48:21.088583   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.088583   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232.pem (1338 bytes)
	* W0310 20:48:21.089583   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.089583   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512.pem (1338 bytes)
	* W0310 20:48:21.089583   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.089583   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056.pem (1338 bytes)
	* W0310 20:48:21.089583   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.090584   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516.pem (1338 bytes)
	* W0310 20:48:21.090584   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.090584   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352.pem (1338 bytes)
	* W0310 20:48:21.090584   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.090584   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920.pem (1338 bytes)
	* W0310 20:48:21.091582   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.091582   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052.pem (1338 bytes)
	* W0310 20:48:21.091582   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.091582   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452.pem (1338 bytes)
	* W0310 20:48:21.092581   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.092581   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588.pem (1338 bytes)
	* W0310 20:48:21.092581   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.092581   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944.pem (1338 bytes)
	* W0310 20:48:21.093639   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.093639   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040.pem (1338 bytes)
	* W0310 20:48:21.093639   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.093639   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172.pem (1338 bytes)
	* W0310 20:48:21.093639   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.094657   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372.pem (1338 bytes)
	* W0310 20:48:21.094657   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.094657   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396.pem (1338 bytes)
	* W0310 20:48:21.094657   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.094657   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700.pem (1338 bytes)
	* W0310 20:48:21.095655   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.095655   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736.pem (1338 bytes)
	* W0310 20:48:21.095655   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.095655   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368.pem (1338 bytes)
	* W0310 20:48:21.096667   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.096667   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492.pem (1338 bytes)
	* W0310 20:48:21.096667   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.096667   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496.pem (1338 bytes)
	* W0310 20:48:21.097630   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.097630   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552.pem (1338 bytes)
	* W0310 20:48:21.097630   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.097630   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692.pem (1338 bytes)
	* W0310 20:48:21.098655   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.098655   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856.pem (1338 bytes)
	* W0310 20:48:21.098655   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.099460   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024.pem (1338 bytes)
	* W0310 20:48:21.099877   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.100608   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160.pem (1338 bytes)
	* W0310 20:48:21.100608   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.100608   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432.pem (1338 bytes)
	* W0310 20:48:21.100608   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.100608   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440.pem (1338 bytes)
	* W0310 20:48:21.100608   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.100608   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452.pem (1338 bytes)
	* W0310 20:48:21.100608   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.100608   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800.pem (1338 bytes)
	* W0310 20:48:21.100608   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.100608   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464.pem (1338 bytes)
	* W0310 20:48:21.100608   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.100608   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748.pem (1338 bytes)
	* W0310 20:48:21.100608   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.100608   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088.pem (1338 bytes)
	* W0310 20:48:21.100608   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.100608   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520.pem (1338 bytes)
	* W0310 20:48:21.104844   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.105242   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca-key.pem (1679 bytes)
	* I0310 20:48:21.105585   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca.pem (1078 bytes)
	* I0310 20:48:21.105585   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\cert.pem (1123 bytes)
	* I0310 20:48:21.106404   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\key.pem (1679 bytes)
	* I0310 20:48:21.117199   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	* I0310 20:48:21.513740   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	* I0310 20:48:21.760642   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	* I0310 20:48:22.037866   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	* I0310 20:48:22.744649   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	* I0310 20:48:23.068283   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	* I0310 20:48:23.437363   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	* I0310 20:48:23.716689   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	* I0310 20:48:24.144080   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5172.pem --> /usr/share/ca-certificates/5172.pem (1338 bytes)
	* I0310 20:48:24.496544   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5040.pem --> /usr/share/ca-certificates/5040.pem (1338 bytes)
	* I0310 20:48:24.706713   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1984.pem --> /usr/share/ca-certificates/1984.pem (1338 bytes)
	* I0310 20:48:24.128431    9740 api_server.go:241] https://127.0.0.1:55130/healthz returned 403:
	* {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	* W0310 20:48:24.128706    9740 api_server.go:99] status: https://127.0.0.1:55130/healthz returned error 403:
	* {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	* I0310 20:48:24.638787    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format=
	* I0310 20:48:24.888872   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	* I0310 20:48:25.072441   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3516.pem --> /usr/share/ca-certificates/3516.pem (1338 bytes)
	* I0310 20:48:25.295478   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6692.pem --> /usr/share/ca-certificates/6692.pem (1338 bytes)
	* I0310 20:48:25.539068   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6368.pem --> /usr/share/ca-certificates/6368.pem (1338 bytes)
	* I0310 20:48:25.786296   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5700.pem --> /usr/share/ca-certificates/5700.pem (1338 bytes)
	* I0310 20:48:25.971391   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1140.pem --> /usr/share/ca-certificates/1140.pem (1338 bytes)
	* I0310 20:48:26.186445   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9088.pem --> /usr/share/ca-certificates/9088.pem (1338 bytes)
	* I0310 20:48:26.360798   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6856.pem --> /usr/share/ca-certificates/6856.pem (1338 bytes)
	* I0310 20:48:26.691872   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4052.pem --> /usr/share/ca-certificates/4052.pem (1338 bytes)
	* I0310 20:48:26.874779   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3056.pem --> /usr/share/ca-certificates/3056.pem (1338 bytes)
	* I0310 20:48:27.094519   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\352.pem --> /usr/share/ca-certificates/352.pem (1338 bytes)
	* I0310 20:48:27.926836   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5396.pem --> /usr/share/ca-certificates/5396.pem (1338 bytes)
	* I0310 20:48:28.188557   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4944.pem --> /usr/share/ca-certificates/4944.pem (1338 bytes)
	* I0310 20:48:28.394647   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4452.pem --> /usr/share/ca-certificates/4452.pem (1338 bytes)
	* I0310 20:48:28.790329   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5372.pem --> /usr/share/ca-certificates/5372.pem (1338 bytes)
	* I0310 20:48:29.094073   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\2512.pem --> /usr/share/ca-certificates/2512.pem (1338 bytes)
	* I0310 20:48:29.321739   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3920.pem --> /usr/share/ca-certificates/3920.pem (1338 bytes)
	* I0310 20:48:29.607788   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6552.pem --> /usr/share/ca-certificates/6552.pem (1338 bytes)
	* I0310 20:48:29.785678   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6496.pem --> /usr/share/ca-certificates/6496.pem (1338 bytes)
	* I0310 20:48:30.244451   10404 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_coredns --format=: (7.025261s)
	* I0310 20:48:30.244451   10404 logs.go:255] 0 containers: []
	* W0310 20:48:30.244451   10404 logs.go:257] No container was found matching "coredns"
	* I0310 20:48:30.254013   10404 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format=
	* I0310 20:48:30.032036   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7432.pem --> /usr/share/ca-certificates/7432.pem (1338 bytes)
	* I0310 20:48:30.338918   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\12056.pem --> /usr/share/ca-certificates/12056.pem (1338 bytes)
	* I0310 20:48:30.567680   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\232.pem --> /usr/share/ca-certificates/232.pem (1338 bytes)
	* I0310 20:48:30.799494   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6492.pem --> /usr/share/ca-certificates/6492.pem (1338 bytes)
	* I0310 20:48:30.986912   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8464.pem --> /usr/share/ca-certificates/8464.pem (1338 bytes)
	* I0310 20:48:31.213564   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\10992.pem --> /usr/share/ca-certificates/10992.pem (1338 bytes)
	* I0310 20:48:31.533104   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1156.pem --> /usr/share/ca-certificates/1156.pem (1338 bytes)
	* I0310 20:48:31.806558   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8748.pem --> /usr/share/ca-certificates/8748.pem (1338 bytes)
	* I0310 20:48:32.025579   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	* I0310 20:48:32.313142   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1728.pem --> /usr/share/ca-certificates/1728.pem (1338 bytes)
	* I0310 20:48:32.820375   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\800.pem --> /usr/share/ca-certificates/800.pem (1338 bytes)
	* I0310 20:48:33.091682   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1476.pem --> /usr/share/ca-certificates/1476.pem (1338 bytes)
	* I0310 20:48:33.361952   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5736.pem --> /usr/share/ca-certificates/5736.pem (1338 bytes)
	* I0310 20:48:33.546662   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7160.pem --> /usr/share/ca-certificates/7160.pem (1338 bytes)
	* I0310 20:48:33.880265   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7024.pem --> /usr/share/ca-certificates/7024.pem (1338 bytes)
	* I0310 20:48:34.176826   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9520.pem --> /usr/share/ca-certificates/9520.pem (1338 bytes)
	* I0310 20:48:34.443774   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7440.pem --> /usr/share/ca-certificates/7440.pem (1338 bytes)
	* I0310 20:48:33.835176    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-apiserver --format=: (9.1964006s)
	* I0310 20:48:33.835431    9740 logs.go:255] 2 containers: [3d2c98ba1bfd cc5bf7d7971c]
	* I0310 20:48:33.843993    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format=
	* I0310 20:48:39.488216   10404 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-scheduler --format=: (9.2336485s)
	* I0310 20:48:39.488216   10404 logs.go:255] 1 containers: [78ecb22163a7]
	* I0310 20:48:39.496142   10404 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format=
	* I0310 20:48:34.820050   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4588.pem --> /usr/share/ca-certificates/4588.pem (1338 bytes)
	* I0310 20:48:35.243486   12928 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	* I0310 20:48:35.500540   12928 ssh_runner.go:149] Run: openssl version
	* I0310 20:48:35.574294   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6692.pem && ln -fs /usr/share/ca-certificates/6692.pem /etc/ssl/certs/6692.pem"
	* I0310 20:48:35.650674   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6692.pem
	* I0310 20:48:35.681694   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 20:42 /usr/share/ca-certificates/6692.pem
	* I0310 20:48:35.695254   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6692.pem
	* I0310 20:48:35.748262   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6692.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:35.815809   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5700.pem && ln -fs /usr/share/ca-certificates/5700.pem /etc/ssl/certs/5700.pem"
	* I0310 20:48:35.910004   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5700.pem
	* I0310 20:48:35.970953   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  1 19:58 /usr/share/ca-certificates/5700.pem
	* I0310 20:48:35.986910   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5700.pem
	* I0310 20:48:36.035569   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5700.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:36.169036   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1140.pem && ln -fs /usr/share/ca-certificates/1140.pem /etc/ssl/certs/1140.pem"
	* I0310 20:48:36.248538   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1140.pem
	* I0310 20:48:36.281799   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 02:25 /usr/share/ca-certificates/1140.pem
	* I0310 20:48:36.296020   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1140.pem
	* I0310 20:48:36.339195   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1140.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:36.400862   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9088.pem && ln -fs /usr/share/ca-certificates/9088.pem /etc/ssl/certs/9088.pem"
	* I0310 20:48:36.510827   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9088.pem
	* I0310 20:48:36.544150   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 00:22 /usr/share/ca-certificates/9088.pem
	* I0310 20:48:36.554515   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9088.pem
	* I0310 20:48:36.603970   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9088.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:36.661485   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6368.pem && ln -fs /usr/share/ca-certificates/6368.pem /etc/ssl/certs/6368.pem"
	* I0310 20:48:36.741293   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6368.pem
	* I0310 20:48:36.771506   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 19:11 /usr/share/ca-certificates/6368.pem
	* I0310 20:48:36.782867   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6368.pem
	* I0310 20:48:36.831341   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6368.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:36.901844   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3056.pem && ln -fs /usr/share/ca-certificates/3056.pem /etc/ssl/certs/3056.pem"
	* I0310 20:48:36.960231   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3056.pem
	* I0310 20:48:36.991039   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:11 /usr/share/ca-certificates/3056.pem
	* I0310 20:48:37.001755   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3056.pem
	* I0310 20:48:37.120470   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3056.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:37.219779   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/352.pem && ln -fs /usr/share/ca-certificates/352.pem /etc/ssl/certs/352.pem"
	* I0310 20:48:37.350451   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/352.pem
	* I0310 20:48:37.374271   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 12 14:51 /usr/share/ca-certificates/352.pem
	* I0310 20:48:37.385332   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/352.pem
	* I0310 20:48:37.493863   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/352.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:37.554705   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5396.pem && ln -fs /usr/share/ca-certificates/5396.pem /etc/ssl/certs/5396.pem"
	* I0310 20:48:37.655597   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5396.pem
	* I0310 20:48:37.694141   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  8 23:38 /usr/share/ca-certificates/5396.pem
	* I0310 20:48:37.710161   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5396.pem
	* I0310 20:48:37.762947   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5396.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:37.827899   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4944.pem && ln -fs /usr/share/ca-certificates/4944.pem /etc/ssl/certs/4944.pem"
	* I0310 20:48:37.894174   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4944.pem
	* I0310 20:48:37.915678   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  9 23:40 /usr/share/ca-certificates/4944.pem
	* I0310 20:48:37.927030   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4944.pem
	* I0310 20:48:37.967096   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4944.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:38.026740   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4452.pem && ln -fs /usr/share/ca-certificates/4452.pem /etc/ssl/certs/4452.pem"
	* I0310 20:48:38.090043   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4452.pem
	* I0310 20:48:38.126200   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 22 23:48 /usr/share/ca-certificates/4452.pem
	* I0310 20:48:38.136147   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4452.pem
	* I0310 20:48:38.239371   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4452.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:38.331111   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5372.pem && ln -fs /usr/share/ca-certificates/5372.pem /etc/ssl/certs/5372.pem"
	* I0310 20:48:38.413542   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5372.pem
	* I0310 20:48:38.442408   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 23 00:40 /usr/share/ca-certificates/5372.pem
	* I0310 20:48:38.452607   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5372.pem
	* I0310 20:48:38.499760   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5372.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:38.577229   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6856.pem && ln -fs /usr/share/ca-certificates/6856.pem /etc/ssl/certs/6856.pem"
	* I0310 20:48:38.641034   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6856.pem
	* I0310 20:48:38.674442   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 00:21 /usr/share/ca-certificates/6856.pem
	* I0310 20:48:38.690347   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6856.pem
	* I0310 20:48:38.757770   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6856.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:38.864479   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4052.pem && ln -fs /usr/share/ca-certificates/4052.pem /etc/ssl/certs/4052.pem"
	* I0310 20:48:38.942637   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4052.pem
	* I0310 20:48:39.028587   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 18:40 /usr/share/ca-certificates/4052.pem
	* I0310 20:48:39.038752   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4052.pem
	* I0310 20:48:39.107231   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4052.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:39.167667   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2512.pem && ln -fs /usr/share/ca-certificates/2512.pem /etc/ssl/certs/2512.pem"
	* I0310 20:48:39.232613   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2512.pem
	* I0310 20:48:39.263457   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:32 /usr/share/ca-certificates/2512.pem
	* I0310 20:48:39.273269   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2512.pem
	* I0310 20:48:39.334847   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2512.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:39.402674   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3920.pem && ln -fs /usr/share/ca-certificates/3920.pem /etc/ssl/certs/3920.pem"
	* I0310 20:48:39.571265   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3920.pem
	* I0310 20:48:39.604484   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 22:06 /usr/share/ca-certificates/3920.pem
	* I0310 20:48:39.633603   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3920.pem
	* I0310 20:48:39.805793   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3920.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:37.820712    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_etcd --format=: (3.9764734s)
	* I0310 20:48:37.820804    9740 logs.go:255] 3 containers: [97de25fff1e2 d32313e5411d d04b7875ec72]
	* I0310 20:48:37.830843    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format=
	* I0310 20:48:39.880251   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6552.pem && ln -fs /usr/share/ca-certificates/6552.pem /etc/ssl/certs/6552.pem"
	* I0310 20:48:39.959855   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6552.pem
	* I0310 20:48:40.024594   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 19 22:08 /usr/share/ca-certificates/6552.pem
	* I0310 20:48:40.036081   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6552.pem
	* I0310 20:48:40.106252   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6552.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:40.184730   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6496.pem && ln -fs /usr/share/ca-certificates/6496.pem /etc/ssl/certs/6496.pem"
	* I0310 20:48:40.351306   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6496.pem
	* I0310 20:48:40.391835   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 19:16 /usr/share/ca-certificates/6496.pem
	* I0310 20:48:40.401307   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6496.pem
	* I0310 20:48:40.477301   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6496.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:40.545447   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7432.pem && ln -fs /usr/share/ca-certificates/7432.pem /etc/ssl/certs/7432.pem"
	* I0310 20:48:40.618259   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7432.pem
	* I0310 20:48:40.658602   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 17:58 /usr/share/ca-certificates/7432.pem
	* I0310 20:48:40.674461   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7432.pem
	* I0310 20:48:40.736048   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7432.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:40.835411   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12056.pem && ln -fs /usr/share/ca-certificates/12056.pem /etc/ssl/certs/12056.pem"
	* I0310 20:48:40.908627   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/12056.pem
	* I0310 20:48:40.949747   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  6 07:21 /usr/share/ca-certificates/12056.pem
	* I0310 20:48:40.967671   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12056.pem
	* I0310 20:48:41.352067   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12056.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:41.417455   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8464.pem && ln -fs /usr/share/ca-certificates/8464.pem /etc/ssl/certs/8464.pem"
	* I0310 20:48:41.517072   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8464.pem
	* I0310 20:48:41.546197   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:32 /usr/share/ca-certificates/8464.pem
	* I0310 20:48:41.556596   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8464.pem
	* I0310 20:48:41.613329   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8464.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:41.714299   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10992.pem && ln -fs /usr/share/ca-certificates/10992.pem /etc/ssl/certs/10992.pem"
	* I0310 20:48:41.825921   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/10992.pem
	* I0310 20:48:41.849980   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 21:44 /usr/share/ca-certificates/10992.pem
	* I0310 20:48:41.864898   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10992.pem
	* I0310 20:48:41.922840   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10992.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:42.017877   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1156.pem && ln -fs /usr/share/ca-certificates/1156.pem /etc/ssl/certs/1156.pem"
	* I0310 20:48:42.178610   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1156.pem
	* I0310 20:48:42.222069   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 00:26 /usr/share/ca-certificates/1156.pem
	* I0310 20:48:42.232994   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1156.pem
	* I0310 20:48:42.311917   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1156.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:42.444203   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8748.pem && ln -fs /usr/share/ca-certificates/8748.pem /etc/ssl/certs/8748.pem"
	* I0310 20:48:42.551950   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8748.pem
	* I0310 20:48:42.626121   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 19:09 /usr/share/ca-certificates/8748.pem
	* I0310 20:48:42.637532   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8748.pem
	* I0310 20:48:42.693402   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8748.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:42.768815   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/232.pem && ln -fs /usr/share/ca-certificates/232.pem /etc/ssl/certs/232.pem"
	* I0310 20:48:42.854401   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/232.pem
	* I0310 20:48:42.900184   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 28 02:13 /usr/share/ca-certificates/232.pem
	* I0310 20:48:42.908801   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/232.pem
	* I0310 20:48:43.052775   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/232.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:43.262207   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6492.pem && ln -fs /usr/share/ca-certificates/6492.pem /etc/ssl/certs/6492.pem"
	* I0310 20:48:43.395161   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6492.pem
	* I0310 20:48:43.457844   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 01:11 /usr/share/ca-certificates/6492.pem
	* I0310 20:48:43.478189   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6492.pem
	* I0310 20:48:43.562538   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6492.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:43.709162   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/800.pem && ln -fs /usr/share/ca-certificates/800.pem /etc/ssl/certs/800.pem"
	* I0310 20:48:43.876180   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/800.pem
	* I0310 20:48:43.943286   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 24 01:48 /usr/share/ca-certificates/800.pem
	* I0310 20:48:43.962138   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/800.pem
	* I0310 20:48:44.029379   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/800.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:44.217921   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1476.pem && ln -fs /usr/share/ca-certificates/1476.pem /etc/ssl/certs/1476.pem"
	* I0310 20:48:44.301845   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1476.pem
	* I0310 20:48:44.376495   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:50 /usr/share/ca-certificates/1476.pem
	* I0310 20:48:44.387476   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1476.pem
	* I0310 20:48:44.456446   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1476.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:44.592511   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5736.pem && ln -fs /usr/share/ca-certificates/5736.pem /etc/ssl/certs/5736.pem"
	* I0310 20:48:44.700759   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5736.pem
	* I0310 20:48:44.738480   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 25 23:18 /usr/share/ca-certificates/5736.pem
	* I0310 20:48:44.748403   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5736.pem
	* I0310 20:48:44.805136   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5736.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:44.208737    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_coredns --format=: (6.3777058s)
	* I0310 20:48:44.208873    9740 logs.go:255] 0 containers: []
	* W0310 20:48:44.208873    9740 logs.go:257] No container was found matching "coredns"
	* I0310 20:48:44.219708    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format=
	* I0310 20:48:47.294457   10404 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-proxy --format=: (7.7983256s)
	* I0310 20:48:47.294457   10404 logs.go:255] 0 containers: []
	* W0310 20:48:47.294457   10404 logs.go:257] No container was found matching "kube-proxy"
	* I0310 20:48:47.302859   10404 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format=
	* I0310 20:48:44.928726   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	* I0310 20:48:45.044469   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	* I0310 20:48:45.095289   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Jan  5 23:33 /usr/share/ca-certificates/minikubeCA.pem
	* I0310 20:48:45.113350   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	* I0310 20:48:45.186929   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	* I0310 20:48:45.321146   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1728.pem && ln -fs /usr/share/ca-certificates/1728.pem /etc/ssl/certs/1728.pem"
	* I0310 20:48:45.445664   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1728.pem
	* I0310 20:48:45.482187   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 20:57 /usr/share/ca-certificates/1728.pem
	* I0310 20:48:45.502749   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1728.pem
	* I0310 20:48:45.554720   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:45.629427   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9520.pem && ln -fs /usr/share/ca-certificates/9520.pem /etc/ssl/certs/9520.pem"
	* I0310 20:48:45.697187   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9520.pem
	* I0310 20:48:45.727043   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 14:54 /usr/share/ca-certificates/9520.pem
	* I0310 20:48:45.738673   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9520.pem
	* I0310 20:48:45.808330   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9520.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:45.893296   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7440.pem && ln -fs /usr/share/ca-certificates/7440.pem /etc/ssl/certs/7440.pem"
	* I0310 20:48:45.982785   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7440.pem
	* I0310 20:48:46.020660   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 13 14:39 /usr/share/ca-certificates/7440.pem
	* I0310 20:48:46.035740   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7440.pem
	* I0310 20:48:46.097117   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7440.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:46.169019   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4588.pem && ln -fs /usr/share/ca-certificates/4588.pem /etc/ssl/certs/4588.pem"
	* I0310 20:48:46.291358   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4588.pem
	* I0310 20:48:46.386018   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  3 21:41 /usr/share/ca-certificates/4588.pem
	* I0310 20:48:46.420499   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4588.pem
	* I0310 20:48:46.475764   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4588.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:46.630846   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7160.pem && ln -fs /usr/share/ca-certificates/7160.pem /etc/ssl/certs/7160.pem"
	* I0310 20:48:46.703689   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7160.pem
	* I0310 20:48:46.735499   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 12 04:51 /usr/share/ca-certificates/7160.pem
	* I0310 20:48:46.754694   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7160.pem
	* I0310 20:48:46.814457   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7160.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:46.903052   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7024.pem && ln -fs /usr/share/ca-certificates/7024.pem /etc/ssl/certs/7024.pem"
	* I0310 20:48:46.969551   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7024.pem
	* I0310 20:48:47.014846   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 23:11 /usr/share/ca-certificates/7024.pem
	* I0310 20:48:47.025370   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7024.pem
	* I0310 20:48:47.103567   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7024.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:47.194455   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1984.pem && ln -fs /usr/share/ca-certificates/1984.pem /etc/ssl/certs/1984.pem"
	* I0310 20:48:47.297186   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1984.pem
	* I0310 20:48:47.356622   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 21:55 /usr/share/ca-certificates/1984.pem
	* I0310 20:48:47.360289   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1984.pem
	* I0310 20:48:47.460715   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1984.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:47.546824   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7452.pem && ln -fs /usr/share/ca-certificates/7452.pem /etc/ssl/certs/7452.pem"
	* I0310 20:48:47.627035   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7452.pem
	* I0310 20:48:47.662210   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 20 00:41 /usr/share/ca-certificates/7452.pem
	* I0310 20:48:47.673768   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7452.pem
	* I0310 20:48:47.749535   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7452.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:47.806523   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3516.pem && ln -fs /usr/share/ca-certificates/3516.pem /etc/ssl/certs/3516.pem"
	* I0310 20:48:47.961579   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3516.pem
	* I0310 20:48:47.999154   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 19:10 /usr/share/ca-certificates/3516.pem
	* I0310 20:48:48.008851   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3516.pem
	* I0310 20:48:48.071015   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3516.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:48.172030   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5172.pem && ln -fs /usr/share/ca-certificates/5172.pem /etc/ssl/certs/5172.pem"
	* I0310 20:48:48.248251   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5172.pem
	* I0310 20:48:48.281154   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 26 21:25 /usr/share/ca-certificates/5172.pem
	* I0310 20:48:48.296457   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5172.pem
	* I0310 20:48:48.348775   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5172.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:48.439206   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5040.pem && ln -fs /usr/share/ca-certificates/5040.pem /etc/ssl/certs/5040.pem"
	* I0310 20:48:48.554057   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5040.pem
	* I0310 20:48:48.618046   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 08:36 /usr/share/ca-certificates/5040.pem
	* I0310 20:48:48.632735   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5040.pem
	* I0310 20:48:48.683261   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5040.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:48.768887   12928 kubeadm.go:385] StartCluster: {Name:old-k8s-version-20210310204459-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210310204459-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.3 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 20:48:48.775847   12928 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format=
	* I0310 20:48:49.462903   12928 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	* I0310 20:48:49.542762   12928 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	* I0310 20:48:49.646072   12928 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	* I0310 20:48:49.654944   12928 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	* I0310 20:48:49.767382   12928 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	* stdout:
	* 
	* stderr:
	* ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	* I0310 20:48:49.767628   12928 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	* I0310 20:48:51.217661    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-scheduler --format=: (6.9978193s)
	* I0310 20:48:51.217807    9740 logs.go:255] 2 containers: [a45e8b20db73 adb946d74113]
	* I0310 20:48:51.230353    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format=
	* I0310 20:48:53.196431   10404 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kubernetes-dashboard --format=: (5.8935798s)
	* I0310 20:48:53.196431   10404 logs.go:255] 0 containers: []
	* W0310 20:48:53.196431   10404 logs.go:257] No container was found matching "kubernetes-dashboard"
	* I0310 20:48:53.205549   10404 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format=
	* I0310 20:48:53.407075    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-proxy --format=: (2.1767246s)
	* I0310 20:48:53.407075    9740 logs.go:255] 0 containers: []
	* W0310 20:48:53.407075    9740 logs.go:257] No container was found matching "kube-proxy"
	* I0310 20:48:53.412664    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format=
	* I0310 20:48:57.371554   10404 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_storage-provisioner --format=: (4.1655837s)
	* I0310 20:48:57.371554   10404 logs.go:255] 0 containers: []
	* W0310 20:48:57.371554   10404 logs.go:257] No container was found matching "storage-provisioner"
	* I0310 20:48:57.378971   10404 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format=
	* I0310 20:48:56.924709    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kubernetes-dashboard --format=: (3.5120496s)
	* I0310 20:48:56.924709    9740 logs.go:255] 0 containers: []
	* W0310 20:48:56.924709    9740 logs.go:257] No container was found matching "kubernetes-dashboard"
	* I0310 20:48:56.935522    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format=

-- /stdout --
** stderr ** 
	E0310 20:48:44.531359    9416 out.go:340] unable to execute * 2021-03-10 20:47:25.795335 W | etcdserver: request "header:<ID:3266086224249882857 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.17.0.7\" mod_revision:812 > success:<request_put:<key:\"/registry/masterleases/172.17.0.7\" value_size:65 lease:3266086224249882855 >> failure:<request_range:<key:\"/registry/masterleases/172.17.0.7\" > >>" with result "size:16" took too long (542.8872ms) to execute
	: html/template:* 2021-03-10 20:47:25.795335 W | etcdserver: request "header:<ID:3266086224249882857 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.17.0.7\" mod_revision:812 > success:<request_put:<key:\"/registry/masterleases/172.17.0.7\" value_size:65 lease:3266086224249882855 >> failure:<request_range:<key:\"/registry/masterleases/172.17.0.7\" > >>" with result "size:16" took too long (542.8872ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 20:48:44.568580    9416 out.go:340] unable to execute * 2021-03-10 20:47:32.354956 W | etcdserver: request "header:<ID:3266086224249882894 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-docker-flags-20210310201637-6496.166b1570321b7ef0\" mod_revision:823 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-docker-flags-20210310201637-6496.166b1570321b7ef0\" value_size:839 lease:3266086224249882787 >> failure:<request_range:<key:\"/registry/events/kube-system/kube-apiserver-docker-flags-20210310201637-6496.166b1570321b7ef0\" > >>" with result "size:16" took too long (159.287ms) to execute
	: html/template:* 2021-03-10 20:47:32.354956 W | etcdserver: request "header:<ID:3266086224249882894 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-docker-flags-20210310201637-6496.166b1570321b7ef0\" mod_revision:823 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-docker-flags-20210310201637-6496.166b1570321b7ef0\" value_size:839 lease:3266086224249882787 >> failure:<request_range:<key:\"/registry/events/kube-system/kube-apiserver-docker-flags-20210310201637-6496.166b1570321b7ef0\" > >>" with result "size:16" took too long (159.287ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 20:48:58.072551    9416 out.go:335] unable to parse "* I0310 20:45:00.836917   12928 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 20:45:00.836917   12928 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 20:48:58.080518    9416 out.go:335] unable to parse "* I0310 20:45:01.866558   12928 cli_runner.go:168] Completed: docker system info --format \"{{json .}}\": (1.0296417s)\n": template: * I0310 20:45:01.866558   12928 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0296417s)
	:1: function "json" not defined - returning raw string.
	E0310 20:48:58.102777    9416 out.go:335] unable to parse "* I0310 20:45:03.020868   12928 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 20:45:03.020868   12928 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 20:48:58.110444    9416 out.go:335] unable to parse "* I0310 20:45:04.040267   12928 cli_runner.go:168] Completed: docker system info --format \"{{json .}}\": (1.0194001s)\n": template: * I0310 20:45:04.040267   12928 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0194001s)
	:1: function "json" not defined - returning raw string.
	E0310 20:48:58.244833    9416 out.go:340] unable to execute * I0310 20:45:04.764196   12928 cli_runner.go:115] Run: docker network inspect old-k8s-version-20210310204459-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	: template: * I0310 20:45:04.764196   12928 cli_runner.go:115] Run: docker network inspect old-k8s-version-20210310204459-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	:1:291: executing "* I0310 20:45:04.764196   12928 cli_runner.go:115] Run: docker network inspect old-k8s-version-20210310204459-6496 --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\"\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 20:48:58.275821    9416 out.go:340] unable to execute * W0310 20:45:05.343119   12928 cli_runner.go:162] docker network inspect old-k8s-version-20210310204459-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	: template: * W0310 20:45:05.343119   12928 cli_runner.go:162] docker network inspect old-k8s-version-20210310204459-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	:1:286: executing "* W0310 20:45:05.343119   12928 cli_runner.go:162] docker network inspect old-k8s-version-20210310204459-6496 --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\" returned with exit code 1\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 20:48:58.349288    9416 out.go:340] unable to execute * I0310 20:45:05.948873   12928 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	: template: * I0310 20:45:05.948873   12928 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	:1:262: executing "* I0310 20:45:05.948873   12928 cli_runner.go:115] Run: docker network inspect bridge --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\"\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 20:48:58.523802    9416 out.go:335] unable to parse "* I0310 20:45:13.420956   12928 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 20:45:13.420956   12928 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 20:48:58.799248    9416 out.go:335] unable to parse "* I0310 20:45:14.432368   12928 cli_runner.go:168] Completed: docker system info --format \"{{json .}}\": (1.0103571s)\n": template: * I0310 20:45:14.432368   12928 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0103571s)
	:1: function "json" not defined - returning raw string.
	E0310 20:48:58.816655    9416 out.go:335] unable to parse "* I0310 20:45:14.442693   12928 cli_runner.go:115] Run: docker info --format \"'{{json .SecurityOptions}}'\"\n": template: * I0310 20:45:14.442693   12928 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	:1: function "json" not defined - returning raw string.
	E0310 20:48:58.878878    9416 out.go:335] unable to parse "* I0310 20:45:15.468738   12928 cli_runner.go:168] Completed: docker info --format \"'{{json .SecurityOptions}}'\": (1.0260466s)\n": template: * I0310 20:45:15.468738   12928 cli_runner.go:168] Completed: docker info --format "'{{json .SecurityOptions}}'": (1.0260466s)
	:1: function "json" not defined - returning raw string.
	E0310 20:48:59.030166    9416 out.go:340] unable to execute * I0310 20:45:26.345166   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	: template: * I0310 20:45:26.345166   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	:1:96: executing "* I0310 20:45:26.345166   12928 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" old-k8s-version-20210310204459-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:48:59.044453    9416 out.go:335] unable to parse "* I0310 20:45:26.971298   12928 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55138 <nil> <nil>}\n": template: * I0310 20:45:26.971298   12928 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55138 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 20:48:59.142495    9416 out.go:340] unable to execute * I0310 20:45:30.949618   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	: template: * I0310 20:45:30.949618   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	:1:96: executing "* I0310 20:45:30.949618   12928 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" old-k8s-version-20210310204459-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:48:59.165777    9416 out.go:335] unable to parse "* I0310 20:45:31.551704   12928 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55138 <nil> <nil>}\n": template: * I0310 20:45:31.551704   12928 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55138 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 20:48:59.279733    9416 out.go:340] unable to execute * I0310 20:45:33.266434   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	: template: * I0310 20:45:33.266434   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	:1:96: executing "* I0310 20:45:33.266434   12928 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" old-k8s-version-20210310204459-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:48:59.362348    9416 out.go:340] unable to execute * I0310 20:45:35.296522   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	: template: * I0310 20:45:35.296522   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	:1:96: executing "* I0310 20:45:35.296522   12928 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" old-k8s-version-20210310204459-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:48:59.391539    9416 out.go:335] unable to parse "* I0310 20:45:35.957808   12928 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55138 <nil> <nil>}\n": template: * I0310 20:45:35.957808   12928 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55138 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 20:48:59.413752    9416 out.go:340] unable to execute * I0310 20:45:36.620128   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	: template: * I0310 20:45:36.620128   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	:1:96: executing "* I0310 20:45:36.620128   12928 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" old-k8s-version-20210310204459-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:48:59.427480    9416 out.go:335] unable to parse "* I0310 20:45:37.214230   12928 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55138 <nil> <nil>}\n": template: * I0310 20:45:37.214230   12928 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55138 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 20:48:59.836575    9416 out.go:340] unable to execute * I0310 20:45:38.155124   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	: template: * I0310 20:45:38.155124   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	:1:96: executing "* I0310 20:45:38.155124   12928 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" old-k8s-version-20210310204459-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:48:59.847581    9416 out.go:335] unable to parse "* I0310 20:45:38.756832   12928 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55138 <nil> <nil>}\n": template: * I0310 20:45:38.756832   12928 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55138 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 20:49:00.222827    9416 out.go:340] unable to execute * I0310 20:45:49.766365   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	: template: * I0310 20:45:49.766365   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	:1:96: executing "* I0310 20:45:49.766365   12928 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" old-k8s-version-20210310204459-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:49:00.356706    9416 out.go:340] unable to execute * I0310 20:45:52.196673   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	: template: * I0310 20:45:52.196673   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	:1:96: executing "* I0310 20:45:52.196673   12928 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" old-k8s-version-20210310204459-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:49:00.386669    9416 out.go:340] unable to execute * I0310 20:45:53.953944   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	: template: * I0310 20:45:53.953944   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	:1:96: executing "* I0310 20:45:53.953944   12928 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" old-k8s-version-20210310204459-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:49:00.404269    9416 out.go:340] unable to execute * I0310 20:45:53.956958   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	: template: * I0310 20:45:53.956958   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	:1:96: executing "* I0310 20:45:53.956958   12928 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" old-k8s-version-20210310204459-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:49:00.530215    9416 out.go:340] unable to execute * I0310 20:45:58.790284   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	: template: * I0310 20:45:58.790284   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	:1:96: executing "* I0310 20:45:58.790284   12928 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"8443/tcp\") 0).HostPort}}'\" old-k8s-version-20210310204459-6496\n" at <index .NetworkSettings.Ports "8443/tcp">: error calling index: index of untyped nil - returning raw string.

** /stderr **
helpers_test.go:250: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p docker-flags-20210310201637-6496 -n docker-flags-20210310201637-6496
helpers_test.go:250: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p docker-flags-20210310201637-6496 -n docker-flags-20210310201637-6496: (15.0643551s)
helpers_test.go:257: (dbg) Run:  kubectl --context docker-flags-20210310201637-6496 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:257: (dbg) Done: kubectl --context docker-flags-20210310201637-6496 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: (1.1228022s)
helpers_test.go:263: non-running pods: 
helpers_test.go:265: ======> post-mortem[TestDockerFlags]: describe non-running pods <======
helpers_test.go:268: (dbg) Run:  kubectl --context docker-flags-20210310201637-6496 describe pod 
helpers_test.go:268: (dbg) Non-zero exit: kubectl --context docker-flags-20210310201637-6496 describe pod : exit status 1 (212.7639ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:270: kubectl --context docker-flags-20210310201637-6496 describe pod : exit status 1
helpers_test.go:171: Cleaning up "docker-flags-20210310201637-6496" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-20210310201637-6496

=== CONT  TestDockerFlags
helpers_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-20210310201637-6496: (26.8286993s)
--- FAIL: TestDockerFlags (1989.98s)

TestForceSystemdFlag (1189.23s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-20210310203447-6496 --memory=1800 --force-systemd --alsologtostderr -v=5 --driver=docker

=== CONT  TestForceSystemdFlag
docker_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p force-systemd-flag-20210310203447-6496 --memory=1800 --force-systemd --alsologtostderr -v=5 --driver=docker: exit status 109 (18m14.0804707s)

-- stdout --
	* [force-systemd-flag-20210310203447-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - MINIKUBE_LOCATION=10722
	* Using the docker driver based on user configuration
	
	
	* Starting control plane node force-systemd-flag-20210310203447-6496 in cluster force-systemd-flag-20210310203447-6496
	* Creating docker container (CPUs=2, Memory=1800MB) ...
	* Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0310 20:34:48.152716   21276 out.go:239] Setting OutFile to fd 3048 ...
	I0310 20:34:48.153723   21276 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 20:34:48.153723   21276 out.go:252] Setting ErrFile to fd 2904...
	I0310 20:34:48.153723   21276 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 20:34:48.165719   21276 out.go:246] Setting JSON to false
	I0310 20:34:48.169762   21276 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":33954,"bootTime":1615374534,"procs":121,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0310 20:34:48.170789   21276 start.go:116] gopshost.Virtualization returned error: not implemented yet
	I0310 20:34:48.178136   21276 out.go:129] * [force-systemd-flag-20210310203447-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0310 20:34:48.186382   21276 out.go:129]   - MINIKUBE_LOCATION=10722
	I0310 20:34:48.192007   21276 driver.go:323] Setting default libvirt URI to qemu:///system
	I0310 20:34:48.821624   21276 docker.go:119] docker version: linux-20.10.2
	I0310 20:34:48.837493   21276 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 20:34:50.892196   21276 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (2.0541132s)
	I0310 20:34:50.894213   21276 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:107 OomKillDisable:true NGoroutines:94 SystemTime:2021-03-10 20:34:50.3528222 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 20:34:50.905611   21276 out.go:129] * Using the docker driver based on user configuration
	I0310 20:34:50.906411   21276 start.go:276] selected driver: docker
	I0310 20:34:50.906837   21276 start.go:718] validating driver "docker" against <nil>
	I0310 20:34:50.906837   21276 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0310 20:34:52.943077   21276 out.go:129] 
	W0310 20:34:52.943701   21276 out.go:191] X Docker Desktop only has 20001MiB available, you may encounter application deployment failures.
	X Docker Desktop only has 20001MiB available, you may encounter application deployment failures.
	W0310 20:34:52.944200   21276 out.go:191] * Suggestion: 
	
	    1. Open the "Docker Desktop" menu by clicking the Docker icon in the system tray
	    2. Click "Settings"
	    3. Click "Resources"
	    4. Increase "Memory" slider bar to 2.25 GB or higher
	    5. Click "Apply & Restart"
	* Suggestion: 
	
	    1. Open the "Docker Desktop" menu by clicking the Docker icon in the system tray
	    2. Click "Settings"
	    3. Click "Resources"
	    4. Increase "Memory" slider bar to 2.25 GB or higher
	    5. Click "Apply & Restart"
	W0310 20:34:52.944428   21276 out.go:191] * Documentation: https://docs.docker.com/docker-for-windows/#resources
	* Documentation: https://docs.docker.com/docker-for-windows/#resources
	I0310 20:34:52.954975   21276 out.go:129] 
	I0310 20:34:52.988478   21276 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 20:34:54.146851   21276 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.158033s)
	I0310 20:34:54.147224   21276 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:103 OomKillDisable:true NGoroutines:87 SystemTime:2021-03-10 20:34:53.5948484 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 20:34:54.147519   21276 start_flags.go:253] no existing cluster config was found, will generate one from the flags 
	I0310 20:34:54.147519   21276 start_flags.go:699] Wait components to verify : map[apiserver:true system_pods:true]
	I0310 20:34:54.147519   21276 cni.go:74] Creating CNI manager for ""
	I0310 20:34:54.147519   21276 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	I0310 20:34:54.147519   21276 start_flags.go:398] config:
	{Name:force-systemd-flag-20210310203447-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:force-systemd-flag-20210310203447-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 20:34:54.147519   21276 out.go:129] * Starting control plane node force-systemd-flag-20210310203447-6496 in cluster force-systemd-flag-20210310203447-6496
	I0310 20:34:54.895807   21276 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	I0310 20:34:54.896140   21276 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	I0310 20:34:54.896728   21276 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	I0310 20:34:54.897378   21276 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	I0310 20:34:54.897727   21276 cache.go:54] Caching tarball of preloaded images
	I0310 20:34:54.898855   21276 preload.go:131] Found C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0310 20:34:54.899062   21276 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on docker
	I0310 20:34:54.900191   21276 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\force-systemd-flag-20210310203447-6496\config.json ...
	I0310 20:34:54.901039   21276 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\force-systemd-flag-20210310203447-6496\config.json: {Name:mkfe99ccf89724881556cea92dbeee352cd23dd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:34:54.916749   21276 cache.go:185] Successfully downloaded all kic artifacts
	I0310 20:34:54.917479   21276 start.go:313] acquiring machines lock for force-systemd-flag-20210310203447-6496: {Name:mk497b8cc4b0963fb144a61b1d326aaf563a6c75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:34:54.917850   21276 start.go:317] acquired machines lock for "force-systemd-flag-20210310203447-6496" in 196.4µs
	I0310 20:34:54.918080   21276 start.go:89] Provisioning new machine with config: &{Name:force-systemd-flag-20210310203447-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:force-systemd-flag-20210310203447-6496 Namespace:default APIServerName:mi
nikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
	I0310 20:34:54.918454   21276 start.go:126] createHost starting for "" (driver="docker")
	I0310 20:34:54.928932   21276 out.go:150] * Creating docker container (CPUs=2, Memory=1800MB) ...
	I0310 20:34:54.930519   21276 start.go:160] libmachine.API.Create for "force-systemd-flag-20210310203447-6496" (driver="docker")
	I0310 20:34:54.931128   21276 client.go:168] LocalClient.Create starting
	I0310 20:34:54.931719   21276 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\ca.pem
	I0310 20:34:54.932927   21276 main.go:121] libmachine: Decoding PEM data...
	I0310 20:34:54.932927   21276 main.go:121] libmachine: Parsing certificate...
	I0310 20:34:54.933745   21276 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\cert.pem
	I0310 20:34:54.933745   21276 main.go:121] libmachine: Decoding PEM data...
	I0310 20:34:54.933745   21276 main.go:121] libmachine: Parsing certificate...
	I0310 20:34:54.963980   21276 cli_runner.go:115] Run: docker network inspect force-systemd-flag-20210310203447-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0310 20:34:55.593615   21276 cli_runner.go:162] docker network inspect force-systemd-flag-20210310203447-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0310 20:34:55.606264   21276 network_create.go:240] running [docker network inspect force-systemd-flag-20210310203447-6496] to gather additional debugging logs...
	I0310 20:34:55.607264   21276 cli_runner.go:115] Run: docker network inspect force-systemd-flag-20210310203447-6496
	W0310 20:34:56.259304   21276 cli_runner.go:162] docker network inspect force-systemd-flag-20210310203447-6496 returned with exit code 1
	I0310 20:34:56.259304   21276 network_create.go:243] error running [docker network inspect force-systemd-flag-20210310203447-6496]: docker network inspect force-systemd-flag-20210310203447-6496: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-flag-20210310203447-6496
	I0310 20:34:56.259546   21276 network_create.go:245] output of [docker network inspect force-systemd-flag-20210310203447-6496]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-flag-20210310203447-6496
	
	** /stderr **
	I0310 20:34:56.267650   21276 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0310 20:34:56.979285   21276 network.go:193] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0310 20:34:56.979285   21276 network_create.go:91] attempt to create network 192.168.49.0/24 with subnet: force-systemd-flag-20210310203447-6496 and gateway 192.168.49.1 and MTU of 1500 ...
	I0310 20:34:56.986936   21276 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20210310203447-6496
	W0310 20:34:57.599551   21276 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20210310203447-6496 returned with exit code 1
	W0310 20:34:57.600348   21276 out.go:191] ! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts
	! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts
	I0310 20:34:57.620404   21276 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0310 20:34:58.315725   21276 cli_runner.go:115] Run: docker volume create force-systemd-flag-20210310203447-6496 --label name.minikube.sigs.k8s.io=force-systemd-flag-20210310203447-6496 --label created_by.minikube.sigs.k8s.io=true
	I0310 20:34:58.927684   21276 oci.go:102] Successfully created a docker volume force-systemd-flag-20210310203447-6496
	I0310 20:34:58.949100   21276 cli_runner.go:115] Run: docker run --rm --name force-systemd-flag-20210310203447-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-20210310203447-6496 --entrypoint /usr/bin/test -v force-systemd-flag-20210310203447-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib
	I0310 20:35:03.970514   21276 cli_runner.go:168] Completed: docker run --rm --name force-systemd-flag-20210310203447-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-20210310203447-6496 --entrypoint /usr/bin/test -v force-systemd-flag-20210310203447-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib: (5.0210718s)
	I0310 20:35:03.970956   21276 oci.go:106] Successfully prepared a docker volume force-systemd-flag-20210310203447-6496
	I0310 20:35:03.971311   21276 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	I0310 20:35:03.971823   21276 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	I0310 20:35:03.971823   21276 kic.go:175] Starting extracting preloaded images to volume ...
	I0310 20:35:03.980548   21276 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 20:35:03.982259   21276 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-20210310203447-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir
	W0310 20:35:04.688065   21276 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-20210310203447-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I0310 20:35:04.688445   21276 kic.go:182] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-20210310203447-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: System.Exception: The notification platform is unavailable.
	
	The notification platform is unavailable.
	   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)
	   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.WPF\PromptShareDirectory.cs:line 53
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__8.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 95
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 55
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.HttpApi\Controllers\FilesharingController.cs:line 21
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()
	CreateToastNotifier
	Windows.UI, Version=255.255.255.255, Culture=neutral, PublicKeyToken=null, ContentType=WindowsRuntime
	Windows.UI.Notifications.ToastNotificationManager
	Windows.UI.Notifications.ToastNotifier CreateToastNotifier(System.String)
	RestrictedDescription: The notification platform is unavailable.
	See 'docker run --help'.
	I0310 20:35:04.978659   21276 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:108 OomKillDisable:true NGoroutines:94 SystemTime:2021-03-10 20:35:04.5437442 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[
] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 20:35:04.990594   21276 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0310 20:35:05.975798   21276 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-20210310203447-6496 --name force-systemd-flag-20210310203447-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-20210310203447-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-20210310203447-6496 --volume force-systemd-flag-20210310203447-6496:/var --security-opt apparmor=unconfined --memory=1800mb --memory-swap=1800mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
	I0310 20:35:34.865904   21276 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-20210310203447-6496 --name force-systemd-flag-20210310203447-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-20210310203447-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-20210310203447-6496 --volume force-systemd-flag-20210310203447-6496:/var --security-opt apparmor=unconfined --memory=1800mb --memory-swap=1800mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e: (28.8901554s)
	I0310 20:35:34.873431   21276 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20210310203447-6496 --format={{.State.Running}}
	I0310 20:35:35.525483   21276 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20210310203447-6496 --format={{.State.Status}}
	I0310 20:35:36.127451   21276 cli_runner.go:115] Run: docker exec force-systemd-flag-20210310203447-6496 stat /var/lib/dpkg/alternatives/iptables
	I0310 20:35:37.217774   21276 cli_runner.go:168] Completed: docker exec force-systemd-flag-20210310203447-6496 stat /var/lib/dpkg/alternatives/iptables: (1.0903248s)
	I0310 20:35:37.217774   21276 oci.go:278] the created container "force-systemd-flag-20210310203447-6496" has a running status.
	I0310 20:35:37.218132   21276 kic.go:206] Creating ssh key for kic: C:\Users\jenkins\.minikube\machines\force-systemd-flag-20210310203447-6496\id_rsa...
	I0310 20:35:37.648858   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\machines\force-systemd-flag-20210310203447-6496\id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0310 20:35:37.675451   21276 kic_runner.go:188] docker (temp): C:\Users\jenkins\.minikube\machines\force-systemd-flag-20210310203447-6496\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0310 20:35:38.695404   21276 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20210310203447-6496 --format={{.State.Status}}
	I0310 20:35:39.320477   21276 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0310 20:35:39.320802   21276 kic_runner.go:115] Args: [docker exec --privileged force-systemd-flag-20210310203447-6496 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0310 20:35:40.508252   21276 kic_runner.go:124] Done: [docker exec --privileged force-systemd-flag-20210310203447-6496 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.1874512s)
	I0310 20:35:40.508882   21276 kic.go:240] ensuring only current user has permissions to key file located at : C:\Users\jenkins\.minikube\machines\force-systemd-flag-20210310203447-6496\id_rsa...
	I0310 20:35:41.275710   21276 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20210310203447-6496 --format={{.State.Status}}
	I0310 20:35:41.889954   21276 machine.go:88] provisioning docker machine ...
	I0310 20:35:41.890701   21276 ubuntu.go:169] provisioning hostname "force-systemd-flag-20210310203447-6496"
	I0310 20:35:41.900557   21276 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20210310203447-6496
	I0310 20:35:42.540724   21276 main.go:121] libmachine: Using SSH client type: native
	I0310 20:35:42.545615   21276 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55128 <nil> <nil>}
	I0310 20:35:42.545615   21276 main.go:121] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-20210310203447-6496 && echo "force-systemd-flag-20210310203447-6496" | sudo tee /etc/hostname
	I0310 20:35:43.744490   21276 main.go:121] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-20210310203447-6496
	
	I0310 20:35:43.744937   21276 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20210310203447-6496
	I0310 20:35:44.386637   21276 main.go:121] libmachine: Using SSH client type: native
	I0310 20:35:44.387048   21276 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55128 <nil> <nil>}
	I0310 20:35:44.387048   21276 main.go:121] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-20210310203447-6496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-20210310203447-6496/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-20210310203447-6496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0310 20:35:44.952480   21276 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	I0310 20:35:44.952480   21276 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	I0310 20:35:44.952480   21276 ubuntu.go:177] setting up certificates
	I0310 20:35:44.952480   21276 provision.go:83] configureAuth start
	I0310 20:35:44.968262   21276 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-20210310203447-6496
	I0310 20:35:45.589983   21276 provision.go:137] copyHostCerts
	I0310 20:35:45.590465   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\ca.pem -> C:\Users\jenkins\.minikube/ca.pem
	I0310 20:35:45.590874   21276 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	I0310 20:35:45.590874   21276 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	I0310 20:35:45.591294   21276 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	I0310 20:35:45.594578   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\cert.pem -> C:\Users\jenkins\.minikube/cert.pem
	I0310 20:35:45.594578   21276 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	I0310 20:35:45.594578   21276 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	I0310 20:35:45.594578   21276 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	I0310 20:35:45.594578   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\key.pem -> C:\Users\jenkins\.minikube/key.pem
	I0310 20:35:45.594578   21276 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	I0310 20:35:45.594578   21276 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	I0310 20:35:45.594578   21276 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	I0310 20:35:45.594578   21276 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.force-systemd-flag-20210310203447-6496 san=[172.17.0.4 127.0.0.1 localhost 127.0.0.1 minikube force-systemd-flag-20210310203447-6496]
	I0310 20:35:45.755174   21276 provision.go:165] copyRemoteCerts
	I0310 20:35:45.765113   21276 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0310 20:35:45.772295   21276 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20210310203447-6496
	I0310 20:35:46.379636   21276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55128 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-flag-20210310203447-6496\id_rsa Username:docker}
	I0310 20:35:46.857523   21276 ssh_runner.go:189] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.0924116s)
	I0310 20:35:46.857743   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0310 20:35:46.858314   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0310 20:35:47.270091   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0310 20:35:47.278561   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1285 bytes)
	I0310 20:35:47.674514   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0310 20:35:47.675678   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0310 20:35:48.212392   21276 provision.go:86] duration metric: configureAuth took 3.2591376s
	I0310 20:35:48.212392   21276 ubuntu.go:193] setting minikube options for container-runtime
	I0310 20:35:48.222196   21276 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20210310203447-6496
	I0310 20:35:48.859538   21276 main.go:121] libmachine: Using SSH client type: native
	I0310 20:35:48.859538   21276 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55128 <nil> <nil>}
	I0310 20:35:48.859538   21276 main.go:121] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0310 20:35:49.531501   21276 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0310 20:35:49.531501   21276 ubuntu.go:71] root file system type: overlay
	I0310 20:35:49.532446   21276 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	I0310 20:35:49.541881   21276 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20210310203447-6496
	I0310 20:35:50.224792   21276 main.go:121] libmachine: Using SSH client type: native
	I0310 20:35:50.225343   21276 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55128 <nil> <nil>}
	I0310 20:35:50.225627   21276 main.go:121] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0310 20:35:51.015913   21276 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0310 20:35:51.023904   21276 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20210310203447-6496
	I0310 20:35:51.677291   21276 main.go:121] libmachine: Using SSH client type: native
	I0310 20:35:51.678140   21276 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55128 <nil> <nil>}
	I0310 20:35:51.678140   21276 main.go:121] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0310 20:36:00.412701   21276 main.go:121] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-01-29 14:31:32.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-03-10 20:35:50.998177000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0310 20:36:00.412701   21276 machine.go:91] provisioned docker machine in 18.5227768s
	I0310 20:36:00.412701   21276 client.go:171] LocalClient.Create took 1m5.4816821s
	I0310 20:36:00.413075   21276 start.go:168] duration metric: libmachine.API.Create for "force-systemd-flag-20210310203447-6496" took 1m5.4826655s
	I0310 20:36:00.413075   21276 start.go:267] post-start starting for "force-systemd-flag-20210310203447-6496" (driver="docker")
	I0310 20:36:00.413075   21276 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0310 20:36:00.421715   21276 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0310 20:36:00.429551   21276 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20210310203447-6496
	I0310 20:36:01.067504   21276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55128 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-flag-20210310203447-6496\id_rsa Username:docker}
	I0310 20:36:01.400645   21276 ssh_runner.go:149] Run: cat /etc/os-release
	I0310 20:36:01.430749   21276 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0310 20:36:01.430749   21276 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0310 20:36:01.430749   21276 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0310 20:36:01.430749   21276 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0310 20:36:01.430997   21276 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	I0310 20:36:01.431550   21276 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	I0310 20:36:01.434442   21276 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	I0310 20:36:01.434442   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> /etc/test/nested/copy/2512/hosts
	I0310 20:36:01.435832   21276 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	I0310 20:36:01.435832   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> /etc/test/nested/copy/4452/hosts
	I0310 20:36:01.449899   21276 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	I0310 20:36:01.563549   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	I0310 20:36:01.809112   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	I0310 20:36:02.019187   21276 start.go:270] post-start completed in 1.6061145s
	I0310 20:36:02.058103   21276 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-20210310203447-6496
	I0310 20:36:02.648361   21276 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\force-systemd-flag-20210310203447-6496\config.json ...
	I0310 20:36:02.686171   21276 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0310 20:36:02.698746   21276 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20210310203447-6496
	I0310 20:36:03.322512   21276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55128 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-flag-20210310203447-6496\id_rsa Username:docker}
	I0310 20:36:03.553393   21276 start.go:129] duration metric: createHost completed in 1m8.6350539s
	I0310 20:36:03.553662   21276 start.go:80] releasing machines lock for "force-systemd-flag-20210310203447-6496", held for 1m8.6356955s
	I0310 20:36:03.570016   21276 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-20210310203447-6496
	I0310 20:36:04.133521   21276 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0310 20:36:04.144001   21276 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20210310203447-6496
	I0310 20:36:04.144745   21276 ssh_runner.go:149] Run: systemctl --version
	I0310 20:36:04.152415   21276 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20210310203447-6496
	I0310 20:36:04.785820   21276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55128 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-flag-20210310203447-6496\id_rsa Username:docker}
	I0310 20:36:04.831902   21276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55128 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-flag-20210310203447-6496\id_rsa Username:docker}
	I0310 20:36:05.187006   21276 ssh_runner.go:189] Completed: systemctl --version: (1.0422626s)
	I0310 20:36:05.196985   21276 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0310 20:36:05.392280   21276 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 20:36:05.732554   21276 cruntime.go:206] skipping containerd shutdown because we are bound to it
	I0310 20:36:05.742798   21276 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0310 20:36:05.743965   21276 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.6103026s)
	I0310 20:36:05.813560   21276 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0310 20:36:06.015596   21276 docker.go:329] Forcing docker to use systemd as cgroup manager...
	I0310 20:36:06.015697   21276 ssh_runner.go:316] scp memory --> /etc/docker/daemon.json (143 bytes)
	I0310 20:36:06.213815   21276 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0310 20:36:07.280895   21276 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.0670818s)
	I0310 20:36:07.289579   21276 ssh_runner.go:149] Run: sudo systemctl restart docker
	I0310 20:36:12.510210   21276 ssh_runner.go:189] Completed: sudo systemctl restart docker: (5.2196302s)
	I0310 20:36:12.519012   21276 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0310 20:36:13.518304   21276 out.go:150] * Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	I0310 20:36:13.527333   21276 cli_runner.go:115] Run: docker exec -t force-systemd-flag-20210310203447-6496 dig +short host.docker.internal
	I0310 20:36:15.202985   21276 cli_runner.go:168] Completed: docker exec -t force-systemd-flag-20210310203447-6496 dig +short host.docker.internal: (1.6756554s)
	I0310 20:36:15.202985   21276 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	I0310 20:36:15.218118   21276 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0310 20:36:15.244575   21276 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0310 20:36:15.369085   21276 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" force-systemd-flag-20210310203447-6496
	I0310 20:36:16.001001   21276 localpath.go:92] copying C:\Users\jenkins\.minikube\client.crt -> C:\Users\jenkins\.minikube\profiles\force-systemd-flag-20210310203447-6496\client.crt
	I0310 20:36:16.006908   21276 localpath.go:117] copying C:\Users\jenkins\.minikube\client.key -> C:\Users\jenkins\.minikube\profiles\force-systemd-flag-20210310203447-6496\client.key
	I0310 20:36:16.010681   21276 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	I0310 20:36:16.011541   21276 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	I0310 20:36:16.031018   21276 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 20:36:16.896242   21276 docker.go:423] Got preloaded images: 
	I0310 20:36:16.896562   21276 docker.go:429] k8s.gcr.io/kube-proxy:v1.20.2 wasn't preloaded
	I0310 20:36:16.907463   21276 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0310 20:36:17.054379   21276 ssh_runner.go:149] Run: which lz4
	I0310 20:36:17.099130   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0310 20:36:17.111211   21276 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0310 20:36:17.166228   21276 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0310 20:36:17.166541   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (515083977 bytes)
	I0310 20:37:25.304431   21276 docker.go:388] Took 68.204363 seconds to copy over tarball
	I0310 20:37:25.315586   21276 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0310 20:38:03.055684   21276 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (37.7401539s)
	I0310 20:38:03.056124   21276 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0310 20:38:05.005917   21276 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0310 20:38:05.089337   21276 ssh_runner.go:316] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3125 bytes)
	I0310 20:38:05.375555   21276 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0310 20:38:06.802540   21276 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.4269871s)
	I0310 20:38:06.811785   21276 ssh_runner.go:149] Run: sudo systemctl restart docker
	I0310 20:38:13.046511   21276 ssh_runner.go:189] Completed: sudo systemctl restart docker: (6.2347357s)
	I0310 20:38:13.060626   21276 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 20:38:14.108072   21276 ssh_runner.go:189] Completed: docker images --format {{.Repository}}:{{.Tag}}: (1.047448s)
	I0310 20:38:14.108072   21276 docker.go:423] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.20.2
	k8s.gcr.io/kube-apiserver:v1.20.2
	k8s.gcr.io/kube-controller-manager:v1.20.2
	k8s.gcr.io/kube-scheduler:v1.20.2
	kubernetesui/dashboard:v2.1.0
	gcr.io/k8s-minikube/storage-provisioner:v4
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0310 20:38:14.108674   21276 cache_images.go:73] Images are preloaded, skipping loading
	I0310 20:38:14.116368   21276 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0310 20:38:16.234856   21276 ssh_runner.go:189] Completed: docker info --format {{.CgroupDriver}}: (2.1184909s)
	I0310 20:38:16.234856   21276 cni.go:74] Creating CNI manager for ""
	I0310 20:38:16.234856   21276 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	I0310 20:38:16.234856   21276 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0310 20:38:16.234856   21276 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.4 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-20210310203447-6496 NodeName:force-systemd-flag-20210310203447-6496 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.4 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0310 20:38:16.235838   21276 kubeadm.go:154] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.0.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "force-systemd-flag-20210310203447-6496"
	  kubeletExtraArgs:
	    node-ip: 172.17.0.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.0.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	
	I0310 20:38:16.235838   21276 kubeadm.go:919] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=force-systemd-flag-20210310203447-6496 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.2 ClusterName:force-systemd-flag-20210310203447-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0310 20:38:16.245700   21276 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
	I0310 20:38:16.312343   21276 binaries.go:44] Found k8s binaries, skipping transfer
	I0310 20:38:16.324956   21276 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0310 20:38:16.385453   21276 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0310 20:38:16.614583   21276 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0310 20:38:16.819306   21276 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1863 bytes)
	I0310 20:38:16.973516   21276 ssh_runner.go:149] Run: grep 172.17.0.4	control-plane.minikube.internal$ /etc/hosts
	I0310 20:38:17.005084   21276 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "172.17.0.4	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0310 20:38:17.176821   21276 certs.go:52] Setting up C:\Users\jenkins\.minikube\profiles\force-systemd-flag-20210310203447-6496 for IP: 172.17.0.4
	I0310 20:38:17.177904   21276 certs.go:171] skipping minikubeCA CA generation: C:\Users\jenkins\.minikube\ca.key
	I0310 20:38:17.178416   21276 certs.go:171] skipping proxyClientCA CA generation: C:\Users\jenkins\.minikube\proxy-client-ca.key
	I0310 20:38:17.179458   21276 certs.go:275] skipping minikube-user signed cert generation: C:\Users\jenkins\.minikube\profiles\force-systemd-flag-20210310203447-6496\client.key
	I0310 20:38:17.179857   21276 certs.go:279] generating minikube signed cert: C:\Users\jenkins\.minikube\profiles\force-systemd-flag-20210310203447-6496\apiserver.key.fb01c024
	I0310 20:38:17.179857   21276 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\force-systemd-flag-20210310203447-6496\apiserver.crt.fb01c024 with IP's: [172.17.0.4 10.96.0.1 127.0.0.1 10.0.0.1]
	I0310 20:38:17.463538   21276 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\force-systemd-flag-20210310203447-6496\apiserver.crt.fb01c024 ...
	I0310 20:38:17.463538   21276 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\force-systemd-flag-20210310203447-6496\apiserver.crt.fb01c024: {Name:mk3310cb6bb00e570b8fd0d108f4f974398f77ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:38:17.480456   21276 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\force-systemd-flag-20210310203447-6496\apiserver.key.fb01c024 ...
	I0310 20:38:17.480456   21276 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\force-systemd-flag-20210310203447-6496\apiserver.key.fb01c024: {Name:mk1f6be6c6c3fa1ef517b057cfd453fa9ca3fd5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:38:17.495455   21276 certs.go:290] copying C:\Users\jenkins\.minikube\profiles\force-systemd-flag-20210310203447-6496\apiserver.crt.fb01c024 -> C:\Users\jenkins\.minikube\profiles\force-systemd-flag-20210310203447-6496\apiserver.crt
	I0310 20:38:17.498464   21276 certs.go:294] copying C:\Users\jenkins\.minikube\profiles\force-systemd-flag-20210310203447-6496\apiserver.key.fb01c024 -> C:\Users\jenkins\.minikube\profiles\force-systemd-flag-20210310203447-6496\apiserver.key
	I0310 20:38:17.501467   21276 certs.go:279] generating aggregator signed cert: C:\Users\jenkins\.minikube\profiles\force-systemd-flag-20210310203447-6496\proxy-client.key
	I0310 20:38:17.501467   21276 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\force-systemd-flag-20210310203447-6496\proxy-client.crt with IP's: []
	I0310 20:38:18.078573   21276 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\force-systemd-flag-20210310203447-6496\proxy-client.crt ...
	I0310 20:38:18.078573   21276 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\force-systemd-flag-20210310203447-6496\proxy-client.crt: {Name:mke52110b06828271b8a3114528fddea0db1ff43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:38:18.093943   21276 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\force-systemd-flag-20210310203447-6496\proxy-client.key ...
	I0310 20:38:18.093943   21276 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\force-systemd-flag-20210310203447-6496\proxy-client.key: {Name:mkd071dcacce24dd76894ca9accad43918d4a7ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:38:18.108514   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\profiles\force-systemd-flag-20210310203447-6496\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0310 20:38:18.109059   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\profiles\force-systemd-flag-20210310203447-6496\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0310 20:38:18.109316   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\profiles\force-systemd-flag-20210310203447-6496\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0310 20:38:18.109595   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\profiles\force-systemd-flag-20210310203447-6496\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0310 20:38:18.110292   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0310 20:38:18.110736   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0310 20:38:18.111136   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0310 20:38:18.111433   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0310 20:38:18.112625   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992.pem (1338 bytes)
	W0310 20:38:18.113542   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.113805   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140.pem (1338 bytes)
	W0310 20:38:18.114239   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.114503   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156.pem (1338 bytes)
	W0310 20:38:18.114966   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.115427   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056.pem (1338 bytes)
	W0310 20:38:18.115934   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.116191   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476.pem (1338 bytes)
	W0310 20:38:18.117078   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.117404   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728.pem (1338 bytes)
	W0310 20:38:18.117782   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.118069   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984.pem (1338 bytes)
	W0310 20:38:18.118994   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.118994   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232.pem (1338 bytes)
	W0310 20:38:18.119461   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.119976   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512.pem (1338 bytes)
	W0310 20:38:18.120661   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.120925   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056.pem (1338 bytes)
	W0310 20:38:18.121568   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.121865   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516.pem (1338 bytes)
	W0310 20:38:18.122377   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.122628   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352.pem (1338 bytes)
	W0310 20:38:18.122824   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.122824   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920.pem (1338 bytes)
	W0310 20:38:18.123785   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.124183   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052.pem (1338 bytes)
	W0310 20:38:18.124435   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.124435   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452.pem (1338 bytes)
	W0310 20:38:18.125065   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.125367   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588.pem (1338 bytes)
	W0310 20:38:18.125367   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.125367   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944.pem (1338 bytes)
	W0310 20:38:18.126225   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.126432   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040.pem (1338 bytes)
	W0310 20:38:18.126432   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.127056   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172.pem (1338 bytes)
	W0310 20:38:18.127356   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.127356   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372.pem (1338 bytes)
	W0310 20:38:18.127356   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.127356   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396.pem (1338 bytes)
	W0310 20:38:18.128249   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.128249   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700.pem (1338 bytes)
	W0310 20:38:18.128249   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.128249   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736.pem (1338 bytes)
	W0310 20:38:18.129248   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.129248   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368.pem (1338 bytes)
	W0310 20:38:18.129248   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.129882   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492.pem (1338 bytes)
	W0310 20:38:18.130242   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.130242   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496.pem (1338 bytes)
	W0310 20:38:18.130242   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.130242   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552.pem (1338 bytes)
	W0310 20:38:18.131080   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.131080   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692.pem (1338 bytes)
	W0310 20:38:18.131080   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.131080   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856.pem (1338 bytes)
	W0310 20:38:18.132080   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.132080   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024.pem (1338 bytes)
	W0310 20:38:18.132080   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.132080   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160.pem (1338 bytes)
	W0310 20:38:18.132080   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.132080   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432.pem (1338 bytes)
	W0310 20:38:18.133083   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.133083   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440.pem (1338 bytes)
	W0310 20:38:18.133083   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.133083   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452.pem (1338 bytes)
	W0310 20:38:18.133083   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.134080   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800.pem (1338 bytes)
	W0310 20:38:18.134080   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.134080   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464.pem (1338 bytes)
	W0310 20:38:18.134080   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.134080   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748.pem (1338 bytes)
	W0310 20:38:18.135083   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.135083   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088.pem (1338 bytes)
	W0310 20:38:18.135083   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.135083   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520.pem (1338 bytes)
	W0310 20:38:18.135083   21276 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520_empty.pem, impossibly tiny 0 bytes
	I0310 20:38:18.136083   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca-key.pem (1679 bytes)
	I0310 20:38:18.136083   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca.pem (1078 bytes)
	I0310 20:38:18.136083   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\cert.pem (1123 bytes)
	I0310 20:38:18.136083   21276 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\key.pem (1679 bytes)
	I0310 20:38:18.137083   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\7452.pem -> /usr/share/ca-certificates/7452.pem
	I0310 20:38:18.137083   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\7024.pem -> /usr/share/ca-certificates/7024.pem
	I0310 20:38:18.137083   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\9520.pem -> /usr/share/ca-certificates/9520.pem
	I0310 20:38:18.137083   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\1984.pem -> /usr/share/ca-certificates/1984.pem
	I0310 20:38:18.137083   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\8748.pem -> /usr/share/ca-certificates/8748.pem
	I0310 20:38:18.138082   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\352.pem -> /usr/share/ca-certificates/352.pem
	I0310 20:38:18.138082   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\6856.pem -> /usr/share/ca-certificates/6856.pem
	I0310 20:38:18.138082   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\232.pem -> /usr/share/ca-certificates/232.pem
	I0310 20:38:18.138082   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\9088.pem -> /usr/share/ca-certificates/9088.pem
	I0310 20:38:18.138082   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\1140.pem -> /usr/share/ca-certificates/1140.pem
	I0310 20:38:18.139080   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\5396.pem -> /usr/share/ca-certificates/5396.pem
	I0310 20:38:18.139337   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\1156.pem -> /usr/share/ca-certificates/1156.pem
	I0310 20:38:18.139595   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\4588.pem -> /usr/share/ca-certificates/4588.pem
	I0310 20:38:18.140415   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\2512.pem -> /usr/share/ca-certificates/2512.pem
	I0310 20:38:18.140646   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0310 20:38:18.140877   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\4452.pem -> /usr/share/ca-certificates/4452.pem
	I0310 20:38:18.141240   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\6496.pem -> /usr/share/ca-certificates/6496.pem
	I0310 20:38:18.141464   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\4944.pem -> /usr/share/ca-certificates/4944.pem
	I0310 20:38:18.141658   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\1476.pem -> /usr/share/ca-certificates/1476.pem
	I0310 20:38:18.141931   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\6492.pem -> /usr/share/ca-certificates/6492.pem
	I0310 20:38:18.142215   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\5040.pem -> /usr/share/ca-certificates/5040.pem
	I0310 20:38:18.142388   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\3056.pem -> /usr/share/ca-certificates/3056.pem
	I0310 20:38:18.142595   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\5372.pem -> /usr/share/ca-certificates/5372.pem
	I0310 20:38:18.142818   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\7160.pem -> /usr/share/ca-certificates/7160.pem
	I0310 20:38:18.143064   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\800.pem -> /usr/share/ca-certificates/800.pem
	I0310 20:38:18.143300   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\12056.pem -> /usr/share/ca-certificates/12056.pem
	I0310 20:38:18.143529   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\4052.pem -> /usr/share/ca-certificates/4052.pem
	I0310 20:38:18.143735   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\6368.pem -> /usr/share/ca-certificates/6368.pem
	I0310 20:38:18.143923   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\10992.pem -> /usr/share/ca-certificates/10992.pem
	I0310 20:38:18.144189   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\5700.pem -> /usr/share/ca-certificates/5700.pem
	I0310 20:38:18.144419   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\5736.pem -> /usr/share/ca-certificates/5736.pem
	I0310 20:38:18.144641   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\1728.pem -> /usr/share/ca-certificates/1728.pem
	I0310 20:38:18.144641   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\7440.pem -> /usr/share/ca-certificates/7440.pem
	I0310 20:38:18.145099   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\3920.pem -> /usr/share/ca-certificates/3920.pem
	I0310 20:38:18.145249   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\6692.pem -> /usr/share/ca-certificates/6692.pem
	I0310 20:38:18.145503   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\8464.pem -> /usr/share/ca-certificates/8464.pem
	I0310 20:38:18.145795   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\5172.pem -> /usr/share/ca-certificates/5172.pem
	I0310 20:38:18.146075   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\6552.pem -> /usr/share/ca-certificates/6552.pem
	I0310 20:38:18.146397   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\7432.pem -> /usr/share/ca-certificates/7432.pem
	I0310 20:38:18.146604   21276 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\3516.pem -> /usr/share/ca-certificates/3516.pem
	I0310 20:38:18.151309   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\force-systemd-flag-20210310203447-6496\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0310 20:38:18.475185   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\force-systemd-flag-20210310203447-6496\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0310 20:38:18.678247   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\force-systemd-flag-20210310203447-6496\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0310 20:38:18.965080   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\force-systemd-flag-20210310203447-6496\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0310 20:38:19.322527   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0310 20:38:19.607344   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0310 20:38:19.795150   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0310 20:38:20.053460   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0310 20:38:20.422812   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	I0310 20:38:20.865178   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7024.pem --> /usr/share/ca-certificates/7024.pem (1338 bytes)
	I0310 20:38:21.214842   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9520.pem --> /usr/share/ca-certificates/9520.pem (1338 bytes)
	I0310 20:38:21.450602   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1984.pem --> /usr/share/ca-certificates/1984.pem (1338 bytes)
	I0310 20:38:21.662396   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8748.pem --> /usr/share/ca-certificates/8748.pem (1338 bytes)
	I0310 20:38:21.891563   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\352.pem --> /usr/share/ca-certificates/352.pem (1338 bytes)
	I0310 20:38:22.080899   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6856.pem --> /usr/share/ca-certificates/6856.pem (1338 bytes)
	I0310 20:38:22.363351   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\232.pem --> /usr/share/ca-certificates/232.pem (1338 bytes)
	I0310 20:38:22.517436   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9088.pem --> /usr/share/ca-certificates/9088.pem (1338 bytes)
	I0310 20:38:22.761967   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1140.pem --> /usr/share/ca-certificates/1140.pem (1338 bytes)
	I0310 20:38:22.977533   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5396.pem --> /usr/share/ca-certificates/5396.pem (1338 bytes)
	I0310 20:38:23.204370   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1156.pem --> /usr/share/ca-certificates/1156.pem (1338 bytes)
	I0310 20:38:23.397919   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4588.pem --> /usr/share/ca-certificates/4588.pem (1338 bytes)
	I0310 20:38:23.647906   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\2512.pem --> /usr/share/ca-certificates/2512.pem (1338 bytes)
	I0310 20:38:23.923520   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0310 20:38:24.143360   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4452.pem --> /usr/share/ca-certificates/4452.pem (1338 bytes)
	I0310 20:38:24.411825   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6496.pem --> /usr/share/ca-certificates/6496.pem (1338 bytes)
	I0310 20:38:24.660576   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4944.pem --> /usr/share/ca-certificates/4944.pem (1338 bytes)
	I0310 20:38:24.910980   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1476.pem --> /usr/share/ca-certificates/1476.pem (1338 bytes)
	I0310 20:38:25.198494   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6492.pem --> /usr/share/ca-certificates/6492.pem (1338 bytes)
	I0310 20:38:25.366730   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5040.pem --> /usr/share/ca-certificates/5040.pem (1338 bytes)
	I0310 20:38:25.946851   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3056.pem --> /usr/share/ca-certificates/3056.pem (1338 bytes)
	I0310 20:38:26.365844   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5372.pem --> /usr/share/ca-certificates/5372.pem (1338 bytes)
	I0310 20:38:26.601803   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7160.pem --> /usr/share/ca-certificates/7160.pem (1338 bytes)
	I0310 20:38:26.861580   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\800.pem --> /usr/share/ca-certificates/800.pem (1338 bytes)
	I0310 20:38:27.238585   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\12056.pem --> /usr/share/ca-certificates/12056.pem (1338 bytes)
	I0310 20:38:27.491314   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4052.pem --> /usr/share/ca-certificates/4052.pem (1338 bytes)
	I0310 20:38:27.797777   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6368.pem --> /usr/share/ca-certificates/6368.pem (1338 bytes)
	I0310 20:38:28.109101   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\10992.pem --> /usr/share/ca-certificates/10992.pem (1338 bytes)
	I0310 20:38:28.334431   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5700.pem --> /usr/share/ca-certificates/5700.pem (1338 bytes)
	I0310 20:38:28.592226   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5736.pem --> /usr/share/ca-certificates/5736.pem (1338 bytes)
	I0310 20:38:28.869282   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1728.pem --> /usr/share/ca-certificates/1728.pem (1338 bytes)
	I0310 20:38:29.089384   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7440.pem --> /usr/share/ca-certificates/7440.pem (1338 bytes)
	I0310 20:38:29.476230   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3920.pem --> /usr/share/ca-certificates/3920.pem (1338 bytes)
	I0310 20:38:29.842288   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6692.pem --> /usr/share/ca-certificates/6692.pem (1338 bytes)
	I0310 20:38:30.305358   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8464.pem --> /usr/share/ca-certificates/8464.pem (1338 bytes)
	I0310 20:38:30.667413   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5172.pem --> /usr/share/ca-certificates/5172.pem (1338 bytes)
	I0310 20:38:30.877235   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6552.pem --> /usr/share/ca-certificates/6552.pem (1338 bytes)
	I0310 20:38:31.120677   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7432.pem --> /usr/share/ca-certificates/7432.pem (1338 bytes)
	I0310 20:38:31.403706   21276 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3516.pem --> /usr/share/ca-certificates/3516.pem (1338 bytes)
	I0310 20:38:31.715465   21276 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0310 20:38:31.920707   21276 ssh_runner.go:149] Run: openssl version
	I0310 20:38:31.977036   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4588.pem && ln -fs /usr/share/ca-certificates/4588.pem /etc/ssl/certs/4588.pem"
	I0310 20:38:32.075347   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4588.pem
	I0310 20:38:32.118932   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  3 21:41 /usr/share/ca-certificates/4588.pem
	I0310 20:38:32.129741   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4588.pem
	I0310 20:38:32.184928   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4588.pem /etc/ssl/certs/51391683.0"
	I0310 20:38:32.287077   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2512.pem && ln -fs /usr/share/ca-certificates/2512.pem /etc/ssl/certs/2512.pem"
	I0310 20:38:32.384539   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2512.pem
	I0310 20:38:32.435626   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:32 /usr/share/ca-certificates/2512.pem
	I0310 20:38:32.451821   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2512.pem
	I0310 20:38:32.577296   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2512.pem /etc/ssl/certs/51391683.0"
	I0310 20:38:32.646479   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1140.pem && ln -fs /usr/share/ca-certificates/1140.pem /etc/ssl/certs/1140.pem"
	I0310 20:38:32.741176   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1140.pem
	I0310 20:38:32.778943   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 02:25 /usr/share/ca-certificates/1140.pem
	I0310 20:38:32.793433   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1140.pem
	I0310 20:38:32.849359   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1140.pem /etc/ssl/certs/51391683.0"
	I0310 20:38:33.015071   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5396.pem && ln -fs /usr/share/ca-certificates/5396.pem /etc/ssl/certs/5396.pem"
	I0310 20:38:33.134659   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5396.pem
	I0310 20:38:33.178811   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  8 23:38 /usr/share/ca-certificates/5396.pem
	I0310 20:38:33.191453   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5396.pem
	I0310 20:38:33.286402   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5396.pem /etc/ssl/certs/51391683.0"
	I0310 20:38:33.373791   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1156.pem && ln -fs /usr/share/ca-certificates/1156.pem /etc/ssl/certs/1156.pem"
	I0310 20:38:33.449317   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1156.pem
	I0310 20:38:33.496683   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 00:26 /usr/share/ca-certificates/1156.pem
	I0310 20:38:33.510155   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1156.pem
	I0310 20:38:33.587290   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1156.pem /etc/ssl/certs/51391683.0"
	I0310 20:38:33.657197   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1476.pem && ln -fs /usr/share/ca-certificates/1476.pem /etc/ssl/certs/1476.pem"
	I0310 20:38:33.791694   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1476.pem
	I0310 20:38:33.844121   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:50 /usr/share/ca-certificates/1476.pem
	I0310 20:38:33.858507   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1476.pem
	I0310 20:38:33.912823   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1476.pem /etc/ssl/certs/51391683.0"
	I0310 20:38:34.153005   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6492.pem && ln -fs /usr/share/ca-certificates/6492.pem /etc/ssl/certs/6492.pem"
	I0310 20:38:34.329877   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6492.pem
	I0310 20:38:34.367445   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 01:11 /usr/share/ca-certificates/6492.pem
	I0310 20:38:34.380757   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6492.pem
	I0310 20:38:34.442635   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6492.pem /etc/ssl/certs/51391683.0"
	I0310 20:38:34.611545   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0310 20:38:34.706044   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0310 20:38:34.749590   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Jan  5 23:33 /usr/share/ca-certificates/minikubeCA.pem
	I0310 20:38:34.769746   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0310 20:38:34.883458   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0310 20:38:34.950107   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4452.pem && ln -fs /usr/share/ca-certificates/4452.pem /etc/ssl/certs/4452.pem"
	I0310 20:38:35.032640   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4452.pem
	I0310 20:38:35.063362   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 22 23:48 /usr/share/ca-certificates/4452.pem
	I0310 20:38:35.073712   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4452.pem
	I0310 20:38:35.119581   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4452.pem /etc/ssl/certs/51391683.0"
	I0310 20:38:35.183930   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6496.pem && ln -fs /usr/share/ca-certificates/6496.pem /etc/ssl/certs/6496.pem"
	I0310 20:38:35.263013   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6496.pem
	I0310 20:38:35.290044   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 19:16 /usr/share/ca-certificates/6496.pem
	I0310 20:38:35.303412   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6496.pem
	I0310 20:38:35.438645   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6496.pem /etc/ssl/certs/51391683.0"
	I0310 20:38:35.601069   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4944.pem && ln -fs /usr/share/ca-certificates/4944.pem /etc/ssl/certs/4944.pem"
	I0310 20:38:35.700652   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4944.pem
	I0310 20:38:35.735716   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  9 23:40 /usr/share/ca-certificates/4944.pem
	I0310 20:38:35.746439   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4944.pem
	I0310 20:38:35.793501   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4944.pem /etc/ssl/certs/51391683.0"
	I0310 20:38:35.876192   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5372.pem && ln -fs /usr/share/ca-certificates/5372.pem /etc/ssl/certs/5372.pem"
	I0310 20:38:35.991589   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5372.pem
	I0310 20:38:36.037063   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 23 00:40 /usr/share/ca-certificates/5372.pem
	I0310 20:38:36.053575   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5372.pem
	I0310 20:38:36.108464   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5372.pem /etc/ssl/certs/51391683.0"
	I0310 20:38:36.154037   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5040.pem && ln -fs /usr/share/ca-certificates/5040.pem /etc/ssl/certs/5040.pem"
	I0310 20:38:36.227004   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5040.pem
	I0310 20:38:36.247919   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 08:36 /usr/share/ca-certificates/5040.pem
	I0310 20:38:36.259906   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5040.pem
	I0310 20:38:36.302919   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5040.pem /etc/ssl/certs/51391683.0"
	I0310 20:38:36.399398   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3056.pem && ln -fs /usr/share/ca-certificates/3056.pem /etc/ssl/certs/3056.pem"
	I0310 20:38:36.478099   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3056.pem
	I0310 20:38:36.503751   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:11 /usr/share/ca-certificates/3056.pem
	I0310 20:38:36.511993   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3056.pem
	I0310 20:38:36.553931   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3056.pem /etc/ssl/certs/51391683.0"
	I0310 20:38:36.641701   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4052.pem && ln -fs /usr/share/ca-certificates/4052.pem /etc/ssl/certs/4052.pem"
	I0310 20:38:36.739661   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4052.pem
	I0310 20:38:36.781581   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 18:40 /usr/share/ca-certificates/4052.pem
	I0310 20:38:36.791481   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4052.pem
	I0310 20:38:36.871645   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4052.pem /etc/ssl/certs/51391683.0"
	I0310 20:38:36.930101   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6368.pem && ln -fs /usr/share/ca-certificates/6368.pem /etc/ssl/certs/6368.pem"
	I0310 20:38:37.016408   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6368.pem
	I0310 20:38:37.075397   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 19:11 /usr/share/ca-certificates/6368.pem
	I0310 20:38:37.090775   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6368.pem
	I0310 20:38:37.139836   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6368.pem /etc/ssl/certs/51391683.0"
	I0310 20:38:37.295535   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7160.pem && ln -fs /usr/share/ca-certificates/7160.pem /etc/ssl/certs/7160.pem"
	I0310 20:38:37.380901   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7160.pem
	I0310 20:38:37.411549   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 12 04:51 /usr/share/ca-certificates/7160.pem
	I0310 20:38:37.423478   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7160.pem
	I0310 20:38:37.510144   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7160.pem /etc/ssl/certs/51391683.0"
	I0310 20:38:37.689832   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/800.pem && ln -fs /usr/share/ca-certificates/800.pem /etc/ssl/certs/800.pem"
	I0310 20:38:37.803347   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/800.pem
	I0310 20:38:37.854553   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 24 01:48 /usr/share/ca-certificates/800.pem
	I0310 20:38:37.869196   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/800.pem
	I0310 20:38:37.927399   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/800.pem /etc/ssl/certs/51391683.0"
	I0310 20:38:38.078438   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12056.pem && ln -fs /usr/share/ca-certificates/12056.pem /etc/ssl/certs/12056.pem"
	I0310 20:38:38.224838   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/12056.pem
	I0310 20:38:38.255450   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  6 07:21 /usr/share/ca-certificates/12056.pem
	I0310 20:38:38.265394   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12056.pem
	I0310 20:38:38.318528   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12056.pem /etc/ssl/certs/51391683.0"
	I0310 20:38:38.445151   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5736.pem && ln -fs /usr/share/ca-certificates/5736.pem /etc/ssl/certs/5736.pem"
	I0310 20:38:38.542264   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5736.pem
	I0310 20:38:38.571437   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 25 23:18 /usr/share/ca-certificates/5736.pem
	I0310 20:38:38.581571   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5736.pem
	I0310 20:38:38.624797   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5736.pem /etc/ssl/certs/51391683.0"
	I0310 20:38:38.707857   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1728.pem && ln -fs /usr/share/ca-certificates/1728.pem /etc/ssl/certs/1728.pem"
	I0310 20:38:38.794255   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1728.pem
	I0310 20:38:38.879387   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 20:57 /usr/share/ca-certificates/1728.pem
	I0310 20:38:38.892207   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1728.pem
	I0310 20:38:38.944985   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0"
	I0310 20:38:39.014612   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10992.pem && ln -fs /usr/share/ca-certificates/10992.pem /etc/ssl/certs/10992.pem"
	I0310 20:38:39.117120   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/10992.pem
	I0310 20:38:39.167948   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 21:44 /usr/share/ca-certificates/10992.pem
	I0310 20:38:39.186536   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10992.pem
	I0310 20:38:39.234830   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10992.pem /etc/ssl/certs/51391683.0"
	I0310 20:38:39.297628   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5700.pem && ln -fs /usr/share/ca-certificates/5700.pem /etc/ssl/certs/5700.pem"
	I0310 20:38:39.470503   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5700.pem
	I0310 20:38:39.518873   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  1 19:58 /usr/share/ca-certificates/5700.pem
	I0310 20:38:39.530724   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5700.pem
	I0310 20:38:39.591183   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5700.pem /etc/ssl/certs/51391683.0"
	I0310 20:38:39.644568   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7432.pem && ln -fs /usr/share/ca-certificates/7432.pem /etc/ssl/certs/7432.pem"
	I0310 20:38:39.709994   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7432.pem
	I0310 20:38:39.781781   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 17:58 /usr/share/ca-certificates/7432.pem
	I0310 20:38:39.795235   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7432.pem
	I0310 20:38:39.844108   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7432.pem /etc/ssl/certs/51391683.0"
	I0310 20:38:39.935275   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3516.pem && ln -fs /usr/share/ca-certificates/3516.pem /etc/ssl/certs/3516.pem"
	I0310 20:38:39.990785   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3516.pem
	I0310 20:38:40.060311   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 19:10 /usr/share/ca-certificates/3516.pem
	I0310 20:38:40.071306   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3516.pem
	I0310 20:38:40.151079   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3516.pem /etc/ssl/certs/51391683.0"
	I0310 20:38:40.248105   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7440.pem && ln -fs /usr/share/ca-certificates/7440.pem /etc/ssl/certs/7440.pem"
	I0310 20:38:40.391078   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7440.pem
	I0310 20:38:40.444261   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 13 14:39 /usr/share/ca-certificates/7440.pem
	I0310 20:38:40.456498   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7440.pem
	I0310 20:38:40.651914   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7440.pem /etc/ssl/certs/51391683.0"
	I0310 20:38:40.742272   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3920.pem && ln -fs /usr/share/ca-certificates/3920.pem /etc/ssl/certs/3920.pem"
	I0310 20:38:40.826183   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3920.pem
	I0310 20:38:40.854666   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 22:06 /usr/share/ca-certificates/3920.pem
	I0310 20:38:40.873504   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3920.pem
	I0310 20:38:40.935156   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3920.pem /etc/ssl/certs/51391683.0"
	I0310 20:38:41.098965   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6692.pem && ln -fs /usr/share/ca-certificates/6692.pem /etc/ssl/certs/6692.pem"
	I0310 20:38:41.188996   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6692.pem
	I0310 20:38:41.216909   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 20:42 /usr/share/ca-certificates/6692.pem
	I0310 20:38:41.227348   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6692.pem
	I0310 20:38:41.301944   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6692.pem /etc/ssl/certs/51391683.0"
	I0310 20:38:41.359453   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8464.pem && ln -fs /usr/share/ca-certificates/8464.pem /etc/ssl/certs/8464.pem"
	I0310 20:38:41.446325   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8464.pem
	I0310 20:38:41.467802   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:32 /usr/share/ca-certificates/8464.pem
	I0310 20:38:41.490669   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8464.pem
	I0310 20:38:41.538081   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8464.pem /etc/ssl/certs/51391683.0"
	I0310 20:38:41.611965   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5172.pem && ln -fs /usr/share/ca-certificates/5172.pem /etc/ssl/certs/5172.pem"
	I0310 20:38:41.697192   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5172.pem
	I0310 20:38:41.727808   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 26 21:25 /usr/share/ca-certificates/5172.pem
	I0310 20:38:41.743328   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5172.pem
	I0310 20:38:41.784060   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5172.pem /etc/ssl/certs/51391683.0"
	I0310 20:38:41.872162   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6552.pem && ln -fs /usr/share/ca-certificates/6552.pem /etc/ssl/certs/6552.pem"
	I0310 20:38:41.970745   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6552.pem
	I0310 20:38:42.019120   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 19 22:08 /usr/share/ca-certificates/6552.pem
	I0310 20:38:42.034256   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6552.pem
	I0310 20:38:42.099032   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6552.pem /etc/ssl/certs/51391683.0"
	I0310 20:38:42.189621   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7452.pem && ln -fs /usr/share/ca-certificates/7452.pem /etc/ssl/certs/7452.pem"
	I0310 20:38:42.253016   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7452.pem
	I0310 20:38:42.302932   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 20 00:41 /usr/share/ca-certificates/7452.pem
	I0310 20:38:42.312412   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7452.pem
	I0310 20:38:42.375218   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7452.pem /etc/ssl/certs/51391683.0"
	I0310 20:38:42.465794   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7024.pem && ln -fs /usr/share/ca-certificates/7024.pem /etc/ssl/certs/7024.pem"
	I0310 20:38:42.526530   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7024.pem
	I0310 20:38:42.579667   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 23:11 /usr/share/ca-certificates/7024.pem
	I0310 20:38:42.593193   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7024.pem
	I0310 20:38:42.683463   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7024.pem /etc/ssl/certs/51391683.0"
	I0310 20:38:42.745328   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/232.pem && ln -fs /usr/share/ca-certificates/232.pem /etc/ssl/certs/232.pem"
	I0310 20:38:42.831277   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/232.pem
	I0310 20:38:42.863351   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 28 02:13 /usr/share/ca-certificates/232.pem
	I0310 20:38:42.874067   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/232.pem
	I0310 20:38:42.912673   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/232.pem /etc/ssl/certs/51391683.0"
	I0310 20:38:42.983366   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9088.pem && ln -fs /usr/share/ca-certificates/9088.pem /etc/ssl/certs/9088.pem"
	I0310 20:38:43.070051   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9088.pem
	I0310 20:38:43.119297   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 00:22 /usr/share/ca-certificates/9088.pem
	I0310 20:38:43.127803   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9088.pem
	I0310 20:38:43.176918   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9088.pem /etc/ssl/certs/51391683.0"
	I0310 20:38:43.264055   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9520.pem && ln -fs /usr/share/ca-certificates/9520.pem /etc/ssl/certs/9520.pem"
	I0310 20:38:43.406782   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9520.pem
	I0310 20:38:43.448439   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 14:54 /usr/share/ca-certificates/9520.pem
	I0310 20:38:43.464216   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9520.pem
	I0310 20:38:43.525795   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9520.pem /etc/ssl/certs/51391683.0"
	I0310 20:38:43.569623   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1984.pem && ln -fs /usr/share/ca-certificates/1984.pem /etc/ssl/certs/1984.pem"
	I0310 20:38:43.668467   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1984.pem
	I0310 20:38:43.699649   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 21:55 /usr/share/ca-certificates/1984.pem
	I0310 20:38:43.709808   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1984.pem
	I0310 20:38:43.758733   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1984.pem /etc/ssl/certs/51391683.0"
	I0310 20:38:43.823843   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8748.pem && ln -fs /usr/share/ca-certificates/8748.pem /etc/ssl/certs/8748.pem"
	I0310 20:38:43.958511   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8748.pem
	I0310 20:38:43.990953   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 19:09 /usr/share/ca-certificates/8748.pem
	I0310 20:38:44.009722   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8748.pem
	I0310 20:38:44.050739   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8748.pem /etc/ssl/certs/51391683.0"
	I0310 20:38:44.108980   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/352.pem && ln -fs /usr/share/ca-certificates/352.pem /etc/ssl/certs/352.pem"
	I0310 20:38:44.180112   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/352.pem
	I0310 20:38:44.206786   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 12 14:51 /usr/share/ca-certificates/352.pem
	I0310 20:38:44.226355   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/352.pem
	I0310 20:38:44.262695   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/352.pem /etc/ssl/certs/51391683.0"
	I0310 20:38:44.357000   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6856.pem && ln -fs /usr/share/ca-certificates/6856.pem /etc/ssl/certs/6856.pem"
	I0310 20:38:44.423884   21276 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6856.pem
	I0310 20:38:44.451832   21276 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 00:21 /usr/share/ca-certificates/6856.pem
	I0310 20:38:44.462413   21276 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6856.pem
	I0310 20:38:44.508700   21276 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6856.pem /etc/ssl/certs/51391683.0"
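The loop above repeatedly runs `openssl x509 -hash` and then creates a `<hash>.0` symlink under `/etc/ssl/certs`. That is OpenSSL's subject-name-hash lookup scheme: the library finds a CA by hashing the certificate subject and opening `<hash>.0` in the cert directory. A minimal sketch of the same technique, using a throwaway self-signed CA in a temp directory (all paths here are illustrative, not minikube's):

```shell
#!/usr/bin/env bash
# Sketch of the subject-hash symlink scheme the log exercises.
# Assumption: openssl is installed; dir/ca.pem are demo names only.
set -euo pipefail
dir=$(mktemp -d)

# Generate a throwaway self-signed CA certificate.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" -days 1 2>/dev/null

# OpenSSL locates CAs by subject-name hash: the file must be <hash>.0
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")

# Same idempotent pattern as the log: relink only if needed.
test -L "$dir/$hash.0" || ln -fs "$dir/ca.pem" "$dir/$hash.0"
ls -l "$dir/$hash.0"
```

The `test -L … || ln -fs …` guard seen throughout the log makes each pass idempotent, which is why the same `51391683.0` link is safely rewritten once per certificate.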
	I0310 20:38:44.586365   21276 kubeadm.go:385] StartCluster: {Name:force-systemd-flag-20210310203447-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:force-systemd-flag-20210310203447-6496 Namespace:default APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.4 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 20:38:44.602601   21276 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0310 20:38:45.040281   21276 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0310 20:38:45.159101   21276 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0310 20:38:45.270391   21276 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	I0310 20:38:45.282605   21276 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0310 20:38:45.434778   21276 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0310 20:38:45.434778   21276 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0310 20:38:57.778097   21276 out.go:150]   - Generating certificates and keys ...
	I0310 20:39:20.015769   21276 out.go:150]   - Booting up control plane ...
	W0310 20:43:28.283488   21276 out.go:191] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-20210310203447-6496 localhost] and IPs [172.17.0.4 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-20210310203447-6496 localhost] and IPs [172.17.0.4 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0310 20:43:28.286013   21276 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0310 20:45:10.988019   21276 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force": (1m42.7021393s)
	I0310 20:45:11.013289   21276 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0310 20:45:11.130763   21276 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0310 20:45:11.813937   21276 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	I0310 20:45:11.837929   21276 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0310 20:45:11.953618   21276 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0310 20:45:11.953618   21276 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0310 20:49:45.087379   21276 out.go:150]   - Generating certificates and keys ...
	I0310 20:49:45.097382   21276 out.go:150]   - Booting up control plane ...
	I0310 20:49:45.099203   21276 kubeadm.go:387] StartCluster complete in 11m0.5139164s
	I0310 20:49:45.106190   21276 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0310 20:49:54.265969   21276 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: (9.1597903s)
	I0310 20:49:54.265969   21276 logs.go:255] 1 containers: [74a7f1091b66]
	I0310 20:49:54.271398   21276 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0310 20:50:04.939146   21276 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: (10.6675121s)
	I0310 20:50:04.939433   21276 logs.go:255] 1 containers: [c80148ab5934]
	I0310 20:50:04.951481   21276 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0310 20:50:15.030731   21276 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: (10.0792631s)
	I0310 20:50:15.030960   21276 logs.go:255] 0 containers: []
	W0310 20:50:15.030960   21276 logs.go:257] No container was found matching "coredns"
	I0310 20:50:15.041911   21276 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0310 20:50:27.810565   21276 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: (12.7686699s)
	I0310 20:50:27.810937   21276 logs.go:255] 1 containers: [06b1fbc15f0b]
	I0310 20:50:27.820315   21276 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0310 20:50:33.351226   21276 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: (5.5309177s)
	I0310 20:50:33.351539   21276 logs.go:255] 0 containers: []
	W0310 20:50:33.351539   21276 logs.go:257] No container was found matching "kube-proxy"
	I0310 20:50:33.368163   21276 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0310 20:50:43.020890   21276 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}: (9.6527386s)
	I0310 20:50:43.020890   21276 logs.go:255] 0 containers: []
	W0310 20:50:43.020890   21276 logs.go:257] No container was found matching "kubernetes-dashboard"
	I0310 20:50:43.029422   21276 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0310 20:50:50.549377   21276 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: (7.5146535s)
	I0310 20:50:50.549377   21276 logs.go:255] 0 containers: []
	W0310 20:50:50.549377   21276 logs.go:257] No container was found matching "storage-provisioner"
	I0310 20:50:50.566257   21276 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0310 20:51:04.661962   21276 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: (14.0954782s)
	I0310 20:51:04.661962   21276 logs.go:255] 1 containers: [72ab123525fa]
	I0310 20:51:04.662399   21276 logs.go:122] Gathering logs for etcd [c80148ab5934] ...
	I0310 20:51:04.662579   21276 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 c80148ab5934"
	I0310 20:51:17.455768   21276 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 c80148ab5934": (12.7932053s)
	I0310 20:51:17.482450   21276 logs.go:122] Gathering logs for kube-controller-manager [72ab123525fa] ...
	I0310 20:51:17.482450   21276 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 72ab123525fa"
	I0310 20:51:23.509607   21276 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 72ab123525fa": (6.0271646s)
	I0310 20:51:23.512095   21276 logs.go:122] Gathering logs for kube-apiserver [74a7f1091b66] ...
	I0310 20:51:23.512386   21276 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 74a7f1091b66"
	I0310 20:51:36.209654   21276 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 74a7f1091b66": (12.6972854s)
	I0310 20:51:36.245040   21276 logs.go:122] Gathering logs for dmesg ...
	I0310 20:51:36.245040   21276 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0310 20:51:40.271545   21276 ssh_runner.go:189] Completed: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400": (4.0265101s)
	I0310 20:51:40.274822   21276 logs.go:122] Gathering logs for describe nodes ...
	I0310 20:51:40.274822   21276 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0310 20:52:23.575484   21276 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (43.3004567s)
	I0310 20:52:23.578225   21276 logs.go:122] Gathering logs for kube-scheduler [06b1fbc15f0b] ...
	I0310 20:52:23.578376   21276 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 06b1fbc15f0b"
	I0310 20:52:34.017628   21276 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 06b1fbc15f0b": (10.4392647s)
	I0310 20:52:34.040576   21276 logs.go:122] Gathering logs for Docker ...
	I0310 20:52:34.040576   21276 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0310 20:52:38.043492   21276 ssh_runner.go:189] Completed: /bin/bash -c "sudo journalctl -u docker -n 400": (4.0029214s)
	I0310 20:52:38.056033   21276 logs.go:122] Gathering logs for container status ...
	I0310 20:52:38.056283   21276 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0310 20:52:57.309348   21276 ssh_runner.go:189] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (19.2528124s)
	I0310 20:52:57.312128   21276 logs.go:122] Gathering logs for kubelet ...
	I0310 20:52:57.312128   21276 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0310 20:53:01.671142   21276 ssh_runner.go:189] Completed: /bin/bash -c "sudo journalctl -u kubelet -n 400": (4.3590196s)
	W0310 20:53:01.757982   21276 out.go:312] Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0310 20:53:01.758981   21276 out.go:191] * 
	W0310 20:53:01.758981   21276 out.go:191] X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0310 20:53:01.759985   21276 out.go:191] * 
	W0310 20:53:01.759985   21276 out.go:191] * minikube is exiting due to an error. If the above message is not useful, open an issue:
	W0310 20:53:01.759985   21276 out.go:191]   - https://github.com/kubernetes/minikube/issues/new/choose
	I0310 20:53:01.767040   21276 out.go:129] 
	W0310 20:53:01.767040   21276 out.go:191] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0310 20:53:01.767993   21276 out.go:191] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0310 20:53:01.767993   21276 out.go:191] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0310 20:53:01.770981   21276 out.go:129] 
** /stderr **
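The failure log above names two remediations: checking `journalctl -xeu kubelet`, and aligning the kubelet's cgroup driver with Docker's via `--extra-config=kubelet.cgroup-driver=systemd`. A minimal retry sketch of that second suggestion, assuming the same profile name as this run and a working local minikube/Docker setup (not re-run as part of this report):

```shell
# Inspect which cgroup driver the Docker daemon inside the node reports,
# then retry the start with the kubelet pinned to systemd, per the log's suggestion.
out/minikube-windows-amd64.exe -p force-systemd-flag-20210310203447-6496 ssh \
  "docker info --format {{.CgroupDriver}}"
out/minikube-windows-amd64.exe start -p force-systemd-flag-20210310203447-6496 \
  --extra-config=kubelet.cgroup-driver=systemd
```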
docker_test.go:85: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p force-systemd-flag-20210310203447-6496 --memory=1800 --force-systemd --alsologtostderr -v=5 --driver=docker" : exit status 109
docker_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-20210310203447-6496 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-20210310203447-6496 ssh "docker info --format {{.CgroupDriver}}": (41.559905s)
docker_test.go:96: *** TestForceSystemdFlag FAILED at 2021-03-10 20:53:44.4769204 +0000 GMT m=+6564.179678401
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:226: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:227: (dbg) Run:  docker inspect force-systemd-flag-20210310203447-6496
helpers_test.go:231: (dbg) docker inspect force-systemd-flag-20210310203447-6496:
-- stdout --
	[
	    {
	        "Id": "f1969d32f92f234263c9fd6a6ec8c282f304d7fce3c374c9c1cfc0bd45b04642",
	        "Created": "2021-03-10T20:35:06.5649423Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 185292,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-03-10T20:35:34.7845562Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a776c544501ab7f8d55c0f9d8df39bc284df5e744ef1ab4fa59bbd753c98d5f6",
	        "ResolvConfPath": "/var/lib/docker/containers/f1969d32f92f234263c9fd6a6ec8c282f304d7fce3c374c9c1cfc0bd45b04642/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f1969d32f92f234263c9fd6a6ec8c282f304d7fce3c374c9c1cfc0bd45b04642/hostname",
	        "HostsPath": "/var/lib/docker/containers/f1969d32f92f234263c9fd6a6ec8c282f304d7fce3c374c9c1cfc0bd45b04642/hosts",
	        "LogPath": "/var/lib/docker/containers/f1969d32f92f234263c9fd6a6ec8c282f304d7fce3c374c9c1cfc0bd45b04642/f1969d32f92f234263c9fd6a6ec8c282f304d7fce3c374c9c1cfc0bd45b04642-json.log",
	        "Name": "/force-systemd-flag-20210310203447-6496",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "force-systemd-flag-20210310203447-6496:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 1887436800,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 1887436800,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4bd4b216ad74107a220e67302f26799aaffe7c0420a97656e5534c12049f30cb-init/diff:/var/lib/docker/overlay2/e5312d15a8a915e93a04215cc746ab0f3ee777399eec34d39c62e1159ded6475/diff:/var/lib/docker/overlay2/7a0a882052451678339ecb9d7440ec6d5af2189761df37cbde268f01c77df460/diff:/var/lib/docker/overlay2/746bde091f748e661ba6c95c88447941058b3ed556ae8093cbbb4e946b8cc8fc/diff:/var/lib/docker/overlay2/fc816087ea38e2594891ac87d2dada91401ed5005ffde006568049c9be7a9cb3/diff:/var/lib/docker/overlay2/c938fe5d14accfe7fb1cfd038f5f0c36e7c4e36df3f270be6928a759dfc0318c/diff:/var/lib/docker/overlay2/79f1d61f3af3e525b3f9a25c7bdaddec8275b7d0abab23c045e084119ae755dc/diff:/var/lib/docker/overlay2/90cf1530750ebebdcb04a136dc1eff0ae9c5f7d0f826d2942e3bc67cc0d42206/diff:/var/lib/docker/overlay2/c4edb0a62bb52174a4a17530aa58139aaca1b2f67bef36b9ac59af93c93e2c94/diff:/var/lib/docker/overlay2/684c18bb05910b09352e8708238b2fa6512de91e323e59bd78eacb12954baeb1/diff:/var/lib/docker/overlay2/2e3b94
4d36fea0e106f5f8885b981d040914c224a656b6480e4ccf00c624527c/diff:/var/lib/docker/overlay2/87489103e2d5dffa2fc32cae802418fe57adcbe87d86e2828e51be8620ef72d6/diff:/var/lib/docker/overlay2/1ceb557df677600f55a42e739f85c2aaa6ce2a1311fb93d5cc655d3e5c25ad62/diff:/var/lib/docker/overlay2/a49e38adf04466e822fa1b2fef7a5de41bb43b706f64d6d440f7e9f95f86634d/diff:/var/lib/docker/overlay2/9dc51530b3628075c33d53a96d8db831d206f65190ca62a4730d525c9d2433bc/diff:/var/lib/docker/overlay2/53f43d443cc953bcbb3b9fe305be45d40de2233dd6a1e94a6e0120c143df01b9/diff:/var/lib/docker/overlay2/5449e801ebacbe613538db4de658f2419f742a0327f7c0f5079ee525f64c2f45/diff:/var/lib/docker/overlay2/ea06a62429770eade2e9df8b04495201586fe6a867b2d3f827db06031f237171/diff:/var/lib/docker/overlay2/d6c5e6e0bb04112b08a45e5a42f364eaa6a1d4d0384b8833c6f268a32b51f9fd/diff:/var/lib/docker/overlay2/6ccb699e5aa7954f17536e35273361a64a06f22e3951b5adad1acd3645302d27/diff:/var/lib/docker/overlay2/004fc7885cd3ddf4f532a6047a393b87947996f738f36ee3e048130b82676264/diff:/var/lib/d
ocker/overlay2/28ff0102771d8fb3c2957c482cc5dde1a6d041144f1be78d385fd4a305b89048/diff:/var/lib/docker/overlay2/7cc8b00a8717390d3cf217bc23ebf0a54386c4dda751f8efe67a113c5f724000/diff:/var/lib/docker/overlay2/4f20f5888dc26fc4b177fad4ffbb2c9e5094bbe36dffc15530b13ee98b5ee0de/diff:/var/lib/docker/overlay2/5070ba0e8a3df0c0cb4c34bb528c4e1aa4cf13d689f2bbbb0a4033d021c3be55/diff:/var/lib/docker/overlay2/85fb29f9d3603bf7f2eb144a096a9fa432e57ecf575b0c26cc8e06a79e791202/diff:/var/lib/docker/overlay2/27786674974d44dc21a18719c8c334da44a78f825923f5e86233f5acb0fb9a6b/diff:/var/lib/docker/overlay2/6161fd89f27732c46f547c0e38ff9f16ab0dded81cef66938df3935f21afad40/diff:/var/lib/docker/overlay2/222f85b8439a41d62b9939a4060303543930a89e81bc2fbb3f5211b3bcc73501/diff:/var/lib/docker/overlay2/0ab769dc143c949b9367c48721351c8571d87199847b23acedce0134a8cf4d0d/diff:/var/lib/docker/overlay2/a3d1b5f3c1ccf55415e08e306d69739f2d6a4346a4b20ff9273ac1417c146a72/diff:/var/lib/docker/overlay2/a6111408a4deb25c6259bdabf49b0b77a4ae38a1c9c76c9dd238ab00e43
77edd/diff:/var/lib/docker/overlay2/beb22abaee6a1cc2ce6935aa31a8ea6aa14ba428d35b5c2423d1d9e4b5b1e88e/diff:/var/lib/docker/overlay2/0c228b1d0cd4cc1d3df10a7397af2a91846bfbad745d623822dc200ff7ddd4f7/diff:/var/lib/docker/overlay2/3c7f265116e8b14ac4c171583da8e68ad42c63e9faa735c10d5b01c2889c03d8/diff:/var/lib/docker/overlay2/85c9b77072d5575bcf4bc334d0e85cccec247e2ff21bc8a181ee7c1411481a92/diff:/var/lib/docker/overlay2/1b4797f63f1bf0acad8cd18715c737239cbce844810baf1378b65224c01957ff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4bd4b216ad74107a220e67302f26799aaffe7c0420a97656e5534c12049f30cb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4bd4b216ad74107a220e67302f26799aaffe7c0420a97656e5534c12049f30cb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4bd4b216ad74107a220e67302f26799aaffe7c0420a97656e5534c12049f30cb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "force-systemd-flag-20210310203447-6496",
	                "Source": "/var/lib/docker/volumes/force-systemd-flag-20210310203447-6496/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-flag-20210310203447-6496",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-flag-20210310203447-6496",
	                "name.minikube.sigs.k8s.io": "force-systemd-flag-20210310203447-6496",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "865662c581ce540261e26fc078b209d2cbe6369f9fed56317f402db7129b2d56",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55127"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55124"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55126"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55125"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/865662c581ce",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "814e9c5d06704a1b28607d551634c32ccbe7a253dcc957d9f34532850ac2956b",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.4",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:04",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "08a9fabe5ecafdd8ee96dd0a14d1906037e82623ac7365c62552213da5f59cf7",
	                    "EndpointID": "814e9c5d06704a1b28607d551634c32ccbe7a253dcc957d9f34532850ac2956b",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.4",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:04",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-flag-20210310203447-6496 -n force-systemd-flag-20210310203447-6496
helpers_test.go:235: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-flag-20210310203447-6496 -n force-systemd-flag-20210310203447-6496: exit status 4 (21.5652956s)
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	E0310 20:53:57.008281   18564 status.go:396] kubeconfig endpoint: extract IP: "force-systemd-flag-20210310203447-6496" does not appear in C:\Users\jenkins/.kube/config
** /stderr **
helpers_test.go:235: status error: exit status 4 (may be ok)
helpers_test.go:237: "force-systemd-flag-20210310203447-6496" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
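The exit-status-4 result above stems from a kubeconfig whose entry for this profile is stale or missing, as the warning in the status output states. Its own named remedy, sketched with the profile name from this run (illustrative only; requires the same local environment as the test):

```shell
# Rewrite the kubeconfig entry for this profile to match the running cluster,
# then re-check status; `update-context` is the fix the warning itself names.
out/minikube-windows-amd64.exe update-context -p force-systemd-flag-20210310203447-6496
out/minikube-windows-amd64.exe status -p force-systemd-flag-20210310203447-6496
```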
helpers_test.go:171: Cleaning up "force-systemd-flag-20210310203447-6496" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-20210310203447-6496
helpers_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-20210310203447-6496: (30.2273915s)
--- FAIL: TestForceSystemdFlag (1189.23s)

TestForceSystemdEnv (2020.19s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-20210310201637-6496 --memory=1800 --alsologtostderr -v=5 --driver=docker
docker_test.go:131: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p force-systemd-env-20210310201637-6496 --memory=1800 --alsologtostderr -v=5 --driver=docker: exit status 1 (30m0.047009s)
-- stdout --
	* [force-systemd-env-20210310201637-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - MINIKUBE_LOCATION=10722
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	
	
	* Starting control plane node force-systemd-env-20210310201637-6496 in cluster force-systemd-env-20210310201637-6496
	* Creating docker container (CPUs=2, Memory=1800MB) ...
	* Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v4
	* Enabled addons: storage-provisioner, default-storageclass
-- /stdout --
** stderr ** 
	I0310 20:16:37.995664    6776 out.go:239] Setting OutFile to fd 2696 ...
	I0310 20:16:37.997706    6776 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 20:16:37.997706    6776 out.go:252] Setting ErrFile to fd 2936...
	I0310 20:16:37.997706    6776 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 20:16:38.015659    6776 out.go:246] Setting JSON to false
	I0310 20:16:38.018729    6776 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":32863,"bootTime":1615374535,"procs":112,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0310 20:16:38.018729    6776 start.go:116] gopshost.Virtualization returned error: not implemented yet
	I0310 20:16:38.024690    6776 out.go:129] * [force-systemd-env-20210310201637-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0310 20:16:38.027675    6776 out.go:129]   - MINIKUBE_LOCATION=10722
	I0310 20:16:38.032777    6776 out.go:129]   - MINIKUBE_FORCE_SYSTEMD=true
	I0310 20:16:38.040635    6776 driver.go:323] Setting default libvirt URI to qemu:///system
	I0310 20:16:38.706735    6776 docker.go:119] docker version: linux-20.10.2
	I0310 20:16:38.714388    6776 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 20:16:39.885891    6776 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.171507s)
	I0310 20:16:39.887840    6776 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:46 SystemTime:2021-03-10 20:16:39.4105083 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://ind
ex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[]
ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 20:16:39.896719    6776 out.go:129] * Using the docker driver based on user configuration
	I0310 20:16:39.897255    6776 start.go:276] selected driver: docker
	I0310 20:16:39.897448    6776 start.go:718] validating driver "docker" against <nil>
	I0310 20:16:39.897577    6776 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0310 20:16:41.016262    6776 out.go:129] 
	W0310 20:16:41.016262    6776 out.go:191] X Docker Desktop only has 20001MiB available, you may encounter application deployment failures.
	X Docker Desktop only has 20001MiB available, you may encounter application deployment failures.
	W0310 20:16:41.017247    6776 out.go:191] * Suggestion: 
	
	    1. Open the "Docker Desktop" menu by clicking the Docker icon in the system tray
	    2. Click "Settings"
	    3. Click "Resources"
	    4. Increase "Memory" slider bar to 2.25 GB or higher
	    5. Click "Apply & Restart"
	* Suggestion: 
	
	    1. Open the "Docker Desktop" menu by clicking the Docker icon in the system tray
	    2. Click "Settings"
	    3. Click "Resources"
	    4. Increase "Memory" slider bar to 2.25 GB or higher
	    5. Click "Apply & Restart"
	W0310 20:16:41.017247    6776 out.go:191] * Documentation: https://docs.docker.com/docker-for-windows/#resources
	* Documentation: https://docs.docker.com/docker-for-windows/#resources
	I0310 20:16:41.020287    6776 out.go:129] 
	I0310 20:16:41.035267    6776 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 20:16:42.133699    6776 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0984362s)
	I0310 20:16:42.134774    6776 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:46 SystemTime:2021-03-10 20:16:41.6025122 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://ind
ex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[]
ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 20:16:42.135444    6776 start_flags.go:253] no existing cluster config was found, will generate one from the flags 
	I0310 20:16:42.136365    6776 start_flags.go:699] Wait components to verify : map[apiserver:true system_pods:true]
	I0310 20:16:42.136770    6776 cni.go:74] Creating CNI manager for ""
	I0310 20:16:42.136770    6776 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	I0310 20:16:42.136770    6776 start_flags.go:398] config:
	{Name:force-systemd-env-20210310201637-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:force-systemd-env-20210310201637-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 20:16:42.141159    6776 out.go:129] * Starting control plane node force-systemd-env-20210310201637-6496 in cluster force-systemd-env-20210310201637-6496
	I0310 20:16:43.011972    6776 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	I0310 20:16:43.011972    6776 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	I0310 20:16:43.012343    6776 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	I0310 20:16:43.012343    6776 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	I0310 20:16:43.013107    6776 cache.go:54] Caching tarball of preloaded images
	I0310 20:16:43.013463    6776 preload.go:131] Found C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0310 20:16:43.013769    6776 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on docker
	I0310 20:16:43.015434    6776 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\force-systemd-env-20210310201637-6496\config.json ...
	I0310 20:16:43.015729    6776 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\force-systemd-env-20210310201637-6496\config.json: {Name:mkd749b9c495c64f5216afa3ac04a05dc40d21c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:16:43.038952    6776 cache.go:185] Successfully downloaded all kic artifacts
	I0310 20:16:43.041322    6776 start.go:313] acquiring machines lock for force-systemd-env-20210310201637-6496: {Name:mked11e22c26e51afe0f24bd42321d50da99b8c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:16:43.041719    6776 start.go:317] acquired machines lock for "force-systemd-env-20210310201637-6496" in 396.7µs
	I0310 20:16:43.042111    6776 start.go:89] Provisioning new machine with config: &{Name:force-systemd-env-20210310201637-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:force-systemd-env-20210310201637-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
	I0310 20:16:43.042111    6776 start.go:126] createHost starting for "" (driver="docker")
	I0310 20:16:43.046549    6776 out.go:150] * Creating docker container (CPUs=2, Memory=1800MB) ...
	I0310 20:16:43.048506    6776 start.go:160] libmachine.API.Create for "force-systemd-env-20210310201637-6496" (driver="docker")
	I0310 20:16:43.048506    6776 client.go:168] LocalClient.Create starting
	I0310 20:16:43.048961    6776 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\ca.pem
	I0310 20:16:43.048961    6776 main.go:121] libmachine: Decoding PEM data...
	I0310 20:16:43.048961    6776 main.go:121] libmachine: Parsing certificate...
	I0310 20:16:43.048961    6776 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\cert.pem
	I0310 20:16:43.048961    6776 main.go:121] libmachine: Decoding PEM data...
	I0310 20:16:43.048961    6776 main.go:121] libmachine: Parsing certificate...
	I0310 20:16:43.085872    6776 cli_runner.go:115] Run: docker network inspect force-systemd-env-20210310201637-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0310 20:16:43.930234    6776 cli_runner.go:162] docker network inspect force-systemd-env-20210310201637-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0310 20:16:43.947775    6776 network_create.go:240] running [docker network inspect force-systemd-env-20210310201637-6496] to gather additional debugging logs...
	I0310 20:16:43.947775    6776 cli_runner.go:115] Run: docker network inspect force-systemd-env-20210310201637-6496
	W0310 20:16:44.778875    6776 cli_runner.go:162] docker network inspect force-systemd-env-20210310201637-6496 returned with exit code 1
	I0310 20:16:44.779402    6776 network_create.go:243] error running [docker network inspect force-systemd-env-20210310201637-6496]: docker network inspect force-systemd-env-20210310201637-6496: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-env-20210310201637-6496
	I0310 20:16:44.779402    6776 network_create.go:245] output of [docker network inspect force-systemd-env-20210310201637-6496]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-env-20210310201637-6496
	
	** /stderr **
	I0310 20:16:44.799390    6776 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0310 20:16:45.733678    6776 network.go:193] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0310 20:16:45.734194    6776 network_create.go:91] attempt to create network 192.168.49.0/24 with subnet: force-systemd-env-20210310201637-6496 and gateway 192.168.49.1 and MTU of 1500 ...
	I0310 20:16:45.756009    6776 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20210310201637-6496
	I0310 20:16:47.682428    6776 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20210310201637-6496: (1.9261545s)
	I0310 20:16:47.683222    6776 kic.go:102] calculated static IP "192.168.49.97" for the "force-systemd-env-20210310201637-6496" container
	I0310 20:16:47.726722    6776 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0310 20:16:48.534532    6776 cli_runner.go:115] Run: docker volume create force-systemd-env-20210310201637-6496 --label name.minikube.sigs.k8s.io=force-systemd-env-20210310201637-6496 --label created_by.minikube.sigs.k8s.io=true
	I0310 20:16:49.238877    6776 oci.go:102] Successfully created a docker volume force-systemd-env-20210310201637-6496
	I0310 20:16:49.254147    6776 cli_runner.go:115] Run: docker run --rm --name force-systemd-env-20210310201637-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-20210310201637-6496 --entrypoint /usr/bin/test -v force-systemd-env-20210310201637-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib
	I0310 20:16:53.824888    6776 cli_runner.go:168] Completed: docker run --rm --name force-systemd-env-20210310201637-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-20210310201637-6496 --entrypoint /usr/bin/test -v force-systemd-env-20210310201637-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib: (4.5707555s)
	I0310 20:16:53.825361    6776 oci.go:106] Successfully prepared a docker volume force-systemd-env-20210310201637-6496
	I0310 20:16:53.825361    6776 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	I0310 20:16:53.825838    6776 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	I0310 20:16:53.826011    6776 kic.go:175] Starting extracting preloaded images to volume ...
	I0310 20:16:53.838680    6776 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 20:16:53.841637    6776 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-20210310201637-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir
	W0310 20:16:54.704514    6776 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-20210310201637-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I0310 20:16:54.704514    6776 kic.go:182] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-20210310201637-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: [binary-serialized System.Exception payload; message follows]
	The notification platform is unavailable.
	
	���?   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)
	   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.WPF\PromptShareDirectory.cs:line 53
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__8.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 95
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 55
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.HttpApi\Controllers\FilesharingController.cs:line 21
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()
	[remainder of binary-serialized exception payload omitted; recoverable fields: exception method Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(System.String), RestrictedDescription "The notification platform is unavailable."]
	See 'docker run --help'.
	I0310 20:16:55.105761    6776 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.267085s)
	I0310 20:16:55.107854    6776 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:49 SystemTime:2021-03-10 20:16:54.5495867 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 20:16:55.137198    6776 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0310 20:16:56.234015    6776 cli_runner.go:168] Completed: docker info --format "'{{json .SecurityOptions}}'": (1.0968204s)
	I0310 20:16:56.250335    6776 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-20210310201637-6496 --name force-systemd-env-20210310201637-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-20210310201637-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-20210310201637-6496 --network force-systemd-env-20210310201637-6496 --ip 192.168.49.97 --volume force-systemd-env-20210310201637-6496:/var --security-opt apparmor=unconfined --memory=1800mb --memory-swap=1800mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
	I0310 20:16:59.843068    6776 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-20210310201637-6496 --name force-systemd-env-20210310201637-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-20210310201637-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-20210310201637-6496 --network force-systemd-env-20210310201637-6496 --ip 192.168.49.97 --volume force-systemd-env-20210310201637-6496:/var --security-opt apparmor=unconfined --memory=1800mb --memory-swap=1800mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e: (3.5927451s)
	I0310 20:16:59.853033    6776 cli_runner.go:115] Run: docker container inspect force-systemd-env-20210310201637-6496 --format={{.State.Running}}
	I0310 20:17:00.444946    6776 cli_runner.go:115] Run: docker container inspect force-systemd-env-20210310201637-6496 --format={{.State.Status}}
	I0310 20:17:01.062688    6776 cli_runner.go:115] Run: docker exec force-systemd-env-20210310201637-6496 stat /var/lib/dpkg/alternatives/iptables
	I0310 20:17:02.677202    6776 cli_runner.go:168] Completed: docker exec force-systemd-env-20210310201637-6496 stat /var/lib/dpkg/alternatives/iptables: (1.6145187s)
	I0310 20:17:02.677528    6776 oci.go:278] the created container "force-systemd-env-20210310201637-6496" has a running status.
	I0310 20:17:02.677528    6776 kic.go:206] Creating ssh key for kic: C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa...
	I0310 20:17:03.079953    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0310 20:17:03.099968    6776 kic_runner.go:188] docker (temp): C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0310 20:17:05.655747    6776 cli_runner.go:115] Run: docker container inspect force-systemd-env-20210310201637-6496 --format={{.State.Status}}
	I0310 20:17:06.279204    6776 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0310 20:17:06.279204    6776 kic_runner.go:115] Args: [docker exec --privileged force-systemd-env-20210310201637-6496 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0310 20:17:08.252078    6776 kic_runner.go:124] Done: [docker exec --privileged force-systemd-env-20210310201637-6496 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.9668895s)
	I0310 20:17:08.255802    6776 kic.go:240] ensuring only current user has permissions to key file located at : C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa...
	I0310 20:17:09.155525    6776 cli_runner.go:115] Run: docker container inspect force-systemd-env-20210310201637-6496 --format={{.State.Status}}
	I0310 20:17:09.829498    6776 machine.go:88] provisioning docker machine ...
	I0310 20:17:09.829498    6776 ubuntu.go:169] provisioning hostname "force-systemd-env-20210310201637-6496"
	I0310 20:17:09.846884    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:17:10.569258    6776 main.go:121] libmachine: Using SSH client type: native
	I0310 20:17:10.588907    6776 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55096 <nil> <nil>}
	I0310 20:17:10.588907    6776 main.go:121] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-20210310201637-6496 && echo "force-systemd-env-20210310201637-6496" | sudo tee /etc/hostname
	I0310 20:17:13.004490    6776 main.go:121] libmachine: SSH cmd err, output: <nil>: force-systemd-env-20210310201637-6496
	
	I0310 20:17:13.021217    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:17:13.679291    6776 main.go:121] libmachine: Using SSH client type: native
	I0310 20:17:13.679291    6776 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55096 <nil> <nil>}
	I0310 20:17:13.679291    6776 main.go:121] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-20210310201637-6496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-20210310201637-6496/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-20210310201637-6496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0310 20:17:14.934005    6776 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	I0310 20:17:14.934135    6776 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	I0310 20:17:14.934135    6776 ubuntu.go:177] setting up certificates
	I0310 20:17:14.934135    6776 provision.go:83] configureAuth start
	I0310 20:17:14.949307    6776 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-20210310201637-6496
	I0310 20:17:15.614020    6776 provision.go:137] copyHostCerts
	I0310 20:17:15.614518    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\ca.pem -> C:\Users\jenkins\.minikube/ca.pem
	I0310 20:17:15.615698    6776 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	I0310 20:17:15.615698    6776 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	I0310 20:17:15.615698    6776 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	I0310 20:17:15.615698    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\cert.pem -> C:\Users\jenkins\.minikube/cert.pem
	I0310 20:17:15.615698    6776 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	I0310 20:17:15.615698    6776 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	I0310 20:17:15.615698    6776 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	I0310 20:17:15.615698    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\key.pem -> C:\Users\jenkins\.minikube/key.pem
	I0310 20:17:15.615698    6776 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	I0310 20:17:15.615698    6776 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	I0310 20:17:15.624829    6776 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	I0310 20:17:15.627793    6776 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.force-systemd-env-20210310201637-6496 san=[192.168.49.97 127.0.0.1 localhost 127.0.0.1 minikube force-systemd-env-20210310201637-6496]
	I0310 20:17:16.823202    6776 provision.go:165] copyRemoteCerts
	I0310 20:17:16.834208    6776 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0310 20:17:16.842899    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:17:17.506166    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:17:18.157789    6776 ssh_runner.go:189] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.3235853s)
	I0310 20:17:18.157789    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0310 20:17:18.158483    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0310 20:17:18.592997    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0310 20:17:18.594574    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1249 bytes)
	I0310 20:17:18.978640    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0310 20:17:18.985107    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0310 20:17:19.359136    6776 provision.go:86] duration metric: configureAuth took 4.425015s
	I0310 20:17:19.359136    6776 ubuntu.go:193] setting minikube options for container-runtime
	I0310 20:17:19.378980    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:17:20.055543    6776 main.go:121] libmachine: Using SSH client type: native
	I0310 20:17:20.055925    6776 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55096 <nil> <nil>}
	I0310 20:17:20.056252    6776 main.go:121] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0310 20:17:20.864181    6776 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0310 20:17:20.864181    6776 ubuntu.go:71] root file system type: overlay
	I0310 20:17:20.865169    6776 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	I0310 20:17:20.873177    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:17:21.608684    6776 main.go:121] libmachine: Using SSH client type: native
	I0310 20:17:21.612581    6776 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55096 <nil> <nil>}
	I0310 20:17:21.612581    6776 main.go:121] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0310 20:17:22.584569    6776 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0310 20:17:22.592101    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:17:23.276297    6776 main.go:121] libmachine: Using SSH client type: native
	I0310 20:17:23.277905    6776 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55096 <nil> <nil>}
	I0310 20:17:23.277905    6776 main.go:121] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0310 20:17:32.238452    6776 main.go:121] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-01-29 14:31:32.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-03-10 20:17:22.572110000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0310 20:17:32.238452    6776 machine.go:91] provisioned docker machine in 22.4090245s
	I0310 20:17:32.238452    6776 client.go:171] LocalClient.Create took 49.1901053s
	I0310 20:17:32.238452    6776 start.go:168] duration metric: libmachine.API.Create for "force-systemd-env-20210310201637-6496" took 49.1901053s
	I0310 20:17:32.238452    6776 start.go:267] post-start starting for "force-systemd-env-20210310201637-6496" (driver="docker")
	I0310 20:17:32.238452    6776 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0310 20:17:32.249229    6776 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0310 20:17:32.260780    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:17:32.921486    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:17:33.374916    6776 ssh_runner.go:189] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.1253245s)
	I0310 20:17:33.392875    6776 ssh_runner.go:149] Run: cat /etc/os-release
	I0310 20:17:33.458396    6776 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0310 20:17:33.458396    6776 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0310 20:17:33.458396    6776 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0310 20:17:33.458396    6776 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0310 20:17:33.458640    6776 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	I0310 20:17:33.458847    6776 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	I0310 20:17:33.460872    6776 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	I0310 20:17:33.461474    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> /etc/test/nested/copy/2512/hosts
	I0310 20:17:33.462687    6776 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	I0310 20:17:33.462687    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> /etc/test/nested/copy/4452/hosts
	I0310 20:17:33.477322    6776 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	I0310 20:17:33.526800    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	I0310 20:17:33.782011    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	I0310 20:17:33.987401    6776 start.go:270] post-start completed in 1.7489533s
	I0310 20:17:34.021097    6776 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-20210310201637-6496
	I0310 20:17:34.916220    6776 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\force-systemd-env-20210310201637-6496\config.json ...
	I0310 20:17:34.981958    6776 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0310 20:17:34.991875    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:17:35.690264    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:17:35.998285    6776 ssh_runner.go:189] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.0157928s)
	I0310 20:17:35.998640    6776 start.go:129] duration metric: createHost completed in 52.9563391s
	I0310 20:17:35.998640    6776 start.go:80] releasing machines lock for "force-systemd-env-20210310201637-6496", held for 52.9570912s
	I0310 20:17:36.008359    6776 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-20210310201637-6496
	I0310 20:17:36.829341    6776 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0310 20:17:36.841507    6776 ssh_runner.go:149] Run: systemctl --version
	I0310 20:17:36.848234    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:17:36.850186    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:17:37.730509    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:17:37.746417    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:17:38.491099    6776 ssh_runner.go:189] Completed: systemctl --version: (1.6495977s)
	I0310 20:17:38.491099    6776 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.6607872s)
	I0310 20:17:38.508489    6776 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0310 20:17:38.632930    6776 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 20:17:38.778320    6776 cruntime.go:206] skipping containerd shutdown because we are bound to it
	I0310 20:17:38.798122    6776 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0310 20:17:38.931975    6776 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0310 20:17:39.163345    6776 docker.go:329] Forcing docker to use systemd as cgroup manager...
	I0310 20:17:39.163580    6776 ssh_runner.go:316] scp memory --> /etc/docker/daemon.json (143 bytes)
	I0310 20:17:39.382697    6776 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0310 20:17:40.574407    6776 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.1917141s)
	I0310 20:17:40.585409    6776 ssh_runner.go:149] Run: sudo systemctl restart docker
	I0310 20:17:45.481381    6776 ssh_runner.go:189] Completed: sudo systemctl restart docker: (4.8959867s)
	I0310 20:17:45.501632    6776 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0310 20:17:46.340021    6776 out.go:150] * Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	I0310 20:17:46.358181    6776 cli_runner.go:115] Run: docker exec -t force-systemd-env-20210310201637-6496 dig +short host.docker.internal
	I0310 20:17:47.506315    6776 cli_runner.go:168] Completed: docker exec -t force-systemd-env-20210310201637-6496 dig +short host.docker.internal: (1.148137s)
	I0310 20:17:47.506315    6776 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	I0310 20:17:47.516994    6776 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0310 20:17:47.596205    6776 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0310 20:17:47.761882    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:17:48.374879    6776 localpath.go:92] copying C:\Users\jenkins\.minikube\client.crt -> C:\Users\jenkins\.minikube\profiles\force-systemd-env-20210310201637-6496\client.crt
	I0310 20:17:48.385295    6776 localpath.go:117] copying C:\Users\jenkins\.minikube\client.key -> C:\Users\jenkins\.minikube\profiles\force-systemd-env-20210310201637-6496\client.key
	I0310 20:17:48.389391    6776 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	I0310 20:17:48.389629    6776 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	I0310 20:17:48.396088    6776 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 20:17:48.745310    6776 docker.go:423] Got preloaded images: 
	I0310 20:17:48.745310    6776 docker.go:429] k8s.gcr.io/kube-proxy:v1.20.2 wasn't preloaded
	I0310 20:17:48.761281    6776 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0310 20:17:48.896889    6776 ssh_runner.go:149] Run: which lz4
	I0310 20:17:48.987092    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0310 20:17:48.989214    6776 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0310 20:17:49.091238    6776 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0310 20:17:49.091238    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (515083977 bytes)
	I0310 20:24:09.022462    6776 docker.go:388] Took 380.036171 seconds to copy over tarball
	I0310 20:24:09.036596    6776 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0310 20:24:49.341210    6776 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (40.3048632s)
	I0310 20:24:49.341210    6776 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0310 20:24:50.726950    6776 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0310 20:24:50.780897    6776 ssh_runner.go:316] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3125 bytes)
	I0310 20:24:50.872779    6776 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0310 20:24:51.285137    6776 ssh_runner.go:149] Run: sudo systemctl restart docker
	I0310 20:24:58.000100    6776 ssh_runner.go:189] Completed: sudo systemctl restart docker: (6.7150046s)
	I0310 20:24:58.018587    6776 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 20:24:58.745977    6776 docker.go:423] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.20.2
	k8s.gcr.io/kube-apiserver:v1.20.2
	k8s.gcr.io/kube-controller-manager:v1.20.2
	k8s.gcr.io/kube-scheduler:v1.20.2
	kubernetesui/dashboard:v2.1.0
	gcr.io/k8s-minikube/storage-provisioner:v4
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0310 20:24:58.746576    6776 cache_images.go:73] Images are preloaded, skipping loading
	I0310 20:24:58.754587    6776 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0310 20:24:59.988047    6776 ssh_runner.go:189] Completed: docker info --format {{.CgroupDriver}}: (1.233468s)
	I0310 20:24:59.988663    6776 cni.go:74] Creating CNI manager for ""
	I0310 20:24:59.988663    6776 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	I0310 20:24:59.988814    6776 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0310 20:24:59.988814    6776 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.97 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-20210310201637-6496 NodeName:force-systemd-env-20210310201637-6496 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.97 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0310 20:24:59.989556    6776 kubeadm.go:154] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.97
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "force-systemd-env-20210310201637-6496"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	
	I0310 20:24:59.990197    6776 kubeadm.go:919] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=force-systemd-env-20210310201637-6496 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.2 ClusterName:force-systemd-env-20210310201637-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0310 20:24:59.999908    6776 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
	I0310 20:25:00.120503    6776 binaries.go:44] Found k8s binaries, skipping transfer
	I0310 20:25:00.129316    6776 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0310 20:25:00.297385    6776 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I0310 20:25:00.444918    6776 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0310 20:25:00.605970    6776 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1871 bytes)
	I0310 20:25:00.791168    6776 ssh_runner.go:149] Run: grep 192.168.49.97	control-plane.minikube.internal$ /etc/hosts
	I0310 20:25:00.830661    6776 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "192.168.49.97	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0310 20:25:00.930774    6776 certs.go:52] Setting up C:\Users\jenkins\.minikube\profiles\force-systemd-env-20210310201637-6496 for IP: 192.168.49.97
	I0310 20:25:00.931606    6776 certs.go:171] skipping minikubeCA CA generation: C:\Users\jenkins\.minikube\ca.key
	I0310 20:25:00.931606    6776 certs.go:171] skipping proxyClientCA CA generation: C:\Users\jenkins\.minikube\proxy-client-ca.key
	I0310 20:25:00.932671    6776 certs.go:275] skipping minikube-user signed cert generation: C:\Users\jenkins\.minikube\profiles\force-systemd-env-20210310201637-6496\client.key
	I0310 20:25:00.932937    6776 certs.go:279] generating minikube signed cert: C:\Users\jenkins\.minikube\profiles\force-systemd-env-20210310201637-6496\apiserver.key.b6188fac
	I0310 20:25:00.933143    6776 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\force-systemd-env-20210310201637-6496\apiserver.crt.b6188fac with IP's: [192.168.49.97 10.96.0.1 127.0.0.1 10.0.0.1]
	I0310 20:25:01.118361    6776 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\force-systemd-env-20210310201637-6496\apiserver.crt.b6188fac ...
	I0310 20:25:01.118361    6776 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\force-systemd-env-20210310201637-6496\apiserver.crt.b6188fac: {Name:mk42f5be367e350ecd32a49f9cb4b2b3109c18d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:25:01.138376    6776 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\force-systemd-env-20210310201637-6496\apiserver.key.b6188fac ...
	I0310 20:25:01.138376    6776 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\force-systemd-env-20210310201637-6496\apiserver.key.b6188fac: {Name:mk5766673ddfa8a1515ac1441c3f8ef179a762ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:25:01.160054    6776 certs.go:290] copying C:\Users\jenkins\.minikube\profiles\force-systemd-env-20210310201637-6496\apiserver.crt.b6188fac -> C:\Users\jenkins\.minikube\profiles\force-systemd-env-20210310201637-6496\apiserver.crt
	I0310 20:25:01.164253    6776 certs.go:294] copying C:\Users\jenkins\.minikube\profiles\force-systemd-env-20210310201637-6496\apiserver.key.b6188fac -> C:\Users\jenkins\.minikube\profiles\force-systemd-env-20210310201637-6496\apiserver.key
	I0310 20:25:01.168380    6776 certs.go:279] generating aggregator signed cert: C:\Users\jenkins\.minikube\profiles\force-systemd-env-20210310201637-6496\proxy-client.key
	I0310 20:25:01.168635    6776 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\force-systemd-env-20210310201637-6496\proxy-client.crt with IP's: []
	I0310 20:25:01.476180    6776 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\force-systemd-env-20210310201637-6496\proxy-client.crt ...
	I0310 20:25:01.476338    6776 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\force-systemd-env-20210310201637-6496\proxy-client.crt: {Name:mke771183c379dcfb02b91b6b49899afb7525a0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:25:01.491228    6776 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\force-systemd-env-20210310201637-6496\proxy-client.key ...
	I0310 20:25:01.491228    6776 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\force-systemd-env-20210310201637-6496\proxy-client.key: {Name:mkd6bb4c3a60ce055f3e43471dece1723d423f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:25:01.506406    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\profiles\force-systemd-env-20210310201637-6496\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0310 20:25:01.507251    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\profiles\force-systemd-env-20210310201637-6496\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0310 20:25:01.507251    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\profiles\force-systemd-env-20210310201637-6496\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0310 20:25:01.508233    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\profiles\force-systemd-env-20210310201637-6496\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0310 20:25:01.508233    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0310 20:25:01.508233    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0310 20:25:01.508233    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0310 20:25:01.509285    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0310 20:25:01.509285    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992.pem (1338 bytes)
	W0310 20:25:01.510315    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.510315    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140.pem (1338 bytes)
	W0310 20:25:01.510315    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.511310    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156.pem (1338 bytes)
	W0310 20:25:01.511310    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.511310    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056.pem (1338 bytes)
	W0310 20:25:01.511310    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.511310    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476.pem (1338 bytes)
	W0310 20:25:01.512263    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.512263    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728.pem (1338 bytes)
	W0310 20:25:01.512263    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.512263    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984.pem (1338 bytes)
	W0310 20:25:01.513268    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.513268    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232.pem (1338 bytes)
	W0310 20:25:01.513268    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.513268    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512.pem (1338 bytes)
	W0310 20:25:01.514359    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.514359    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056.pem (1338 bytes)
	W0310 20:25:01.514359    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.514359    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516.pem (1338 bytes)
	W0310 20:25:01.514359    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.515286    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352.pem (1338 bytes)
	W0310 20:25:01.515286    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.515286    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920.pem (1338 bytes)
	W0310 20:25:01.515286    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.516264    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052.pem (1338 bytes)
	W0310 20:25:01.516264    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.516264    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452.pem (1338 bytes)
	W0310 20:25:01.516264    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.516264    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588.pem (1338 bytes)
	W0310 20:25:01.516264    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.516264    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944.pem (1338 bytes)
	W0310 20:25:01.516264    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.516264    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040.pem (1338 bytes)
	W0310 20:25:01.516264    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.516264    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172.pem (1338 bytes)
	W0310 20:25:01.516264    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.516264    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372.pem (1338 bytes)
	W0310 20:25:01.516264    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.516264    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396.pem (1338 bytes)
	W0310 20:25:01.516264    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.516264    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700.pem (1338 bytes)
	W0310 20:25:01.516264    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.516264    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736.pem (1338 bytes)
	W0310 20:25:01.516264    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.516264    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368.pem (1338 bytes)
	W0310 20:25:01.516264    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.516264    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492.pem (1338 bytes)
	W0310 20:25:01.516264    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.516264    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496.pem (1338 bytes)
	W0310 20:25:01.516264    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.516264    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552.pem (1338 bytes)
	W0310 20:25:01.516264    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.516264    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692.pem (1338 bytes)
	W0310 20:25:01.516264    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.516264    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856.pem (1338 bytes)
	W0310 20:25:01.516264    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.516264    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024.pem (1338 bytes)
	W0310 20:25:01.516264    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.516264    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160.pem (1338 bytes)
	W0310 20:25:01.516264    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.526035    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432.pem (1338 bytes)
	W0310 20:25:01.526703    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.526989    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440.pem (1338 bytes)
	W0310 20:25:01.527923    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.527923    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452.pem (1338 bytes)
	W0310 20:25:01.527923    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.527923    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800.pem (1338 bytes)
	W0310 20:25:01.527923    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.528926    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464.pem (1338 bytes)
	W0310 20:25:01.528926    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.528926    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748.pem (1338 bytes)
	W0310 20:25:01.528926    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.528926    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088.pem (1338 bytes)
	W0310 20:25:01.529905    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.529905    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520.pem (1338 bytes)
	W0310 20:25:01.529905    6776 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:01.529905    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca-key.pem (1679 bytes)
	I0310 20:25:01.529905    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca.pem (1078 bytes)
	I0310 20:25:01.530916    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\cert.pem (1123 bytes)
	I0310 20:25:01.530916    6776 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\key.pem (1679 bytes)
	I0310 20:25:01.530916    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\8464.pem -> /usr/share/ca-certificates/8464.pem
	I0310 20:25:01.530916    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\3516.pem -> /usr/share/ca-certificates/3516.pem
	I0310 20:25:01.531907    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\5736.pem -> /usr/share/ca-certificates/5736.pem
	I0310 20:25:01.531907    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\8748.pem -> /usr/share/ca-certificates/8748.pem
	I0310 20:25:01.531907    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\1728.pem -> /usr/share/ca-certificates/1728.pem
	I0310 20:25:01.531907    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\3056.pem -> /usr/share/ca-certificates/3056.pem
	I0310 20:25:01.531907    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0310 20:25:01.531907    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\352.pem -> /usr/share/ca-certificates/352.pem
	I0310 20:25:01.532910    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\9520.pem -> /usr/share/ca-certificates/9520.pem
	I0310 20:25:01.532910    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\5172.pem -> /usr/share/ca-certificates/5172.pem
	I0310 20:25:01.532910    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\7024.pem -> /usr/share/ca-certificates/7024.pem
	I0310 20:25:01.532910    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\4944.pem -> /usr/share/ca-certificates/4944.pem
	I0310 20:25:01.532910    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\4452.pem -> /usr/share/ca-certificates/4452.pem
	I0310 20:25:01.532910    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\800.pem -> /usr/share/ca-certificates/800.pem
	I0310 20:25:01.533916    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\6496.pem -> /usr/share/ca-certificates/6496.pem
	I0310 20:25:01.533916    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\4588.pem -> /usr/share/ca-certificates/4588.pem
	I0310 20:25:01.533916    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\6692.pem -> /usr/share/ca-certificates/6692.pem
	I0310 20:25:01.533916    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\1140.pem -> /usr/share/ca-certificates/1140.pem
	I0310 20:25:01.533916    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\5396.pem -> /usr/share/ca-certificates/5396.pem
	I0310 20:25:01.533916    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\5372.pem -> /usr/share/ca-certificates/5372.pem
	I0310 20:25:01.534901    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\7432.pem -> /usr/share/ca-certificates/7432.pem
	I0310 20:25:01.534901    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\5040.pem -> /usr/share/ca-certificates/5040.pem
	I0310 20:25:01.534901    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\10992.pem -> /usr/share/ca-certificates/10992.pem
	I0310 20:25:01.534901    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\3920.pem -> /usr/share/ca-certificates/3920.pem
	I0310 20:25:01.534901    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\2512.pem -> /usr/share/ca-certificates/2512.pem
	I0310 20:25:01.534901    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\7452.pem -> /usr/share/ca-certificates/7452.pem
	I0310 20:25:01.535936    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\9088.pem -> /usr/share/ca-certificates/9088.pem
	I0310 20:25:01.535936    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\6368.pem -> /usr/share/ca-certificates/6368.pem
	I0310 20:25:01.535936    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\12056.pem -> /usr/share/ca-certificates/12056.pem
	I0310 20:25:01.535936    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\6856.pem -> /usr/share/ca-certificates/6856.pem
	I0310 20:25:01.535936    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\7440.pem -> /usr/share/ca-certificates/7440.pem
	I0310 20:25:01.535936    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\1984.pem -> /usr/share/ca-certificates/1984.pem
	I0310 20:25:01.535936    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\7160.pem -> /usr/share/ca-certificates/7160.pem
	I0310 20:25:01.536919    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\232.pem -> /usr/share/ca-certificates/232.pem
	I0310 20:25:01.536919    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\6552.pem -> /usr/share/ca-certificates/6552.pem
	I0310 20:25:01.536919    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\4052.pem -> /usr/share/ca-certificates/4052.pem
	I0310 20:25:01.536919    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\1476.pem -> /usr/share/ca-certificates/1476.pem
	I0310 20:25:01.536919    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\5700.pem -> /usr/share/ca-certificates/5700.pem
	I0310 20:25:01.536919    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\6492.pem -> /usr/share/ca-certificates/6492.pem
	I0310 20:25:01.536919    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\1156.pem -> /usr/share/ca-certificates/1156.pem
	I0310 20:25:01.539917    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\force-systemd-env-20210310201637-6496\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0310 20:25:01.715408    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\force-systemd-env-20210310201637-6496\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0310 20:25:01.875042    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\force-systemd-env-20210310201637-6496\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0310 20:25:02.110636    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\force-systemd-env-20210310201637-6496\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0310 20:25:02.299238    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0310 20:25:02.592158    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0310 20:25:02.838526    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0310 20:25:03.128872    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0310 20:25:03.521643    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8464.pem --> /usr/share/ca-certificates/8464.pem (1338 bytes)
	I0310 20:25:03.702022    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3516.pem --> /usr/share/ca-certificates/3516.pem (1338 bytes)
	I0310 20:25:03.880966    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5736.pem --> /usr/share/ca-certificates/5736.pem (1338 bytes)
	I0310 20:25:04.166251    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8748.pem --> /usr/share/ca-certificates/8748.pem (1338 bytes)
	I0310 20:25:04.439088    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1728.pem --> /usr/share/ca-certificates/1728.pem (1338 bytes)
	I0310 20:25:04.701906    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3056.pem --> /usr/share/ca-certificates/3056.pem (1338 bytes)
	I0310 20:25:04.935586    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0310 20:25:05.124434    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\352.pem --> /usr/share/ca-certificates/352.pem (1338 bytes)
	I0310 20:25:05.366163    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9520.pem --> /usr/share/ca-certificates/9520.pem (1338 bytes)
	I0310 20:25:05.642143    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5172.pem --> /usr/share/ca-certificates/5172.pem (1338 bytes)
	I0310 20:25:05.863235    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7024.pem --> /usr/share/ca-certificates/7024.pem (1338 bytes)
	I0310 20:25:06.049820    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4944.pem --> /usr/share/ca-certificates/4944.pem (1338 bytes)
	I0310 20:25:06.209087    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4452.pem --> /usr/share/ca-certificates/4452.pem (1338 bytes)
	I0310 20:25:06.480155    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\800.pem --> /usr/share/ca-certificates/800.pem (1338 bytes)
	I0310 20:25:06.699608    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6496.pem --> /usr/share/ca-certificates/6496.pem (1338 bytes)
	I0310 20:25:07.033289    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4588.pem --> /usr/share/ca-certificates/4588.pem (1338 bytes)
	I0310 20:25:07.224060    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6692.pem --> /usr/share/ca-certificates/6692.pem (1338 bytes)
	I0310 20:25:07.487803    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1140.pem --> /usr/share/ca-certificates/1140.pem (1338 bytes)
	I0310 20:25:07.739081    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5396.pem --> /usr/share/ca-certificates/5396.pem (1338 bytes)
	I0310 20:25:08.040271    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5372.pem --> /usr/share/ca-certificates/5372.pem (1338 bytes)
	I0310 20:25:08.305607    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7432.pem --> /usr/share/ca-certificates/7432.pem (1338 bytes)
	I0310 20:25:08.481106    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5040.pem --> /usr/share/ca-certificates/5040.pem (1338 bytes)
	I0310 20:25:08.777650    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\10992.pem --> /usr/share/ca-certificates/10992.pem (1338 bytes)
	I0310 20:25:09.011680    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3920.pem --> /usr/share/ca-certificates/3920.pem (1338 bytes)
	I0310 20:25:09.244854    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\2512.pem --> /usr/share/ca-certificates/2512.pem (1338 bytes)
	I0310 20:25:09.506038    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	I0310 20:25:09.723672    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9088.pem --> /usr/share/ca-certificates/9088.pem (1338 bytes)
	I0310 20:25:09.950662    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6368.pem --> /usr/share/ca-certificates/6368.pem (1338 bytes)
	I0310 20:25:10.174609    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\12056.pem --> /usr/share/ca-certificates/12056.pem (1338 bytes)
	I0310 20:25:10.437689    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6856.pem --> /usr/share/ca-certificates/6856.pem (1338 bytes)
	I0310 20:25:10.683818    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7440.pem --> /usr/share/ca-certificates/7440.pem (1338 bytes)
	I0310 20:25:10.979202    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1984.pem --> /usr/share/ca-certificates/1984.pem (1338 bytes)
	I0310 20:25:11.213087    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7160.pem --> /usr/share/ca-certificates/7160.pem (1338 bytes)
	I0310 20:25:11.564852    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\232.pem --> /usr/share/ca-certificates/232.pem (1338 bytes)
	I0310 20:25:11.953501    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6552.pem --> /usr/share/ca-certificates/6552.pem (1338 bytes)
	I0310 20:25:12.152503    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4052.pem --> /usr/share/ca-certificates/4052.pem (1338 bytes)
	I0310 20:25:12.397325    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1476.pem --> /usr/share/ca-certificates/1476.pem (1338 bytes)
	I0310 20:25:12.785094    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5700.pem --> /usr/share/ca-certificates/5700.pem (1338 bytes)
	I0310 20:25:13.042313    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6492.pem --> /usr/share/ca-certificates/6492.pem (1338 bytes)
	I0310 20:25:13.282159    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1156.pem --> /usr/share/ca-certificates/1156.pem (1338 bytes)
	I0310 20:25:13.603383    6776 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0310 20:25:13.755141    6776 ssh_runner.go:149] Run: openssl version
	I0310 20:25:13.827623    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8464.pem && ln -fs /usr/share/ca-certificates/8464.pem /etc/ssl/certs/8464.pem"
	I0310 20:25:13.936122    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8464.pem
	I0310 20:25:13.957343    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:32 /usr/share/ca-certificates/8464.pem
	I0310 20:25:13.968600    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8464.pem
	I0310 20:25:14.014579    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8464.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:14.064753    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5736.pem && ln -fs /usr/share/ca-certificates/5736.pem /etc/ssl/certs/5736.pem"
	I0310 20:25:14.158573    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5736.pem
	I0310 20:25:14.176284    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 25 23:18 /usr/share/ca-certificates/5736.pem
	I0310 20:25:14.197766    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5736.pem
	I0310 20:25:14.256032    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5736.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:14.338906    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8748.pem && ln -fs /usr/share/ca-certificates/8748.pem /etc/ssl/certs/8748.pem"
	I0310 20:25:14.420107    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8748.pem
	I0310 20:25:14.448226    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 19:09 /usr/share/ca-certificates/8748.pem
	I0310 20:25:14.469563    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8748.pem
	I0310 20:25:14.517448    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8748.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:14.616453    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1728.pem && ln -fs /usr/share/ca-certificates/1728.pem /etc/ssl/certs/1728.pem"
	I0310 20:25:14.758181    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1728.pem
	I0310 20:25:14.785137    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 20:57 /usr/share/ca-certificates/1728.pem
	I0310 20:25:14.805259    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1728.pem
	I0310 20:25:14.869820    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:14.954181    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3056.pem && ln -fs /usr/share/ca-certificates/3056.pem /etc/ssl/certs/3056.pem"
	I0310 20:25:15.114138    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3056.pem
	I0310 20:25:15.146767    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:11 /usr/share/ca-certificates/3056.pem
	I0310 20:25:15.155524    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3056.pem
	I0310 20:25:15.237487    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3056.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:15.308424    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0310 20:25:15.408112    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0310 20:25:15.438123    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Jan  5 23:33 /usr/share/ca-certificates/minikubeCA.pem
	I0310 20:25:15.445025    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0310 20:25:15.514450    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0310 20:25:15.618888    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3516.pem && ln -fs /usr/share/ca-certificates/3516.pem /etc/ssl/certs/3516.pem"
	I0310 20:25:15.820638    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3516.pem
	I0310 20:25:15.889399    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 19:10 /usr/share/ca-certificates/3516.pem
	I0310 20:25:15.898389    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3516.pem
	I0310 20:25:15.955506    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3516.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:16.071407    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/352.pem && ln -fs /usr/share/ca-certificates/352.pem /etc/ssl/certs/352.pem"
	I0310 20:25:16.213304    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/352.pem
	I0310 20:25:16.320513    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 12 14:51 /usr/share/ca-certificates/352.pem
	I0310 20:25:16.342794    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/352.pem
	I0310 20:25:16.396230    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/352.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:16.514801    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9520.pem && ln -fs /usr/share/ca-certificates/9520.pem /etc/ssl/certs/9520.pem"
	I0310 20:25:16.592428    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9520.pem
	I0310 20:25:16.632764    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 14:54 /usr/share/ca-certificates/9520.pem
	I0310 20:25:16.641748    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9520.pem
	I0310 20:25:16.778772    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9520.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:16.868680    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5172.pem && ln -fs /usr/share/ca-certificates/5172.pem /etc/ssl/certs/5172.pem"
	I0310 20:25:16.977605    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5172.pem
	I0310 20:25:17.104768    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 26 21:25 /usr/share/ca-certificates/5172.pem
	I0310 20:25:17.119531    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5172.pem
	I0310 20:25:17.237149    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5172.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:17.356811    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7024.pem && ln -fs /usr/share/ca-certificates/7024.pem /etc/ssl/certs/7024.pem"
	I0310 20:25:17.475834    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7024.pem
	I0310 20:25:17.524096    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 23:11 /usr/share/ca-certificates/7024.pem
	I0310 20:25:17.550127    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7024.pem
	I0310 20:25:17.625607    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7024.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:17.878435    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4944.pem && ln -fs /usr/share/ca-certificates/4944.pem /etc/ssl/certs/4944.pem"
	I0310 20:25:18.000926    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4944.pem
	I0310 20:25:18.041140    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  9 23:40 /usr/share/ca-certificates/4944.pem
	I0310 20:25:18.068931    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4944.pem
	I0310 20:25:18.157525    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4944.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:18.329055    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4452.pem && ln -fs /usr/share/ca-certificates/4452.pem /etc/ssl/certs/4452.pem"
	I0310 20:25:18.513155    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4452.pem
	I0310 20:25:18.651595    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 22 23:48 /usr/share/ca-certificates/4452.pem
	I0310 20:25:18.662276    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4452.pem
	I0310 20:25:18.749451    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4452.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:18.908855    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/800.pem && ln -fs /usr/share/ca-certificates/800.pem /etc/ssl/certs/800.pem"
	I0310 20:25:19.039062    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/800.pem
	I0310 20:25:19.103613    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 24 01:48 /usr/share/ca-certificates/800.pem
	I0310 20:25:19.137997    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/800.pem
	I0310 20:25:19.250318    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/800.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:19.375686    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4588.pem && ln -fs /usr/share/ca-certificates/4588.pem /etc/ssl/certs/4588.pem"
	I0310 20:25:19.524196    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4588.pem
	I0310 20:25:19.557582    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  3 21:41 /usr/share/ca-certificates/4588.pem
	I0310 20:25:19.571973    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4588.pem
	I0310 20:25:19.656172    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4588.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:19.788917    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6692.pem && ln -fs /usr/share/ca-certificates/6692.pem /etc/ssl/certs/6692.pem"
	I0310 20:25:19.951413    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6692.pem
	I0310 20:25:19.997228    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 20:42 /usr/share/ca-certificates/6692.pem
	I0310 20:25:20.032112    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6692.pem
	I0310 20:25:20.110688    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6692.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:20.411071    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1140.pem && ln -fs /usr/share/ca-certificates/1140.pem /etc/ssl/certs/1140.pem"
	I0310 20:25:20.608391    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1140.pem
	I0310 20:25:20.690660    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 02:25 /usr/share/ca-certificates/1140.pem
	I0310 20:25:20.707920    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1140.pem
	I0310 20:25:20.839886    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1140.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:20.973246    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5396.pem && ln -fs /usr/share/ca-certificates/5396.pem /etc/ssl/certs/5396.pem"
	I0310 20:25:21.192604    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5396.pem
	I0310 20:25:21.247616    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  8 23:38 /usr/share/ca-certificates/5396.pem
	I0310 20:25:21.316848    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5396.pem
	I0310 20:25:21.416799    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5396.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:21.553677    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5372.pem && ln -fs /usr/share/ca-certificates/5372.pem /etc/ssl/certs/5372.pem"
	I0310 20:25:21.635738    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5372.pem
	I0310 20:25:21.680941    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 23 00:40 /usr/share/ca-certificates/5372.pem
	I0310 20:25:21.692523    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5372.pem
	I0310 20:25:21.830436    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5372.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:21.954653    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6496.pem && ln -fs /usr/share/ca-certificates/6496.pem /etc/ssl/certs/6496.pem"
	I0310 20:25:22.098329    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6496.pem
	I0310 20:25:22.153660    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 19:16 /usr/share/ca-certificates/6496.pem
	I0310 20:25:22.170423    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6496.pem
	I0310 20:25:22.328905    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6496.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:22.459696    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3920.pem && ln -fs /usr/share/ca-certificates/3920.pem /etc/ssl/certs/3920.pem"
	I0310 20:25:22.535435    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3920.pem
	I0310 20:25:22.583309    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 22:06 /usr/share/ca-certificates/3920.pem
	I0310 20:25:22.596549    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3920.pem
	I0310 20:25:22.761505    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3920.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:22.877867    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2512.pem && ln -fs /usr/share/ca-certificates/2512.pem /etc/ssl/certs/2512.pem"
	I0310 20:25:23.029204    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2512.pem
	I0310 20:25:23.073152    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:32 /usr/share/ca-certificates/2512.pem
	I0310 20:25:23.096574    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2512.pem
	I0310 20:25:23.163942    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2512.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:23.374742    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7452.pem && ln -fs /usr/share/ca-certificates/7452.pem /etc/ssl/certs/7452.pem"
	I0310 20:25:23.538375    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7452.pem
	I0310 20:25:23.618230    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 20 00:41 /usr/share/ca-certificates/7452.pem
	I0310 20:25:23.622845    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7452.pem
	I0310 20:25:23.722574    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7452.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:23.840517    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9088.pem && ln -fs /usr/share/ca-certificates/9088.pem /etc/ssl/certs/9088.pem"
	I0310 20:25:23.964595    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9088.pem
	I0310 20:25:24.035366    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 00:22 /usr/share/ca-certificates/9088.pem
	I0310 20:25:24.070134    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9088.pem
	I0310 20:25:24.157111    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9088.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:24.307946    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6368.pem && ln -fs /usr/share/ca-certificates/6368.pem /etc/ssl/certs/6368.pem"
	I0310 20:25:24.466832    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6368.pem
	I0310 20:25:24.536151    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 19:11 /usr/share/ca-certificates/6368.pem
	I0310 20:25:24.545931    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6368.pem
	I0310 20:25:24.593514    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6368.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:24.736204    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7432.pem && ln -fs /usr/share/ca-certificates/7432.pem /etc/ssl/certs/7432.pem"
	I0310 20:25:24.981732    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7432.pem
	I0310 20:25:25.101298    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 17:58 /usr/share/ca-certificates/7432.pem
	I0310 20:25:25.113466    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7432.pem
	I0310 20:25:25.232089    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7432.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:25.403895    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5040.pem && ln -fs /usr/share/ca-certificates/5040.pem /etc/ssl/certs/5040.pem"
	I0310 20:25:25.483667    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5040.pem
	I0310 20:25:25.511237    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 08:36 /usr/share/ca-certificates/5040.pem
	I0310 20:25:25.525346    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5040.pem
	I0310 20:25:25.611195    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5040.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:25.770982    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10992.pem && ln -fs /usr/share/ca-certificates/10992.pem /etc/ssl/certs/10992.pem"
	I0310 20:25:25.928987    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/10992.pem
	I0310 20:25:26.001300    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 21:44 /usr/share/ca-certificates/10992.pem
	I0310 20:25:26.021213    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10992.pem
	I0310 20:25:26.107083    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10992.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:26.309625    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12056.pem && ln -fs /usr/share/ca-certificates/12056.pem /etc/ssl/certs/12056.pem"
	I0310 20:25:26.424428    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/12056.pem
	I0310 20:25:26.467802    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  6 07:21 /usr/share/ca-certificates/12056.pem
	I0310 20:25:26.488257    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12056.pem
	I0310 20:25:26.592500    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12056.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:26.807716    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6856.pem && ln -fs /usr/share/ca-certificates/6856.pem /etc/ssl/certs/6856.pem"
	I0310 20:25:26.921159    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6856.pem
	I0310 20:25:26.974985    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 00:21 /usr/share/ca-certificates/6856.pem
	I0310 20:25:26.977426    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6856.pem
	I0310 20:25:27.131728    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6856.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:27.253264    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7440.pem && ln -fs /usr/share/ca-certificates/7440.pem /etc/ssl/certs/7440.pem"
	I0310 20:25:27.398189    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7440.pem
	I0310 20:25:27.454460    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 13 14:39 /usr/share/ca-certificates/7440.pem
	I0310 20:25:27.473478    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7440.pem
	I0310 20:25:27.565909    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7440.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:27.655800    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1984.pem && ln -fs /usr/share/ca-certificates/1984.pem /etc/ssl/certs/1984.pem"
	I0310 20:25:27.769404    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1984.pem
	I0310 20:25:27.814344    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 21:55 /usr/share/ca-certificates/1984.pem
	I0310 20:25:27.828809    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1984.pem
	I0310 20:25:27.926420    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1984.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:28.059014    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7160.pem && ln -fs /usr/share/ca-certificates/7160.pem /etc/ssl/certs/7160.pem"
	I0310 20:25:28.152913    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7160.pem
	I0310 20:25:28.212228    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 12 04:51 /usr/share/ca-certificates/7160.pem
	I0310 20:25:28.220646    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7160.pem
	I0310 20:25:28.403736    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7160.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:28.503395    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4052.pem && ln -fs /usr/share/ca-certificates/4052.pem /etc/ssl/certs/4052.pem"
	I0310 20:25:28.629906    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4052.pem
	I0310 20:25:28.738062    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 18:40 /usr/share/ca-certificates/4052.pem
	I0310 20:25:28.750689    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4052.pem
	I0310 20:25:28.812806    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4052.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:28.947595    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1476.pem && ln -fs /usr/share/ca-certificates/1476.pem /etc/ssl/certs/1476.pem"
	I0310 20:25:29.181355    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1476.pem
	I0310 20:25:29.258812    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:50 /usr/share/ca-certificates/1476.pem
	I0310 20:25:29.270767    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1476.pem
	I0310 20:25:29.389078    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1476.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:29.526267    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5700.pem && ln -fs /usr/share/ca-certificates/5700.pem /etc/ssl/certs/5700.pem"
	I0310 20:25:29.627853    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5700.pem
	I0310 20:25:29.696478    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  1 19:58 /usr/share/ca-certificates/5700.pem
	I0310 20:25:29.722318    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5700.pem
	I0310 20:25:29.843184    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5700.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:29.969548    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6492.pem && ln -fs /usr/share/ca-certificates/6492.pem /etc/ssl/certs/6492.pem"
	I0310 20:25:30.069220    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6492.pem
	I0310 20:25:30.142709    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 01:11 /usr/share/ca-certificates/6492.pem
	I0310 20:25:30.154140    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6492.pem
	I0310 20:25:30.269045    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6492.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:30.460324    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1156.pem && ln -fs /usr/share/ca-certificates/1156.pem /etc/ssl/certs/1156.pem"
	I0310 20:25:30.560046    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1156.pem
	I0310 20:25:30.667151    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 00:26 /usr/share/ca-certificates/1156.pem
	I0310 20:25:30.683664    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1156.pem
	I0310 20:25:30.758440    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1156.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:30.946359    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/232.pem && ln -fs /usr/share/ca-certificates/232.pem /etc/ssl/certs/232.pem"
	I0310 20:25:31.032539    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/232.pem
	I0310 20:25:31.093772    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 28 02:13 /usr/share/ca-certificates/232.pem
	I0310 20:25:31.106381    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/232.pem
	I0310 20:25:31.235394    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/232.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:31.419454    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6552.pem && ln -fs /usr/share/ca-certificates/6552.pem /etc/ssl/certs/6552.pem"
	I0310 20:25:31.547919    6776 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6552.pem
	I0310 20:25:31.571799    6776 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 19 22:08 /usr/share/ca-certificates/6552.pem
	I0310 20:25:31.578859    6776 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6552.pem
	I0310 20:25:31.685094    6776 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6552.pem /etc/ssl/certs/51391683.0"
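The block above repeats one pattern per certificate: install the `.pem` if it is non-empty (`test -s … && ln -fs`), hash it with `openssl x509 -hash -noout`, then create the `<subject-hash>.0` link in `/etc/ssl/certs` only if it is not already present (`test -L … || ln -fs`), which keeps the step idempotent across re-runs. A minimal sketch of that link logic, run against a scratch directory rather than `/etc/ssl/certs`, with a fixed stand-in hash so no real certificate or `sudo` is needed:

```shell
# Scratch-directory sketch of the per-certificate setup the log repeats.
certs=$(mktemp -d)
pem="$certs/4944.pem"
printf 'dummy cert body\n' > "$pem"   # stand-in for the real .pem
hash=51391683                         # in the real flow: $(openssl x509 -hash -noout -in "$pem")
# test -s: only install certs that exist and are non-empty
test -s "$pem" && ln -fs "$pem" "$certs/installed.pem"
# test -L || ln -fs: create the hash link only if absent (idempotent)
test -L "$certs/$hash.0" || ln -fs "$certs/installed.pem" "$certs/$hash.0"
readlink "$certs/$hash.0"
```

The `<hash>.0` name is what OpenSSL-based TLS stacks look up when verifying a peer against the system trust store.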
	I0310 20:25:31.786077    6776 kubeadm.go:385] StartCluster: {Name:force-systemd-env-20210310201637-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:force-systemd-env-20210310201637-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.97 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 20:25:31.794970    6776 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0310 20:25:32.618197    6776 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0310 20:25:32.863086    6776 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0310 20:25:32.993839    6776 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	I0310 20:25:33.008792    6776 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0310 20:25:33.140211    6776 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0310 20:25:33.140760    6776 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0310 20:30:10.410962    6776 out.go:150]   - Generating certificates and keys ...
	I0310 20:30:10.416921    6776 out.go:150]   - Booting up control plane ...
	I0310 20:30:10.423780    6776 out.go:150]   - Configuring RBAC rules ...
	I0310 20:30:10.429290    6776 cni.go:74] Creating CNI manager for ""
	I0310 20:30:10.429290    6776 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	I0310 20:30:10.429290    6776 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0310 20:30:10.440864    6776 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0310 20:30:10.442894    6776 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.18.1 minikube.k8s.io/commit=4d52d607da107e3e4541e40b81376520ee87d4c2 minikube.k8s.io/name=force-systemd-env-20210310201637-6496 minikube.k8s.io/updated_at=2021_03_10T20_30_10_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0310 20:30:48.074848    6776 ssh_runner.go:189] Completed: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": (37.6452165s)
	I0310 20:30:48.074848    6776 ops.go:34] apiserver oom_adj: -16
	I0310 20:30:48.074848    6776 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (37.6340826s)
	I0310 20:30:48.074848    6776 kubeadm.go:995] duration metric: took 37.6452165s to wait for elevateKubeSystemPrivileges.
	I0310 20:30:48.123711    6776 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.18.1 minikube.k8s.io/commit=4d52d607da107e3e4541e40b81376520ee87d4c2 minikube.k8s.io/name=force-systemd-env-20210310201637-6496 minikube.k8s.io/updated_at=2021_03_10T20_30_10_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (37.6804633s)
	I0310 20:30:48.123711    6776 kubeadm.go:387] StartCluster complete in 5m16.3388181s
	I0310 20:30:48.123711    6776 settings.go:142] acquiring lock: {Name:mk153ab5d002fd4991700e22f3eda9a43ee295f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:30:48.123711    6776 settings.go:150] Updating kubeconfig:  C:\Users\jenkins/.kube/config
	I0310 20:30:48.123711    6776 lock.go:36] WriteFile acquiring C:\Users\jenkins/.kube/config: {Name:mk815881434fcb75cb1a8bc7f2c24cf067660bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:30:48.154970    6776 kapi.go:59] client config for force-systemd-env-20210310201637-6496: &rest.Config{Host:"https://127.0.0.1:55090", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins\\.minikube\\profiles\\force-systemd-env-20210310201637-6496\\client.crt", KeyFile:"C:\\Users\\jenkins\\.minikube\\profiles\\force-systemd-env-20210310201637-6496\\client.key", CAFile:"C:\\Users\\jenkins\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2611020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)}
	I0310 20:30:48.681323    6776 kapi.go:233] deployment "coredns" in namespace "kube-system" and context "force-systemd-env-20210310201637-6496" rescaled to 1
	I0310 20:30:48.682057    6776 start.go:203] Will wait 6m0s for node up to 
	I0310 20:30:48.682057    6776 addons.go:381] enableAddons start: toEnable=map[], additional=[]
	I0310 20:30:48.682057    6776 addons.go:58] Setting storage-provisioner=true in profile "force-systemd-env-20210310201637-6496"
	I0310 20:30:48.684230    6776 out.go:129] * Verifying Kubernetes components...
	I0310 20:30:48.682057    6776 addons.go:134] Setting addon storage-provisioner=true in "force-systemd-env-20210310201637-6496"
	W0310 20:30:48.685258    6776 addons.go:143] addon storage-provisioner should already be in state true
	I0310 20:30:48.686252    6776 host.go:66] Checking if "force-systemd-env-20210310201637-6496" exists ...
	I0310 20:30:48.688979    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210303214129-4588 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588
	I0310 20:30:48.688979    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210105233232-2512 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512
	I0310 20:30:48.688979    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210126212539-5172 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172
	I0310 20:30:48.689760    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107002220-9088 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088
	I0310 20:30:48.689760    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106011107-6492 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492
	I0310 20:30:48.690551    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120175851-7432 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432
	I0310 20:30:48.690977    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210123004019-5372 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372
	I0310 20:30:48.690977    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210308233820-5396 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396
	I0310 20:30:48.691716    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304002630-1156 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156
	I0310 20:30:48.688979    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210128021318-232 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232
	I0310 20:30:48.693650    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210212145109-352 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352
	I0310 20:30:48.693650    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210114204234-6692 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692
	I0310 20:30:48.693650    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210112045103-7160 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160
	I0310 20:30:48.694880    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106002159-6856 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856
	I0310 20:30:48.694880    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304184021-4052 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052
	I0310 20:30:48.695518    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210213143925-7440 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440
	I0310 20:30:48.695972    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120231122-7024 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024
	I0310 20:30:48.695972    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210306072141-12056 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056
	I0310 20:30:48.695972    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219145454-9520 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520
	I0310 20:30:48.695972    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210309234032-4944 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944
	I0310 20:30:48.705745    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210119220838-6552 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552
	I0310 20:30:48.682057    6776 addons.go:58] Setting default-storageclass=true in profile "force-systemd-env-20210310201637-6496"
	I0310 20:30:48.706852    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106215525-1984 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984
	I0310 20:30:48.706852    6776 addons.go:284] enableOrDisableStorageClasses default-storageclass=true on "force-systemd-env-20210310201637-6496"
	I0310 20:30:48.706852    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115191024-3516 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516
	I0310 20:30:48.708028    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210224014800-800 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800
	I0310 20:30:48.709263    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120214442-10992 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992
	I0310 20:30:48.709263    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219220622-3920 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920
	I0310 20:30:48.709263    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210225231842-5736 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736
	I0310 20:30:48.709263    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310083645-5040 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040
	I0310 20:30:48.709741    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115023213-8464 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464
	I0310 20:30:48.709741    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120022529-1140 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140
	I0310 20:30:48.709741    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310191609-6496 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496
	I0310 20:30:48.709741    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210301195830-5700 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700
	I0310 20:30:48.709741    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107190945-8748 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748
	I0310 20:30:48.709741    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210220004129-7452 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452
	I0310 20:30:48.793231    6776 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0310 20:30:49.185138    6776 cli_runner.go:115] Run: docker container inspect force-systemd-env-20210310201637-6496 --format={{.State.Status}}
	I0310 20:30:49.191868    6776 cli_runner.go:115] Run: docker container inspect force-systemd-env-20210310201637-6496 --format={{.State.Status}}
	I0310 20:30:49.462153    6776 cache.go:93] acquiring lock: {Name:mkc9a1c11079e53fedb3439203deb8305be63b2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:30:49.462153    6776 cache.go:93] acquiring lock: {Name:mk5795abf13cc8b7192a417aee0e32dee2b0467c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:30:49.465150    6776 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 exists
	I0310 20:30:49.466573    6776 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 exists
	I0310 20:30:49.471152    6776 cache.go:82] cache image "minikube-local-cache-test:functional-20210303214129-4588" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210303214129-4588" took 782.1744ms
	I0310 20:30:49.471152    6776 cache.go:66] save to tar file minikube-local-cache-test:functional-20210303214129-4588 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 succeeded
	I0310 20:30:49.473154    6776 cache.go:93] acquiring lock: {Name:mka2d29141752ca0c15ce625b99d3e259a454634 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:30:49.473154    6776 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 exists
	I0310 20:30:49.473154    6776 cache.go:82] cache image "minikube-local-cache-test:functional-20210126212539-5172" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210126212539-5172" took 784.1768ms
	I0310 20:30:49.473154    6776 cache.go:66] save to tar file minikube-local-cache-test:functional-20210126212539-5172 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 succeeded
	I0310 20:30:49.480155    6776 cache.go:93] acquiring lock: {Name:mk6e311fb193a5d30b249afa7255673dd7fc56b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:30:49.481179    6776 cache.go:82] cache image "minikube-local-cache-test:functional-20210105233232-2512" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210105233232-2512" took 792.2021ms
	I0310 20:30:49.481179    6776 cache.go:66] save to tar file minikube-local-cache-test:functional-20210105233232-2512 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 succeeded
	I0310 20:30:49.481179    6776 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 exists
	I0310 20:30:49.511232    6776 cache.go:82] cache image "minikube-local-cache-test:functional-20210107002220-9088" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210107002220-9088" took 821.4734ms
	I0310 20:30:49.511778    6776 cache.go:66] save to tar file minikube-local-cache-test:functional-20210107002220-9088 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 succeeded
	I0310 20:30:49.633851    6776 cache.go:93] acquiring lock: {Name:mkcc9db267470950a8bd1fd66660e4d7ce7fb11a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:30:49.637060    6776 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 exists
	I0310 20:30:49.637893    6776 cache.go:82] cache image "minikube-local-cache-test:functional-20210120175851-7432" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120175851-7432" took 947.3445ms
	I0310 20:30:49.637893    6776 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120175851-7432 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 succeeded
	I0310 20:30:49.771750    6776 cache.go:93] acquiring lock: {Name:mk9829358ec5b615719a34ef2b4c8c5314131bbf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:30:49.772308    6776 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 exists
	I0310 20:30:49.773158    6776 cache.go:82] cache image "minikube-local-cache-test:functional-20210309234032-4944" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210309234032-4944" took 1.0771883s
	I0310 20:30:49.773158    6776 cache.go:66] save to tar file minikube-local-cache-test:functional-20210309234032-4944 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 succeeded
	I0310 20:30:49.886683    6776 cache.go:93] acquiring lock: {Name:mkbc5485bf0e792523a58cf470a7622695547966 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:30:49.887555    6776 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 exists
	I0310 20:30:49.887869    6776 cache.go:82] cache image "minikube-local-cache-test:functional-20210304184021-4052" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210304184021-4052" took 1.1926779s
	I0310 20:30:49.888523    6776 cache.go:66] save to tar file minikube-local-cache-test:functional-20210304184021-4052 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 succeeded
	I0310 20:30:49.923659    6776 cache.go:93] acquiring lock: {Name:mkd8dd26dee4471c50a16459e3e56a843fbe7183 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:30:49.923869    6776 cache.go:93] acquiring lock: {Name:mk5de4935501776b790bd29801e913c817cce9cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:30:49.924365    6776 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 exists
	I0310 20:30:49.924721    6776 cache.go:82] cache image "minikube-local-cache-test:functional-20210120231122-7024" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120231122-7024" took 1.2287526s
	I0310 20:30:49.924721    6776 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120231122-7024 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 succeeded
	I0310 20:30:49.927531    6776 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 exists
	I0310 20:30:49.928627    6776 cache.go:82] cache image "minikube-local-cache-test:functional-20210123004019-5372" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210123004019-5372" took 1.2376538s
	I0310 20:30:49.928627    6776 cache.go:66] save to tar file minikube-local-cache-test:functional-20210123004019-5372 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 succeeded
	I0310 20:30:49.943070    6776 cache.go:93] acquiring lock: {Name:mkb0cb73f942a657cd3f168830d30cb3598567a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:30:49.943898    6776 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 exists
	I0310 20:30:49.944072    6776 cache.go:82] cache image "minikube-local-cache-test:functional-20210306072141-12056" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210306072141-12056" took 1.2481031s
	I0310 20:30:49.944072    6776 cache.go:66] save to tar file minikube-local-cache-test:functional-20210306072141-12056 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 succeeded
	I0310 20:30:49.948891    6776 cache.go:93] acquiring lock: {Name:mk3b31b5d9c66e58bae5a84d594af5a71c06fef6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:30:49.949206    6776 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 exists
	I0310 20:30:49.949700    6776 cache.go:82] cache image "minikube-local-cache-test:functional-20210114204234-6692" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210114204234-6692" took 1.2560536s
	I0310 20:30:49.949700    6776 cache.go:66] save to tar file minikube-local-cache-test:functional-20210114204234-6692 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 succeeded
	I0310 20:30:49.977449    6776 cache.go:93] acquiring lock: {Name:mk74beba772a17b6c0792b37e1f3c84b8ae19a48 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:30:49.978058    6776 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 exists
	I0310 20:30:49.978724    6776 cache.go:82] cache image "minikube-local-cache-test:functional-20210119220838-6552" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210119220838-6552" took 1.2729828s
	I0310 20:30:49.978724    6776 cache.go:66] save to tar file minikube-local-cache-test:functional-20210119220838-6552 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 succeeded
	I0310 20:30:49.989663    6776 cache.go:93] acquiring lock: {Name:mkfe8ccab311cf6d2666a7508a8e979857b9770b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:30:49.990158    6776 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 exists
	I0310 20:30:49.990473    6776 cache.go:82] cache image "minikube-local-cache-test:functional-20210219145454-9520" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210219145454-9520" took 1.2945043s
	I0310 20:30:49.990473    6776 cache.go:66] save to tar file minikube-local-cache-test:functional-20210219145454-9520 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 succeeded
	I0310 20:30:50.009399    6776 cache.go:93] acquiring lock: {Name:mk1b277a131d0149dc1f34c6a5df09591c284c3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:30:50.009933    6776 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 exists
	I0310 20:30:50.013508    6776 cache.go:82] cache image "minikube-local-cache-test:functional-20210128021318-232" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210128021318-232" took 1.3210383s
	I0310 20:30:50.013886    6776 cache.go:66] save to tar file minikube-local-cache-test:functional-20210128021318-232 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 succeeded
	I0310 20:30:50.093131    6776 cache.go:93] acquiring lock: {Name:mkf96894dc732adcd1c856f98a56d65b2646f03e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:30:50.094339    6776 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 exists
	I0310 20:30:50.094799    6776 cache.go:82] cache image "minikube-local-cache-test:functional-20210115191024-3516" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210115191024-3516" took 1.3872845s
	I0310 20:30:50.094799    6776 cache.go:66] save to tar file minikube-local-cache-test:functional-20210115191024-3516 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 succeeded
	I0310 20:30:50.113098    6776 cache.go:93] acquiring lock: {Name:mk6a939d4adc5b1a82c643cd3a34748a52c3e47d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:30:50.113420    6776 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 exists
	I0310 20:30:50.113730    6776 cache.go:82] cache image "minikube-local-cache-test:functional-20210112045103-7160" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210112045103-7160" took 1.4194142s
	I0310 20:30:50.113730    6776 cache.go:66] save to tar file minikube-local-cache-test:functional-20210112045103-7160 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 succeeded
	I0310 20:30:50.124940    6776 cache.go:93] acquiring lock: {Name:mk84b2a6095b735cf889c519b5874f080b2e195a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:30:50.125940    6776 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 exists
	I0310 20:30:50.127525    6776 cache.go:82] cache image "minikube-local-cache-test:functional-20210219220622-3920" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210219220622-3920" took 1.4171963s
	I0310 20:30:50.127525    6776 cache.go:66] save to tar file minikube-local-cache-test:functional-20210219220622-3920 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 succeeded
	I0310 20:30:50.132818    6776 cache.go:93] acquiring lock: {Name:mk67b81c694fa10d152b7bddece57d430edf9ebf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:30:50.133875    6776 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 exists
	I0310 20:30:50.134138    6776 cache.go:82] cache image "minikube-local-cache-test:functional-20210308233820-5396" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210308233820-5396" took 1.4431646s
	I0310 20:30:50.134138    6776 cache.go:66] save to tar file minikube-local-cache-test:functional-20210308233820-5396 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 succeeded
	I0310 20:30:50.137352    6776 cache.go:93] acquiring lock: {Name:mk17b3617b8bc7c68f0fe3347037485ee44000e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:30:50.137821    6776 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 exists
	I0310 20:30:50.138366    6776 cache.go:82] cache image "minikube-local-cache-test:functional-20210225231842-5736" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210225231842-5736" took 1.4278083s
	I0310 20:30:50.138366    6776 cache.go:66] save to tar file minikube-local-cache-test:functional-20210225231842-5736 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 succeeded
	I0310 20:30:50.153675    6776 cache.go:93] acquiring lock: {Name:mk30e0addf8d941e729fce2e9e6e58f4831fa9bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:30:50.154230    6776 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 exists
	I0310 20:30:50.154366    6776 cache.go:82] cache image "minikube-local-cache-test:functional-20210115023213-8464" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210115023213-8464" took 1.4440377s
	I0310 20:30:50.154366    6776 cache.go:66] save to tar file minikube-local-cache-test:functional-20210115023213-8464 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 succeeded
	I0310 20:30:50.162357    6776 cache.go:93] acquiring lock: {Name:mk5d79a216b121a22277fa476959e69d0268a006 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:30:50.162775    6776 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 exists
	I0310 20:30:50.163328    6776 cache.go:82] cache image "minikube-local-cache-test:functional-20210224014800-800" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210224014800-800" took 1.4545874s
	I0310 20:30:50.163328    6776 cache.go:66] save to tar file minikube-local-cache-test:functional-20210224014800-800 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 succeeded
	I0310 20:30:50.175974    6776 cache.go:93] acquiring lock: {Name:mk634154e9c95d6e5b156154f097cbabdedf9f3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:30:50.176673    6776 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 exists
	I0310 20:30:50.177014    6776 cache.go:82] cache image "minikube-local-cache-test:functional-20210301195830-5700" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210301195830-5700" took 1.4617237s
	I0310 20:30:50.177014    6776 cache.go:66] save to tar file minikube-local-cache-test:functional-20210301195830-5700 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 succeeded
	I0310 20:30:50.187749    6776 cache.go:93] acquiring lock: {Name:mkf74fc1bdd437dc31195924ffc024252ed6282c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:30:50.188979    6776 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 exists
	I0310 20:30:50.189525    6776 cache.go:82] cache image "minikube-local-cache-test:functional-20210304002630-1156" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210304002630-1156" took 1.4978123s
	I0310 20:30:50.189525    6776 cache.go:66] save to tar file minikube-local-cache-test:functional-20210304002630-1156 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 succeeded
	I0310 20:30:50.218923    6776 cache.go:93] acquiring lock: {Name:mk413751f23d1919a2f2162501025c6af3a2ad81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:30:50.219849    6776 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 exists
	I0310 20:30:50.220528    6776 cache.go:82] cache image "minikube-local-cache-test:functional-20210106002159-6856" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106002159-6856" took 1.5256522s
	I0310 20:30:50.220528    6776 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106002159-6856 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 succeeded
	I0310 20:30:50.236746    6776 cache.go:93] acquiring lock: {Name:mkd8c6f272dd5cb91af2d272705820baa75c5410 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:30:50.237619    6776 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 exists
	I0310 20:30:50.237619    6776 cache.go:82] cache image "minikube-local-cache-test:functional-20210120214442-10992" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120214442-10992" took 1.5276974s
	I0310 20:30:50.237619    6776 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120214442-10992 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 succeeded
	I0310 20:30:50.245289    6776 cache.go:93] acquiring lock: {Name:mkb552f0ca2d9ea9965feba56885295e4020632a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:30:50.245761    6776 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 exists
	I0310 20:30:50.246286    6776 cache.go:82] cache image "minikube-local-cache-test:functional-20210106011107-6492" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106011107-6492" took 1.5565297s
	I0310 20:30:50.246286    6776 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106011107-6492 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 succeeded
	I0310 20:30:50.271465    6776 cache.go:93] acquiring lock: {Name:mkad0f7b57f74c6c730129cb06800211b2e1dbab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:30:50.272036    6776 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 exists
	I0310 20:30:50.272320    6776 cache.go:82] cache image "minikube-local-cache-test:functional-20210120022529-1140" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120022529-1140" took 1.5613601s
	I0310 20:30:50.272320    6776 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120022529-1140 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 succeeded
	I0310 20:30:50.280145    6776 cache.go:93] acquiring lock: {Name:mkfbc537176e4a7054a8ff78a35c4c45ad4889d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:30:50.280883    6776 cache.go:93] acquiring lock: {Name:mk0c64ba734a0cdbeae55b08bb0b1b6723a680c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:30:50.281398    6776 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 exists
	I0310 20:30:50.280145    6776 cache.go:93] acquiring lock: {Name:mk5aaf725ee95074b60d5acdb56999da11d0d967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:30:50.281794    6776 cache.go:82] cache image "minikube-local-cache-test:functional-20210310083645-5040" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210310083645-5040" took 1.5714664s
	I0310 20:30:50.281794    6776 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 exists
	I0310 20:30:50.281794    6776 cache.go:66] save to tar file minikube-local-cache-test:functional-20210310083645-5040 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 succeeded
	I0310 20:30:50.282401    6776 cache.go:93] acquiring lock: {Name:mkf6f90f079186654799fde8101b48612aa6f339 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:30:50.282401    6776 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 exists
	I0310 20:30:50.282726    6776 cache.go:82] cache image "minikube-local-cache-test:functional-20210310191609-6496" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210310191609-6496" took 1.5717665s
	I0310 20:30:50.282726    6776 cache.go:66] save to tar file minikube-local-cache-test:functional-20210310191609-6496 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 succeeded
	I0310 20:30:50.283040    6776 cache.go:82] cache image "minikube-local-cache-test:functional-20210213143925-7440" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210213143925-7440" took 1.5875261s
	I0310 20:30:50.283040    6776 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 exists
	I0310 20:30:50.283040    6776 cache.go:66] save to tar file minikube-local-cache-test:functional-20210213143925-7440 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 succeeded
	I0310 20:30:50.283461    6776 cache.go:82] cache image "minikube-local-cache-test:functional-20210212145109-352" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210212145109-352" took 1.5898156s
	I0310 20:30:50.283461    6776 cache.go:66] save to tar file minikube-local-cache-test:functional-20210212145109-352 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 succeeded
	I0310 20:30:50.293307    6776 cache.go:93] acquiring lock: {Name:mk3f9eb5a6922e3da2b5e642fe1460b5c7a33453 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:30:50.294380    6776 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 exists
	I0310 20:30:50.295324    6776 cache.go:82] cache image "minikube-local-cache-test:functional-20210107190945-8748" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210107190945-8748" took 1.5796023s
	I0310 20:30:50.295559    6776 cache.go:66] save to tar file minikube-local-cache-test:functional-20210107190945-8748 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 succeeded
	I0310 20:30:50.300305    6776 cache.go:93] acquiring lock: {Name:mkab31196e3bf71b9c1e6a1e38e57ec6fb030bbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:30:50.301125    6776 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 exists
	I0310 20:30:50.301276    6776 cache.go:93] acquiring lock: {Name:mk6cdb668632330066d74bea74662e26e6c7633f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:30:50.301653    6776 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 exists
	I0310 20:30:50.302012    6776 cache.go:82] cache image "minikube-local-cache-test:functional-20210220004129-7452" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210220004129-7452" took 1.5853638s
	I0310 20:30:50.302296    6776 cache.go:66] save to tar file minikube-local-cache-test:functional-20210220004129-7452 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 succeeded
	I0310 20:30:50.302296    6776 cache.go:82] cache image "minikube-local-cache-test:functional-20210106215525-1984" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106215525-1984" took 1.5954478s
	I0310 20:30:50.302296    6776 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106215525-1984 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 succeeded
	I0310 20:30:50.302296    6776 cache.go:73] Successfully saved all images to host disk.
	I0310 20:30:50.330469    6776 cli_runner.go:115] Run: docker container inspect force-systemd-env-20210310201637-6496 --format={{.State.Status}}
	I0310 20:30:50.554957    6776 cli_runner.go:168] Completed: docker container inspect force-systemd-env-20210310201637-6496 --format={{.State.Status}}: (1.3698223s)
	I0310 20:30:50.554957    6776 kapi.go:59] client config for force-systemd-env-20210310201637-6496: &rest.Config{Host:"https://127.0.0.1:55090", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins\\.minikube\\profiles\\force-systemd-env-20210310201637-6496\\client.crt", KeyFile:"C:\\Users\\jenkins\\.minikube\\profiles\\force-systemd-env-20210310201637-6496\\client.key", CAFile:"C:\\Users\\jenkins\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2611020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)}
	I0310 20:30:50.597160    6776 cli_runner.go:168] Completed: docker container inspect force-systemd-env-20210310201637-6496 --format={{.State.Status}}: (1.4048584s)
	I0310 20:30:50.602499    6776 out.go:129]   - Using image gcr.io/k8s-minikube/storage-provisioner:v4
	I0310 20:30:50.603511    6776 addons.go:253] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0310 20:30:50.603747    6776 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0310 20:30:50.614169    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:30:50.855091    6776 ssh_runner.go:189] Completed: sudo systemctl is-active --quiet service kubelet: (2.0618649s)
	I0310 20:30:50.870106    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:30:50.894593    6776 addons.go:134] Setting addon default-storageclass=true in "force-systemd-env-20210310201637-6496"
	W0310 20:30:50.894593    6776 addons.go:143] addon default-storageclass should already be in state true
	I0310 20:30:50.895294    6776 host.go:66] Checking if "force-systemd-env-20210310201637-6496" exists ...
	I0310 20:30:50.923965    6776 cli_runner.go:115] Run: docker container inspect force-systemd-env-20210310201637-6496 --format={{.State.Status}}
	I0310 20:30:51.048477    6776 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 20:30:51.059486    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:30:51.345610    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:30:51.565759    6776 kapi.go:59] client config for force-systemd-env-20210310201637-6496: &rest.Config{Host:"https://127.0.0.1:55090", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins\\.minikube\\profiles\\force-systemd-env-20210310201637-6496\\client.crt", KeyFile:"C:\\Users\\jenkins\\.minikube\\profiles\\force-systemd-env-20210310201637-6496\\client.key", CAFile:"C:\\Users\\jenkins\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2611020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)}
	I0310 20:30:51.581475    6776 api_server.go:48] waiting for apiserver process to appear ...
	I0310 20:30:51.589869    6776 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:30:51.605905    6776 addons.go:253] installing /etc/kubernetes/addons/storageclass.yaml
	I0310 20:30:51.605905    6776 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0310 20:30:51.613985    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:30:51.743943    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:30:52.227686    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:30:58.593142    6776 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0310 20:31:02.823350    6776 ssh_runner.go:189] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (11.2332581s)
	I0310 20:31:02.823350    6776 api_server.go:68] duration metric: took 14.1413282s to wait for apiserver process to appear ...
	I0310 20:31:02.823350    6776 api_server.go:84] waiting for apiserver healthz status ...
	I0310 20:31:02.823350    6776 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55090/healthz ...
	I0310 20:31:03.712178    6776 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0310 20:31:04.574781    6776 api_server.go:241] https://127.0.0.1:55090/healthz returned 200:
	ok
	I0310 20:31:04.637080    6776 api_server.go:137] control plane version: v1.20.2
	I0310 20:31:04.637080    6776 api_server.go:127] duration metric: took 1.8137339s to wait for apiserver health ...
	I0310 20:31:04.637080    6776 system_pods.go:41] waiting for kube-system pods to appear ...
	I0310 20:31:04.977040    6776 system_pods.go:57] 4 kube-system pods found
	I0310 20:31:04.977547    6776 system_pods.go:59] "etcd-force-systemd-env-20210310201637-6496" [1ccaf8f3-a5f0-492b-9e48-ab2dea00af6d] Running
	I0310 20:31:04.977547    6776 system_pods.go:59] "kube-apiserver-force-systemd-env-20210310201637-6496" [c83b2cd8-ff78-4f97-9d95-6ea80dc1f262] Running
	I0310 20:31:04.977547    6776 system_pods.go:59] "kube-controller-manager-force-systemd-env-20210310201637-6496" [20c7d532-3b57-47d2-adea-ba8abac9cb6f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0310 20:31:04.977547    6776 system_pods.go:59] "kube-scheduler-force-systemd-env-20210310201637-6496" [b85f3d2b-ed69-49fb-8aff-ca04efe9d36c] Pending
	I0310 20:31:04.977547    6776 system_pods.go:72] duration metric: took 340.4684ms to wait for pod list to return data ...
	I0310 20:31:04.977821    6776 kubeadm.go:541] duration metric: took 16.2958041s to wait for : map[apiserver:true system_pods:true] ...
	I0310 20:31:04.977821    6776 node_conditions.go:101] verifying NodePressure condition ...
	I0310 20:31:05.130888    6776 node_conditions.go:121] node storage ephemeral capacity is 65792556Ki
	I0310 20:31:05.130888    6776 node_conditions.go:122] node cpu capacity is 4
	I0310 20:31:05.130888    6776 node_conditions.go:104] duration metric: took 152.8928ms to run NodePressure ...
	I0310 20:31:05.130888    6776 start.go:208] waiting for startup goroutines ...
	I0310 20:32:30.762340    6776 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1m32.1694071s)
	I0310 20:32:35.869578    6776 ssh_runner.go:189] Completed: docker images --format {{.Repository}}:{{.Tag}}: (1m44.8213393s)
	I0310 20:32:35.870254    6776 docker.go:423] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.20.2
	k8s.gcr.io/kube-apiserver:v1.20.2
	k8s.gcr.io/kube-controller-manager:v1.20.2
	k8s.gcr.io/kube-scheduler:v1.20.2
	kubernetesui/dashboard:v2.1.0
	gcr.io/k8s-minikube/storage-provisioner:v4
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0310 20:32:35.870254    6776 docker.go:429] minikube-local-cache-test:functional-20210105233232-2512 wasn't preloaded
	I0310 20:32:35.870603    6776 cache_images.go:76] LoadImages start: [minikube-local-cache-test:functional-20210105233232-2512 minikube-local-cache-test:functional-20210107002220-9088 minikube-local-cache-test:functional-20210304002630-1156 minikube-local-cache-test:functional-20210106011107-6492 minikube-local-cache-test:functional-20210112045103-7160 minikube-local-cache-test:functional-20210120175851-7432 minikube-local-cache-test:functional-20210123004019-5372 minikube-local-cache-test:functional-20210126212539-5172 minikube-local-cache-test:functional-20210114204234-6692 minikube-local-cache-test:functional-20210213143925-7440 minikube-local-cache-test:functional-20210219145454-9520 minikube-local-cache-test:functional-20210219220622-3920 minikube-local-cache-test:functional-20210225231842-5736 minikube-local-cache-test:functional-20210310083645-5040 minikube-local-cache-test:functional-20210115023213-8464 minikube-local-cache-test:functional-20210128021318-232 minikube-local-cache-test:functional-20210106002159-6856 minikube-local-cache-test:functional-20210309234032-4944 minikube-local-cache-test:functional-20210310191609-6496 minikube-local-cache-test:functional-20210107190945-8748 minikube-local-cache-test:functional-20210115191024-3516 minikube-local-cache-test:functional-20210120022529-1140 minikube-local-cache-test:functional-20210224014800-800 minikube-local-cache-test:functional-20210301195830-5700 minikube-local-cache-test:functional-20210308233820-5396 minikube-local-cache-test:functional-20210106215525-1984 minikube-local-cache-test:functional-20210119220838-6552 minikube-local-cache-test:functional-20210120231122-7024 minikube-local-cache-test:functional-20210212145109-352 minikube-local-cache-test:functional-20210304184021-4052 minikube-local-cache-test:functional-20210306072141-12056 minikube-local-cache-test:functional-20210120214442-10992 minikube-local-cache-test:functional-20210220004129-7452 minikube-local-cache-test:functional-20210303214129-4588]
	I0310 20:32:35.870254    6776 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1m32.158017s)
	I0310 20:32:35.881148    6776 out.go:129] * Enabled addons: storage-provisioner, default-storageclass
	I0310 20:32:35.881504    6776 addons.go:383] enableAddons completed in 1m47.1993346s
	I0310 20:32:35.964274    6776 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210212145109-352
	I0310 20:32:35.982159    6776 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210304184021-4052
	I0310 20:32:35.992859    6776 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210304002630-1156
	I0310 20:32:36.028722    6776 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210225231842-5736
	I0310 20:32:36.031733    6776 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120175851-7432
	I0310 20:32:36.066600    6776 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210115023213-8464
	I0310 20:32:36.068294    6776 image.go:168] retrieving image: minikube-local-cache-test:functional-20210106215525-1984
	I0310 20:32:36.091367    6776 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120214442-10992
	I0310 20:32:36.096124    6776 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120231122-7024
	I0310 20:32:36.139397    6776 image.go:168] retrieving image: minikube-local-cache-test:functional-20210107002220-9088
	I0310 20:32:36.164539    6776 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210123004019-5372
	I0310 20:32:36.200182    6776 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210308233820-5396
	I0310 20:32:36.227047    6776 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210309234032-4944
	I0310 20:32:36.243035    6776 image.go:168] retrieving image: minikube-local-cache-test:functional-20210112045103-7160
	I0310 20:32:36.275806    6776 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210224014800-800
	I0310 20:32:36.282727    6776 image.go:168] retrieving image: minikube-local-cache-test:functional-20210106011107-6492
	I0310 20:32:36.311276    6776 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210213143925-7440
	I0310 20:32:36.330542    6776 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210112045103-7160: Error response from daemon: reference does not exist
	I0310 20:32:36.331224    6776 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210310083645-5040
	I0310 20:32:36.351015    6776 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210310191609-6496
	I0310 20:32:36.365151    6776 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210301195830-5700
	I0310 20:32:36.365151    6776 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210303214129-4588
	I0310 20:32:36.394185    6776 image.go:168] retrieving image: minikube-local-cache-test:functional-20210106002159-6856
	I0310 20:32:36.418404    6776 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120022529-1140
	I0310 20:32:36.437252    6776 image.go:168] retrieving image: minikube-local-cache-test:functional-20210105233232-2512
	I0310 20:32:36.449600    6776 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210220004129-7452
	I0310 20:32:36.456044    6776 image.go:168] retrieving image: minikube-local-cache-test:functional-20210107190945-8748
	I0310 20:32:36.488212    6776 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210119220838-6552
	I0310 20:32:36.503046    6776 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210126212539-5172
	I0310 20:32:36.512188    6776 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210107190945-8748: Error response from daemon: reference does not exist
	I0310 20:32:36.512188    6776 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210107002220-9088: Error response from daemon: reference does not exist
	W0310 20:32:36.519520    6776 image.go:185] authn lookup for minikube-local-cache-test:functional-20210112045103-7160 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 20:32:36.532417    6776 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210219220622-3920
	I0310 20:32:36.549014    6776 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210114204234-6692
	I0310 20:32:36.552597    6776 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210106215525-1984: Error response from daemon: reference does not exist
	I0310 20:32:36.558560    6776 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210128021318-232
	I0310 20:32:36.599677    6776 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210115191024-3516
	I0310 20:32:36.612852    6776 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210106002159-6856: Error response from daemon: reference does not exist
	I0310 20:32:36.626560    6776 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210219145454-9520
	I0310 20:32:36.682686    6776 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210306072141-12056
	I0310 20:32:36.701156    6776 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210106011107-6492: Error response from daemon: reference does not exist
	W0310 20:32:36.709934    6776 image.go:185] authn lookup for minikube-local-cache-test:functional-20210107002220-9088 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 20:32:36.718931    6776 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210105233232-2512: Error response from daemon: reference does not exist
	W0310 20:32:36.727380    6776 image.go:185] authn lookup for minikube-local-cache-test:functional-20210107190945-8748 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0310 20:32:36.789158    6776 image.go:185] authn lookup for minikube-local-cache-test:functional-20210106215525-1984 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0310 20:32:36.814985    6776 image.go:185] authn lookup for minikube-local-cache-test:functional-20210106002159-6856 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0310 20:32:36.883120    6776 image.go:185] authn lookup for minikube-local-cache-test:functional-20210105233232-2512 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0310 20:32:36.883441    6776 image.go:185] authn lookup for minikube-local-cache-test:functional-20210106011107-6492 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 20:32:36.893463    6776 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210112045103-7160 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210112045103-7160: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 20:32:36.894248    6776 cache_images.go:104] "minikube-local-cache-test:functional-20210112045103-7160" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210112045103-7160
	I0310 20:32:36.894248    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210112045103-7160 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160
	I0310 20:32:36.894248    6776 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160
	I0310 20:32:36.894767    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160
	I0310 20:32:36.901483    6776 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210107190945-8748 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210107190945-8748: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 20:32:36.901483    6776 cache_images.go:104] "minikube-local-cache-test:functional-20210107190945-8748" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210107190945-8748
	I0310 20:32:36.901768    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107190945-8748 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748
	I0310 20:32:36.901768    6776 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748
	I0310 20:32:36.901768    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748
	I0310 20:32:36.907958    6776 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160
	I0310 20:32:36.914847    6776 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748
	I0310 20:32:36.925577    6776 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210107002220-9088 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210107002220-9088: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 20:32:36.925577    6776 cache_images.go:104] "minikube-local-cache-test:functional-20210107002220-9088" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210107002220-9088
	I0310 20:32:36.925577    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107002220-9088 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088
	I0310 20:32:36.925577    6776 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088
	I0310 20:32:36.926628    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088
	I0310 20:32:36.950973    6776 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088
	I0310 20:32:36.969433    6776 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210106215525-1984 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210106215525-1984: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 20:32:36.969876    6776 cache_images.go:104] "minikube-local-cache-test:functional-20210106215525-1984" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210106215525-1984
	I0310 20:32:36.969876    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106215525-1984 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984
	I0310 20:32:36.970015    6776 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984
	I0310 20:32:36.970322    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984
	I0310 20:32:36.987743    6776 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210105233232-2512 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210105233232-2512: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 20:32:36.987743    6776 cache_images.go:104] "minikube-local-cache-test:functional-20210105233232-2512" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210105233232-2512
	I0310 20:32:36.987743    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210105233232-2512 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512
	I0310 20:32:36.987743    6776 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512
	I0310 20:32:36.988461    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512
	I0310 20:32:36.990082    6776 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984
	I0310 20:32:37.003625    6776 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512
	I0310 20:32:37.005626    6776 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210106002159-6856 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210106002159-6856: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 20:32:37.005626    6776 cache_images.go:104] "minikube-local-cache-test:functional-20210106002159-6856" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210106002159-6856
	I0310 20:32:37.005626    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106002159-6856 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856
	I0310 20:32:37.005626    6776 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856
	I0310 20:32:37.005626    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856
	I0310 20:32:37.014631    6776 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856
	I0310 20:32:37.020684    6776 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210106011107-6492 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210106011107-6492: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 20:32:37.020684    6776 cache_images.go:104] "minikube-local-cache-test:functional-20210106011107-6492" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210106011107-6492
	I0310 20:32:37.020684    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106011107-6492 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492
	I0310 20:32:37.020684    6776 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492
	I0310 20:32:37.020684    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492
	I0310 20:32:37.030636    6776 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492
	W0310 20:32:37.148673    6776 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 20:32:37.149678    6776 retry.go:31] will retry after 276.165072ms: ssh: rejected: connect failed (open failed)
	W0310 20:32:37.149678    6776 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 20:32:37.149678    6776 retry.go:31] will retry after 360.127272ms: ssh: rejected: connect failed (open failed)
	I0310 20:32:37.434569    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:37.518040    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	W0310 20:32:38.017920    6776 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 20:32:38.017920    6776 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 20:32:38.019217    6776 retry.go:31] will retry after 234.428547ms: ssh: rejected: connect failed (open failed)
	W0310 20:32:38.017920    6776 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 20:32:38.019445    6776 retry.go:31] will retry after 231.159374ms: ssh: rejected: connect failed (open failed)
	I0310 20:32:38.018895    6776 retry.go:31] will retry after 291.140013ms: ssh: rejected: connect failed (open failed)
	W0310 20:32:38.017920    6776 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 20:32:38.019445    6776 retry.go:31] will retry after 296.705768ms: ssh: rejected: connect failed (open failed)
	W0310 20:32:38.017920    6776 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 20:32:38.019445    6776 retry.go:31] will retry after 141.409254ms: ssh: rejected: connect failed (open failed)
	W0310 20:32:38.017920    6776 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 20:32:38.017920    6776 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 20:32:38.020170    6776 retry.go:31] will retry after 164.129813ms: ssh: rejected: connect failed (open failed)
	I0310 20:32:38.020170    6776 retry.go:31] will retry after 149.242379ms: ssh: rejected: connect failed (open failed)
	W0310 20:32:38.017920    6776 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 20:32:38.017920    6776 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 20:32:38.017920    6776 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 20:32:38.017920    6776 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 20:32:38.018571    6776 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 20:32:38.018571    6776 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 20:32:38.018571    6776 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 20:32:38.018571    6776 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 20:32:38.018571    6776 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 20:32:38.018571    6776 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 20:32:38.018571    6776 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 20:32:38.018571    6776 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 20:32:38.018571    6776 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 20:32:38.018571    6776 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 20:32:38.017920    6776 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 20:32:38.020540    6776 retry.go:31] will retry after 200.227965ms: ssh: rejected: connect failed (open failed)
	I0310 20:32:38.020540    6776 retry.go:31] will retry after 253.803157ms: ssh: rejected: connect failed (open failed)
	I0310 20:32:38.020540    6776 retry.go:31] will retry after 328.409991ms: ssh: rejected: connect failed (open failed)
	I0310 20:32:38.020540    6776 retry.go:31] will retry after 178.565968ms: ssh: rejected: connect failed (open failed)
	I0310 20:32:38.020540    6776 retry.go:31] will retry after 220.164297ms: ssh: rejected: connect failed (open failed)
	I0310 20:32:38.020540    6776 retry.go:31] will retry after 204.514543ms: ssh: rejected: connect failed (open failed)
	I0310 20:32:38.020886    6776 retry.go:31] will retry after 242.222461ms: ssh: rejected: connect failed (open failed)
	I0310 20:32:38.020886    6776 retry.go:31] will retry after 195.758538ms: ssh: rejected: connect failed (open failed)
	I0310 20:32:38.020886    6776 retry.go:31] will retry after 198.275464ms: ssh: rejected: connect failed (open failed)
	I0310 20:32:38.020886    6776 retry.go:31] will retry after 294.771169ms: ssh: rejected: connect failed (open failed)
	I0310 20:32:38.020886    6776 retry.go:31] will retry after 175.796719ms: ssh: rejected: connect failed (open failed)
	I0310 20:32:38.020886    6776 retry.go:31] will retry after 179.638263ms: ssh: rejected: connect failed (open failed)
	I0310 20:32:38.021264    6776 retry.go:31] will retry after 340.62286ms: ssh: rejected: connect failed (open failed)
	I0310 20:32:38.020886    6776 retry.go:31] will retry after 215.217854ms: ssh: rejected: connect failed (open failed)
	I0310 20:32:38.020886    6776 retry.go:31] will retry after 267.668319ms: ssh: rejected: connect failed (open failed)
	I0310 20:32:38.153132    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:38.153432    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:38.175532    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:38.193853    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:38.199509    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:38.222824    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:38.226112    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:38.227115    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:38.240525    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:38.241132    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:38.249866    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:38.254665    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:38.264839    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:38.271894    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:38.275432    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:38.281194    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:38.284395    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:38.316182    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:38.316518    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:38.337341    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:38.343456    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:38.349017    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:38.373412    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:38.380796    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:39.935760    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.7414738s)
	I0310 20:32:39.936079    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:40.034834    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.6974963s)
	I0310 20:32:40.034834    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:40.038446    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.766556s)
	I0310 20:32:40.038705    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:40.038705    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.7977765s)
	I0310 20:32:40.038705    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:40.083869    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.7030763s)
	I0310 20:32:40.084307    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:40.100890    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.7272594s)
	I0310 20:32:40.101366    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:40.120831    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.8709685s)
	I0310 20:32:40.121396    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:40.161104    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.919976s)
	I0310 20:32:40.161621    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:40.182619    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.9555086s)
	I0310 20:32:40.183884    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:40.217774    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.9949541s)
	I0310 20:32:40.218256    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:40.218256    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.9370668s)
	I0310 20:32:40.218256    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:40.228271    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.9634354s)
	I0310 20:32:40.228819    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:40.239229    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.9548377s)
	I0310 20:32:40.239580    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:40.243008    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.9264938s)
	I0310 20:32:40.243008    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:40.254153    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (2.0544353s)
	I0310 20:32:40.254153    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:40.254942    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (2.0794143s)
	I0310 20:32:40.254942    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:40.259924    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (2.0338172s)
	I0310 20:32:40.260259    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:40.348142    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (2.0727146s)
	I0310 20:32:40.348397    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:40.370972    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (2.1163114s)
	I0310 20:32:40.372122    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:40.377938    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (2.0617606s)
	I0310 20:32:40.380789    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:40.382391    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (2.0389396s)
	I0310 20:32:40.382660    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:40.403520    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (2.0542151s)
	I0310 20:32:40.403889    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	W0310 20:32:42.162539    6776 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 20:32:42.162539    6776 retry.go:31] will retry after 198.278561ms: ssh: handshake failed: EOF
	W0310 20:32:42.279913    6776 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 20:32:42.279913    6776 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 20:32:42.280897    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492 (4096 bytes)
	I0310 20:32:42.290830    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	W0310 20:32:42.317086    6776 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 20:32:42.317779    6776 cache_images.go:104] "minikube-local-cache-test:functional-20210119220838-6552" needs transfer: "minikube-local-cache-test:functional-20210119220838-6552" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 20:32:42.317779    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210119220838-6552 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552
	I0310 20:32:42.317779    6776 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552
	I0310 20:32:42.317779    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552
	W0310 20:32:42.317779    6776 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 20:32:42.318479    6776 cache_images.go:104] "minikube-local-cache-test:functional-20210306072141-12056" needs transfer: "minikube-local-cache-test:functional-20210306072141-12056" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 20:32:42.318479    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210306072141-12056 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056
	I0310 20:32:42.318479    6776 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056
	I0310 20:32:42.318479    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056
	I0310 20:32:42.336822    6776 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552
	I0310 20:32:42.342529    6776 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056
	I0310 20:32:42.343524    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:42.350531    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	W0310 20:32:42.379101    6776 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 20:32:42.379101    6776 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 20:32:42.379501    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512 (4096 bytes)
	I0310 20:32:42.396589    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:43.004326    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:43.038600    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:43.083308    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:43.118912    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	W0310 20:32:43.170041    6776 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 20:32:43.170041    6776 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 20:32:43.170733    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856 (4096 bytes)
	I0310 20:32:43.177616    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	W0310 20:32:43.747148    6776 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 20:32:43.747148    6776 cache_images.go:104] "minikube-local-cache-test:functional-20210303214129-4588" needs transfer: "minikube-local-cache-test:functional-20210303214129-4588" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 20:32:43.747148    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210303214129-4588 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588
	I0310 20:32:43.747148    6776 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588
	I0310 20:32:43.747607    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588
	W0310 20:32:43.747148    6776 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 20:32:43.751534    6776 cache_images.go:104] "minikube-local-cache-test:functional-20210310083645-5040" needs transfer: "minikube-local-cache-test:functional-20210310083645-5040" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 20:32:43.751534    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310083645-5040 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040
	I0310 20:32:43.751534    6776 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040
	I0310 20:32:43.751534    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040
	I0310 20:32:43.771632    6776 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588
	I0310 20:32:43.771632    6776 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040
	I0310 20:32:43.776952    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:43.793495    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:43.801897    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	W0310 20:32:43.912858    6776 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 20:32:43.913648    6776 retry.go:31] will retry after 199.270641ms: ssh: handshake failed: EOF
	W0310 20:32:43.939246    6776 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 20:32:43.939246    6776 retry.go:31] will retry after 313.143259ms: ssh: handshake failed: EOF
	I0310 20:32:44.412149    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:44.433820    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:45.874302    6776 cache_images.go:104] "minikube-local-cache-test:functional-20210304002630-1156" needs transfer: "minikube-local-cache-test:functional-20210304002630-1156" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 20:32:45.874302    6776 cache_images.go:104] "minikube-local-cache-test:functional-20210213143925-7440" needs transfer: "minikube-local-cache-test:functional-20210213143925-7440" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 20:32:45.874302    6776 cache_images.go:104] "minikube-local-cache-test:functional-20210219145454-9520" needs transfer: "minikube-local-cache-test:functional-20210219145454-9520" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 20:32:45.874302    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210213143925-7440 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440
	I0310 20:32:45.874302    6776 cache_images.go:104] "minikube-local-cache-test:functional-20210309234032-4944" needs transfer: "minikube-local-cache-test:functional-20210309234032-4944" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 20:32:45.874302    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219145454-9520 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520
	I0310 20:32:45.874302    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210309234032-4944 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944
	I0310 20:32:45.874302    6776 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944
	I0310 20:32:45.874302    6776 cache_images.go:104] "minikube-local-cache-test:functional-20210114204234-6692" needs transfer: "minikube-local-cache-test:functional-20210114204234-6692" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 20:32:45.874302    6776 cache_images.go:104] "minikube-local-cache-test:functional-20210123004019-5372" needs transfer: "minikube-local-cache-test:functional-20210123004019-5372" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 20:32:45.874302    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210123004019-5372 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372
	I0310 20:32:45.874630    6776 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372
	I0310 20:32:45.874630    6776 cache_images.go:104] "minikube-local-cache-test:functional-20210220004129-7452" needs transfer: "minikube-local-cache-test:functional-20210220004129-7452" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 20:32:45.874302    6776 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520
	I0310 20:32:45.874630    6776 cache_images.go:104] "minikube-local-cache-test:functional-20210224014800-800" needs transfer: "minikube-local-cache-test:functional-20210224014800-800" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 20:32:45.874630    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210224014800-800 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800
	I0310 20:32:45.874630    6776 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800
	I0310 20:32:45.874630    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520
	I0310 20:32:45.874630    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944
	I0310 20:32:45.874630    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800
	I0310 20:32:45.874302    6776 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440
	I0310 20:32:45.877759    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440
	I0310 20:32:45.874302    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210114204234-6692 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692
	I0310 20:32:45.878714    6776 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692
	I0310 20:32:45.878714    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692
	I0310 20:32:45.874630    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372
	I0310 20:32:45.879325    6776 cache_images.go:104] "minikube-local-cache-test:functional-20210212145109-352" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 20:32:45.879744    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210212145109-352 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352
	I0310 20:32:45.879744    6776 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352
	I0310 20:32:45.879744    6776 cache_images.go:104] "minikube-local-cache-test:functional-20210301195830-5700" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 20:32:45.880114    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210301195830-5700 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700
	I0310 20:32:45.880114    6776 cache_images.go:104] "minikube-local-cache-test:functional-20210304184021-4052" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 20:32:45.880114    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352
	I0310 20:32:45.880114    6776 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700
	I0310 20:32:45.880114    6776 cache_images.go:104] "minikube-local-cache-test:functional-20210120231122-7024" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 20:32:45.880705    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120231122-7024 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024
	I0310 20:32:45.880705    6776 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024
	I0310 20:32:45.880418    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700
	I0310 20:32:45.874630    6776 cache_images.go:104] "minikube-local-cache-test:functional-20210120175851-7432" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 20:32:45.881143    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120175851-7432 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432
	I0310 20:32:45.881143    6776 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432
	I0310 20:32:45.881512    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432
	I0310 20:32:45.874630    6776 cache_images.go:104] "minikube-local-cache-test:functional-20210310191609-6496" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 20:32:45.874630    6776 cache_images.go:104] "minikube-local-cache-test:functional-20210115191024-3516" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 20:32:45.874630    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210220004129-7452 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452
	I0310 20:32:45.874630    6776 cache_images.go:104] "minikube-local-cache-test:functional-20210115023213-8464" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 20:32:45.874630    6776 cache_images.go:104] "minikube-local-cache-test:functional-20210308233820-5396" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 20:32:45.874630    6776 cache_images.go:104] "minikube-local-cache-test:functional-20210120214442-10992" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 20:32:45.874630    6776 cache_images.go:104] "minikube-local-cache-test:functional-20210128021318-232" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 20:32:45.874630    6776 cache_images.go:104] "minikube-local-cache-test:functional-20210120022529-1140" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 20:32:45.874302    6776 cache_images.go:104] "minikube-local-cache-test:functional-20210225231842-5736" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 20:32:45.874302    6776 cache_images.go:104] "minikube-local-cache-test:functional-20210126212539-5172" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 20:32:45.874302    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304002630-1156 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156
	I0310 20:32:45.880114    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304184021-4052 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052
	I0310 20:32:45.874630    6776 cache_images.go:104] "minikube-local-cache-test:functional-20210219220622-3920" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 20:32:45.880921    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024
	I0310 20:32:45.882331    6776 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052
	I0310 20:32:45.881512    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115023213-8464 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464
	I0310 20:32:45.882477    6776 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464
	I0310 20:32:45.882477    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052
	I0310 20:32:45.881867    6776 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452
	I0310 20:32:45.881512    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115191024-3516 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516
	I0310 20:32:45.882923    6776 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516
	I0310 20:32:45.881867    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120022529-1140 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140
	I0310 20:32:45.883345    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516
	I0310 20:32:45.883345    6776 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140
	I0310 20:32:45.881867    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210128021318-232 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232
	I0310 20:32:45.883345    6776 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232
	I0310 20:32:45.883345    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140
	I0310 20:32:45.881867    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120214442-10992 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992
	I0310 20:32:45.881867    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210126212539-5172 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172
	I0310 20:32:45.881867    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210225231842-5736 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736
	I0310 20:32:45.881867    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210308233820-5396 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396
	I0310 20:32:45.881867    6776 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156
	I0310 20:32:45.881867    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310191609-6496 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496
	I0310 20:32:45.882477    6776 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219220622-3920 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920
	I0310 20:32:45.883697    6776 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920
	I0310 20:32:45.883697    6776 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736
	I0310 20:32:45.884005    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736
	I0310 20:32:45.884005    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156
	I0310 20:32:45.883697    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920
	I0310 20:32:45.882923    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452
	I0310 20:32:45.883345    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232
	I0310 20:32:45.882923    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464
	I0310 20:32:45.883697    6776 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496
	I0310 20:32:45.886508    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496
	I0310 20:32:45.883697    6776 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992
	I0310 20:32:45.886508    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992
	I0310 20:32:45.883697    6776 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172
	I0310 20:32:45.887326    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172
	I0310 20:32:45.883697    6776 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396
	I0310 20:32:45.887810    6776 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 -> /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396
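The `ssh_runner` lines that follow each run `stat -c "%s %y"` on the guest to learn a cached image's size and modification time before deciding whether it needs to be transferred again. A minimal sketch of parsing that output shape (the helper name `parseStat` is illustrative, not minikube's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// parseStat splits the output of `stat -c "%s %y" <path>` into the file
// size and the modification time. Illustrative helper, not minikube's code.
func parseStat(out string) (size, mtime string, err error) {
	parts := strings.SplitN(strings.TrimSpace(out), " ", 2)
	if len(parts) != 2 {
		return "", "", fmt.Errorf("unexpected stat output: %q", out)
	}
	return parts[0], parts[1], nil
}

func main() {
	// Example output shape for a 4096-byte cached image tarball.
	size, mtime, err := parseStat("4096 2021-03-10 20:32:46.000000000 +0000\n")
	if err != nil {
		panic(err)
	}
	fmt.Println(size, "|", mtime)
}
```

Comparing the reported size and mtime against the local cache file is what lets the loader skip the `scp` step when the guest copy is already current.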
	I0310 20:32:46.068033    6776 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944
	I0310 20:32:46.081861    6776 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520
	I0310 20:32:46.111551    6776 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800
	I0310 20:32:46.231750    6776 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440
	I0310 20:32:46.304048    6776 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692
	I0310 20:32:46.318847    6776 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372
	I0310 20:32:46.354645    6776 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700
	I0310 20:32:46.355286    6776 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452
	I0310 20:32:46.363554    6776 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052
	I0310 20:32:46.364808    6776 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992
	I0310 20:32:46.366174    6776 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024
	I0310 20:32:46.406940    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:46.408742    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:46.409403    6776 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432
	I0310 20:32:46.418697    6776 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352
	I0310 20:32:46.443735    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:46.462541    6776 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172
	I0310 20:32:46.474393    6776 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232
	I0310 20:32:46.478343    6776 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140
	I0310 20:32:46.488015    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:46.525330    6776 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496
	I0310 20:32:46.526652    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:46.526652    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:46.530671    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:46.531676    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:46.531676    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:46.532671    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:46.533642    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:46.535682    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:46.535682    6776 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920
	I0310 20:32:46.542254    6776 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156
	I0310 20:32:46.542738    6776 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516
	I0310 20:32:46.543204    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:46.543204    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:46.543631    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	W0310 20:32:46.545856    6776 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 20:32:46.556282    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:46.556450    6776 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464
	I0310 20:32:46.557163    6776 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736
	I0310 20:32:46.558009    6776 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396
	I0310 20:32:46.560798    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:46.580519    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:46.583295    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:46.594573    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:46.596006    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:46.605401    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:46.608797    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:32:47.885715    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.4769761s)
	I0310 20:32:47.886127    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:47.886712    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.3986991s)
	I0310 20:32:47.887712    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:47.978268    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.4443509s)
	I0310 20:32:47.978268    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:48.106559    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.6996221s)
	I0310 20:32:48.108560    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:48.189393    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.6627443s)
	I0310 20:32:48.190062    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:48.218506    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.6868339s)
	I0310 20:32:48.219034    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:48.307999    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.7813513s)
	I0310 20:32:48.308791    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:48.309589    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.7663884s)
	I0310 20:32:48.310019    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:48.319750    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.7141714s)
	I0310 20:32:48.320459    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:48.337368    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.7937415s)
	I0310 20:32:48.337518    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:48.345236    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.802036s)
	I0310 20:32:48.345736    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:48.382186    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.7876169s)
	I0310 20:32:48.382876    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:48.392086    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.8115707s)
	I0310 20:32:48.392857    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:48.429707    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.8464154s)
	I0310 20:32:48.429956    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:48.436193    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.9035253s)
	I0310 20:32:48.436861    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:48.471534    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.9152563s)
	I0310 20:32:48.472040    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.9403678s)
	I0310 20:32:48.474443    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:48.474943    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:48.479574    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.9486279s)
	I0310 20:32:48.479574    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:48.490101    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.8940986s)
	I0310 20:32:48.490461    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:48.492302    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.9566242s)
	I0310 20:32:48.492573    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:48.542464    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (2.0987331s)
	I0310 20:32:48.543122    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:48.556502    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.9477091s)
	I0310 20:32:48.557155    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:32:48.559617    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.9977489s)
	I0310 20:32:48.559913    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	W0310 20:32:57.532360    6776 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 20:32:57.532360    6776 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 20:32:57.532360    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432 (4096 bytes)
	W0310 20:32:58.779029    6776 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 20:32:58.779489    6776 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 20:32:58.779703    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736 (4096 bytes)
	W0310 20:32:59.429833    6776 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 20:32:59.430217    6776 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 20:32:59.430217    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920 (4096 bytes)
	W0310 20:33:00.305148    6776 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 20:33:00.306266    6776 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 20:33:00.306495    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156 (4096 bytes)
	W0310 20:33:00.799167    6776 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 20:33:00.799991    6776 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 20:33:00.799991    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024 (4096 bytes)
	W0310 20:33:06.840464    6776 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 20:33:06.840464    6776 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 20:33:06.840464    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172 (4096 bytes)
	W0310 20:33:06.840969    6776 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 20:33:06.841408    6776 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 20:33:06.841408    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464 (4096 bytes)
	I0310 20:33:32.155785    6776 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920
	I0310 20:33:32.168512    6776 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920
	I0310 20:35:31.144714    6776 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210304184021-4052: (2m55.1628783s)
	I0310 20:35:31.144878    6776 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210304002630-1156: (2m55.1521366s)
	I0310 20:35:31.145232    6776 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210225231842-5736: (2m55.1168331s)
	I0310 20:35:31.145377    6776 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120175851-7432: (2m55.1139673s)
	I0310 20:35:31.145681    6776 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210212145109-352: (2m55.1815288s)
	I0310 20:35:31.145963    6776 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210115023213-8464: (2m55.0796868s)
	I0310 20:35:42.299353    6776 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210308233820-5396: (3m6.0987053s)
	I0310 20:35:42.299845    6776 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210123004019-5372: (3m6.1356474s)
	I0310 20:35:42.299845    6776 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120231122-7024: (3m6.2040624s)
	I0310 20:35:42.299845    6776 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210224014800-800: (3m6.0243809s)
	I0310 20:35:42.299845    6776 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120214442-10992: (3m6.2088202s)
	I0310 20:35:42.300696    6776 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440: (2m56.069267s)
	I0310 20:35:42.301017    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440 (4096 bytes)
	I0310 20:35:42.301617    6776 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944: (2m56.2339048s)
	I0310 20:35:42.302058    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944 (4096 bytes)
	I0310 20:35:42.302724    6776 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210213143925-7440: (3m5.9917884s)
	I0310 20:36:14.027456    6776 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210310191609-6496: (3m37.6768335s)
	I0310 20:36:33.516580    6776 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210114204234-6692: (3m56.9677852s)
	I0310 20:36:33.643789    6776 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210219145454-9520: (3m57.017057s)
	I0310 20:36:36.058772    6776 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088: (3m59.1082244s)
	I0310 20:36:36.059528    6776 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210309234032-4944: (3m59.832152s)
	I0310 20:36:36.059681    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088 (4096 bytes)
	I0310 20:36:36.060014    6776 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552: (3m53.7234198s)
	I0310 20:36:36.060134    6776 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210219220622-3920: (3m59.5277459s)
	I0310 20:36:36.060305    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552 (4096 bytes)
	I0310 20:36:36.061239    6776 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210301195830-5700: (3m59.6963564s)
	I0310 20:36:36.069584    6776 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748: (3m59.1545403s)
	I0310 20:36:36.070092    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748 (4096 bytes)
	I0310 20:36:36.188378    6776 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120022529-1140: (3m59.7698302s)
	I0310 20:37:01.611279    6776 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210220004129-7452: (4m25.162145s)
	I0310 20:37:01.611680    6776 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210128021318-232: (4m25.053586s)
	I0310 20:37:01.612173    6776 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160: (4m24.7046799s)
	I0310 20:37:01.612612    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160 (4096 bytes)
	I0310 20:37:01.612612    6776 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056: (4m19.270537s)
	I0310 20:37:01.612612    6776 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984: (4m24.6227714s)
	I0310 20:37:01.613279    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984 (4096 bytes)
	I0310 20:37:01.613059    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056 (4096 bytes)
	I0310 20:37:01.613500    6776 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588: (4m17.8423194s)
	I0310 20:37:01.613279    6776 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052: (4m15.2501713s)
	I0310 20:37:01.613500    6776 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920: (3m29.445344s)
	I0310 20:37:01.613500    6776 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 from cache
	I0310 20:37:01.613500    6776 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156
	I0310 20:37:01.613938    6776 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372: (4m15.2953086s)
	I0310 20:37:01.613710    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052 (4096 bytes)
	I0310 20:37:01.613279    6776 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992: (4m15.248917s)
	I0310 20:37:01.614163    6776 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516: (4m15.0718711s)
	I0310 20:37:01.614379    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372 (4096 bytes)
	I0310 20:37:01.613279    6776 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210126212539-5172: (4m25.1107s)
	I0310 20:37:01.613710    6776 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232: (4m15.1397626s)
	I0310 20:37:01.614898    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232 (4096 bytes)
	I0310 20:37:01.613710    6776 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692: (4m15.3101077s)
	I0310 20:37:01.614379    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992 (4096 bytes)
	I0310 20:37:01.615367    6776 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520: (4m15.5339527s)
	I0310 20:37:01.614379    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516 (4096 bytes)
	I0310 20:37:01.614898    6776 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040: (4m17.8437178s)
	I0310 20:37:01.615605    6776 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140: (4m15.1377077s)
	I0310 20:37:01.615605    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040 (4096 bytes)
	I0310 20:37:01.615605    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692 (4096 bytes)
	I0310 20:37:01.615133    6776 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800: (4m15.5040279s)
	I0310 20:37:01.615133    6776 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700: (4m15.2609337s)
	I0310 20:37:01.615367    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520 (4096 bytes)
	I0310 20:37:01.616061    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800 (4096 bytes)
	I0310 20:37:01.615844    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140 (4096 bytes)
	I0310 20:37:01.616061    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700 (4096 bytes)
	I0310 20:37:01.622680    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588 (4096 bytes)
	I0310 20:37:01.623829    6776 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156
	W0310 20:37:02.058669    6776 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 20:37:02.058973    6776 retry.go:31] will retry after 176.645665ms: ssh: rejected: connect failed (open failed)
	W0310 20:37:02.058973    6776 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 20:37:02.058973    6776 retry.go:31] will retry after 341.333754ms: ssh: rejected: connect failed (open failed)
	W0310 20:37:02.058973    6776 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 20:37:02.058973    6776 retry.go:31] will retry after 299.179792ms: ssh: rejected: connect failed (open failed)
	W0310 20:37:02.058973    6776 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 20:37:02.060149    6776 retry.go:31] will retry after 255.955077ms: ssh: rejected: connect failed (open failed)
	W0310 20:37:02.059257    6776 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 20:37:02.062077    6776 retry.go:31] will retry after 132.07577ms: ssh: rejected: connect failed (open failed)
	W0310 20:37:02.059257    6776 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 20:37:02.059510    6776 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 20:37:02.062289    6776 retry.go:31] will retry after 276.81336ms: ssh: rejected: connect failed (open failed)
	I0310 20:37:02.062289    6776 retry.go:31] will retry after 164.582069ms: ssh: rejected: connect failed (open failed)
	W0310 20:37:02.059510    6776 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 20:37:02.062289    6776 retry.go:31] will retry after 368.810405ms: ssh: rejected: connect failed (open failed)
	W0310 20:37:02.059510    6776 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 20:37:02.062289    6776 retry.go:31] will retry after 144.863405ms: ssh: rejected: connect failed (open failed)
	I0310 20:37:02.206271    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:37:02.217601    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:37:02.255263    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:37:02.259851    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:37:02.338407    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:37:02.352005    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:37:02.378923    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:37:02.427498    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:37:02.445314    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:37:03.213777    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:37:03.292949    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.0329604s)
	I0310 20:37:03.293172    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:37:03.299063    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.0927932s)
	I0310 20:37:03.299478    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:37:03.317008    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:37:03.324420    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:37:03.361050    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:37:03.385391    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.1677915s)
	I0310 20:37:03.385773    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:37:03.416233    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:37:03.453874    6776 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496: (1.0263776s)
	I0310 20:37:03.454155    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:37:14.460532    6776 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396: (4m27.9029881s)
	I0310 20:37:14.461109    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396 (4096 bytes)
	I0310 20:37:14.461917    6776 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452: (4m28.1068114s)
	I0310 20:37:14.462657    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452 (4096 bytes)
	I0310 20:37:14.462657    6776 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496: (4m27.937792s)
	I0310 20:37:14.462861    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496 (4096 bytes)
	I0310 20:37:14.463501    6776 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352: (4m28.0452694s)
	I0310 20:37:14.463840    6776 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352 (4096 bytes)
	I0310 20:37:14.478345    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:37:14.478598    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:37:14.480811    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:37:14.488463    6776 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20210310201637-6496
	I0310 20:37:14.541404    6776 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210115191024-3516: (4m37.9417765s)
	I0310 20:37:15.271567    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:37:15.272814    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:37:15.276254    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	I0310 20:37:15.302226    6776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55096 SSHKeyPath:C:\Users\jenkins\.minikube\machines\force-systemd-env-20210310201637-6496\id_rsa Username:docker}
	W0310 20:37:26.612271    6776 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 20:38:32.897979    6776 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156: (1m31.2741481s)
	I0310 20:38:32.898244    6776 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 from cache
	I0310 20:38:32.898670    6776 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024
	I0310 20:38:32.916866    6776 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024
	I0310 20:38:56.333607    6776 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024: (23.4164373s)
	I0310 20:38:56.333607    6776 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 from cache
	I0310 20:38:56.333607    6776 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736
	I0310 20:38:56.341535    6776 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736
	I0310 20:39:20.502401    6776 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736: (24.159974s)
	I0310 20:39:20.502401    6776 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 from cache
	I0310 20:39:20.502401    6776 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432
	I0310 20:39:20.523875    6776 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432
	I0310 20:39:37.049133    6776 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432: (16.5251602s)
	I0310 20:39:37.049133    6776 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 from cache
	I0310 20:39:37.049133    6776 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172
	I0310 20:39:37.058686    6776 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172
	I0310 20:39:49.100357    6776 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172: (12.0416883s)
	I0310 20:39:49.100357    6776 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 from cache
	I0310 20:39:49.100619    6776 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464
	I0310 20:39:49.110638    6776 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464
	I0310 20:40:11.836483    6776 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464: (22.7258798s)
	I0310 20:40:11.836483    6776 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 from cache
	I0310 20:40:11.836483    6776 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492
	I0310 20:40:11.848418    6776 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492
	I0310 20:41:03.249697    6776 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492: (51.4013566s)
	I0310 20:41:03.249697    6776 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 from cache
	I0310 20:41:03.249697    6776 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856
	I0310 20:41:03.262479    6776 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856
	I0310 20:42:29.112922    6776 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856: (1m25.8502131s)
	I0310 20:42:29.112922    6776 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 from cache
	I0310 20:42:29.113323    6776 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944
	I0310 20:42:29.124191    6776 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944
	I0310 20:43:02.018131    6776 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944: (32.8923025s)
	I0310 20:43:02.018131    6776 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 from cache
	I0310 20:43:02.018131    6776 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440
	I0310 20:43:02.031430    6776 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440
	I0310 20:43:26.007480    6776 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440: (23.976084s)
	I0310 20:43:26.008152    6776 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 from cache
	I0310 20:43:26.008554    6776 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552
	I0310 20:43:26.022245    6776 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552
	I0310 20:43:57.799249    6776 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552: (31.7765479s)
	I0310 20:43:57.799249    6776 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 from cache
	I0310 20:43:57.800007    6776 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748
	I0310 20:43:57.823849    6776 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748
	I0310 20:44:47.948070    6776 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748: (50.1240258s)
	I0310 20:44:47.948218    6776 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 from cache
	I0310 20:44:47.948376    6776 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052
	I0310 20:44:47.956634    6776 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052
	I0310 20:45:14.587574    6776 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052: (26.6305168s)
	I0310 20:45:14.587574    6776 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 from cache
	I0310 20:45:14.587937    6776 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984
	I0310 20:45:14.596952    6776 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984
	I0310 20:45:39.199048    6776 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984: (24.6021279s)
	I0310 20:45:39.199048    6776 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 from cache
	I0310 20:45:39.199048    6776 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088
	I0310 20:45:39.210230    6776 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088
	I0310 20:45:59.071883    6776 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088: (19.8616798s)
	I0310 20:45:59.071883    6776 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 from cache
	I0310 20:45:59.071883    6776 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056
	I0310 20:45:59.079230    6776 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056

                                                
                                                
** /stderr **

=== CONT  TestForceSystemdEnv
docker_test.go:133: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p force-systemd-env-20210310201637-6496 --memory=1800 --alsologtostderr -v=5 --driver=docker" : exit status 1

=== CONT  TestForceSystemdEnv
docker_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-20210310201637-6496 ssh "docker info --format {{.CgroupDriver}}"

=== CONT  TestForceSystemdEnv
docker_test.go:99: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p force-systemd-env-20210310201637-6496 ssh "docker info --format {{.CgroupDriver}}": context deadline exceeded (1.0422ms)

=== CONT  TestForceSystemdEnv
docker_test.go:101: failed to get docker cgroup driver. args "out/minikube-windows-amd64.exe -p force-systemd-env-20210310201637-6496 ssh \"docker info --format {{.CgroupDriver}}\"": context deadline exceeded
docker_test.go:104: expected systemd cgroup driver, got: 

=== CONT  TestForceSystemdEnv
panic.go:617: *** TestForceSystemdEnv FAILED at 2021-03-10 20:46:40.1198287 +0000 GMT m=+6139.822035001
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:226: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======

=== CONT  TestForceSystemdEnv
helpers_test.go:227: (dbg) Run:  docker inspect force-systemd-env-20210310201637-6496

=== CONT  TestForceSystemdEnv
helpers_test.go:231: (dbg) docker inspect force-systemd-env-20210310201637-6496:

-- stdout --
	[
	    {
	        "Id": "52484bfdcd2bb2827394424524d736e1feaefb80c6f0b1e4841f6b9dc342a60e",
	        "Created": "2021-03-10T20:16:56.8741549Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 123824,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-03-10T20:16:59.7376845Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a776c544501ab7f8d55c0f9d8df39bc284df5e744ef1ab4fa59bbd753c98d5f6",
	        "ResolvConfPath": "/var/lib/docker/containers/52484bfdcd2bb2827394424524d736e1feaefb80c6f0b1e4841f6b9dc342a60e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/52484bfdcd2bb2827394424524d736e1feaefb80c6f0b1e4841f6b9dc342a60e/hostname",
	        "HostsPath": "/var/lib/docker/containers/52484bfdcd2bb2827394424524d736e1feaefb80c6f0b1e4841f6b9dc342a60e/hosts",
	        "LogPath": "/var/lib/docker/containers/52484bfdcd2bb2827394424524d736e1feaefb80c6f0b1e4841f6b9dc342a60e/52484bfdcd2bb2827394424524d736e1feaefb80c6f0b1e4841f6b9dc342a60e-json.log",
	        "Name": "/force-systemd-env-20210310201637-6496",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": [
	            "921e462ab72255b76a14cd3ec6af61da9111afdd83fce19694783ee483a25137"
	        ],
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-env-20210310201637-6496:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-env-20210310201637-6496",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 1887436800,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 1887436800,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7471d2ed11b82a3afdd960719b14bb7102a004d9e2ab8500b466dc1134999cb9-init/diff:/var/lib/docker/overlay2/e5312d15a8a915e93a04215cc746ab0f3ee777399eec34d39c62e1159ded6475/diff:/var/lib/docker/overlay2/7a0a882052451678339ecb9d7440ec6d5af2189761df37cbde268f01c77df460/diff:/var/lib/docker/overlay2/746bde091f748e661ba6c95c88447941058b3ed556ae8093cbbb4e946b8cc8fc/diff:/var/lib/docker/overlay2/fc816087ea38e2594891ac87d2dada91401ed5005ffde006568049c9be7a9cb3/diff:/var/lib/docker/overlay2/c938fe5d14accfe7fb1cfd038f5f0c36e7c4e36df3f270be6928a759dfc0318c/diff:/var/lib/docker/overlay2/79f1d61f3af3e525b3f9a25c7bdaddec8275b7d0abab23c045e084119ae755dc/diff:/var/lib/docker/overlay2/90cf1530750ebebdcb04a136dc1eff0ae9c5f7d0f826d2942e3bc67cc0d42206/diff:/var/lib/docker/overlay2/c4edb0a62bb52174a4a17530aa58139aaca1b2f67bef36b9ac59af93c93e2c94/diff:/var/lib/docker/overlay2/684c18bb05910b09352e8708238b2fa6512de91e323e59bd78eacb12954baeb1/diff:/var/lib/docker/overlay2/2e3b944d36fea0e106f5f8885b981d040914c224a656b6480e4ccf00c624527c/diff:/var/lib/docker/overlay2/87489103e2d5dffa2fc32cae802418fe57adcbe87d86e2828e51be8620ef72d6/diff:/var/lib/docker/overlay2/1ceb557df677600f55a42e739f85c2aaa6ce2a1311fb93d5cc655d3e5c25ad62/diff:/var/lib/docker/overlay2/a49e38adf04466e822fa1b2fef7a5de41bb43b706f64d6d440f7e9f95f86634d/diff:/var/lib/docker/overlay2/9dc51530b3628075c33d53a96d8db831d206f65190ca62a4730d525c9d2433bc/diff:/var/lib/docker/overlay2/53f43d443cc953bcbb3b9fe305be45d40de2233dd6a1e94a6e0120c143df01b9/diff:/var/lib/docker/overlay2/5449e801ebacbe613538db4de658f2419f742a0327f7c0f5079ee525f64c2f45/diff:/var/lib/docker/overlay2/ea06a62429770eade2e9df8b04495201586fe6a867b2d3f827db06031f237171/diff:/var/lib/docker/overlay2/d6c5e6e0bb04112b08a45e5a42f364eaa6a1d4d0384b8833c6f268a32b51f9fd/diff:/var/lib/docker/overlay2/6ccb699e5aa7954f17536e35273361a64a06f22e3951b5adad1acd3645302d27/diff:/var/lib/docker/overlay2/004fc7885cd3ddf4f532a6047a393b87947996f738f36ee3e048130b82676264/diff:/var/lib/docker/overlay2/28ff0102771d8fb3c2957c482cc5dde1a6d041144f1be78d385fd4a305b89048/diff:/var/lib/docker/overlay2/7cc8b00a8717390d3cf217bc23ebf0a54386c4dda751f8efe67a113c5f724000/diff:/var/lib/docker/overlay2/4f20f5888dc26fc4b177fad4ffbb2c9e5094bbe36dffc15530b13ee98b5ee0de/diff:/var/lib/docker/overlay2/5070ba0e8a3df0c0cb4c34bb528c4e1aa4cf13d689f2bbbb0a4033d021c3be55/diff:/var/lib/docker/overlay2/85fb29f9d3603bf7f2eb144a096a9fa432e57ecf575b0c26cc8e06a79e791202/diff:/var/lib/docker/overlay2/27786674974d44dc21a18719c8c334da44a78f825923f5e86233f5acb0fb9a6b/diff:/var/lib/docker/overlay2/6161fd89f27732c46f547c0e38ff9f16ab0dded81cef66938df3935f21afad40/diff:/var/lib/docker/overlay2/222f85b8439a41d62b9939a4060303543930a89e81bc2fbb3f5211b3bcc73501/diff:/var/lib/docker/overlay2/0ab769dc143c949b9367c48721351c8571d87199847b23acedce0134a8cf4d0d/diff:/var/lib/docker/overlay2/a3d1b5f3c1ccf55415e08e306d69739f2d6a4346a4b20ff9273ac1417c146a72/diff:/var/lib/docker/overlay2/a6111408a4deb25c6259bdabf49b0b77a4ae38a1c9c76c9dd238ab00e4377edd/diff:/var/lib/docker/overlay2/beb22abaee6a1cc2ce6935aa31a8ea6aa14ba428d35b5c2423d1d9e4b5b1e88e/diff:/var/lib/docker/overlay2/0c228b1d0cd4cc1d3df10a7397af2a91846bfbad745d623822dc200ff7ddd4f7/diff:/var/lib/docker/overlay2/3c7f265116e8b14ac4c171583da8e68ad42c63e9faa735c10d5b01c2889c03d8/diff:/var/lib/docker/overlay2/85c9b77072d5575bcf4bc334d0e85cccec247e2ff21bc8a181ee7c1411481a92/diff:/var/lib/docker/overlay2/1b4797f63f1bf0acad8cd18715c737239cbce844810baf1378b65224c01957ff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7471d2ed11b82a3afdd960719b14bb7102a004d9e2ab8500b466dc1134999cb9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7471d2ed11b82a3afdd960719b14bb7102a004d9e2ab8500b466dc1134999cb9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7471d2ed11b82a3afdd960719b14bb7102a004d9e2ab8500b466dc1134999cb9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-env-20210310201637-6496",
	                "Source": "/var/lib/docker/volumes/force-systemd-env-20210310201637-6496/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-env-20210310201637-6496",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-env-20210310201637-6496",
	                "name.minikube.sigs.k8s.io": "force-systemd-env-20210310201637-6496",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d97d9f3b7e16faf438c3ca351aa0b7bbd0224fe5fc64b2cc13701c1cb961f191",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55096"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55088"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55092"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55090"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d97d9f3b7e16",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-env-20210310201637-6496": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.97"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "52484bfdcd2b",
	                        "force-systemd-env-20210310201637-6496"
	                    ],
	                    "NetworkID": "17dfaca07aa7a7d2be4fd39bae60bce3613009da680318fad8f78aeed68d9463",
	                    "EndpointID": "6a719aeca55593020b1db252937c52887d493da6d29e1f4a51627fdbd079b409",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.97",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:61",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-env-20210310201637-6496 -n force-systemd-env-20210310201637-6496

=== CONT  TestForceSystemdEnv
helpers_test.go:235: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-env-20210310201637-6496 -n force-systemd-env-20210310201637-6496: (49.3761401s)
helpers_test.go:240: <<< TestForceSystemdEnv FAILED: start of post-mortem logs <<<
helpers_test.go:241: ======>  post-mortem[TestForceSystemdEnv]: minikube logs <======
helpers_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-20210310201637-6496 logs -n 25

=== CONT  TestForceSystemdEnv
helpers_test.go:243: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p force-systemd-env-20210310201637-6496 logs -n 25: exit status 110 (2m9.6175843s)

-- stdout --
	* ==> Docker <==
	* -- Logs begin at Wed 2021-03-10 20:17:05 UTC, end at Wed 2021-03-10 20:48:19 UTC. --
	* Mar 10 20:24:51 force-systemd-env-20210310201637-6496 systemd[1]: Starting Docker Application Container Engine...
	* Mar 10 20:24:51 force-systemd-env-20210310201637-6496 dockerd[878]: time="2021-03-10T20:24:51.763450300Z" level=info msg="Starting up"
	* Mar 10 20:24:51 force-systemd-env-20210310201637-6496 dockerd[878]: time="2021-03-10T20:24:51.772166800Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Mar 10 20:24:51 force-systemd-env-20210310201637-6496 dockerd[878]: time="2021-03-10T20:24:51.777450200Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Mar 10 20:24:51 force-systemd-env-20210310201637-6496 dockerd[878]: time="2021-03-10T20:24:51.777691500Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	* Mar 10 20:24:51 force-systemd-env-20210310201637-6496 dockerd[878]: time="2021-03-10T20:24:51.777870800Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Mar 10 20:24:51 force-systemd-env-20210310201637-6496 dockerd[878]: time="2021-03-10T20:24:51.783057300Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Mar 10 20:24:51 force-systemd-env-20210310201637-6496 dockerd[878]: time="2021-03-10T20:24:51.783101200Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Mar 10 20:24:51 force-systemd-env-20210310201637-6496 dockerd[878]: time="2021-03-10T20:24:51.783136600Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	* Mar 10 20:24:51 force-systemd-env-20210310201637-6496 dockerd[878]: time="2021-03-10T20:24:51.783165400Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Mar 10 20:24:54 force-systemd-env-20210310201637-6496 dockerd[878]: time="2021-03-10T20:24:54.937775900Z" level=info msg="Loading containers: start."
	* Mar 10 20:24:56 force-systemd-env-20210310201637-6496 dockerd[878]: time="2021-03-10T20:24:56.740534400Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	* Mar 10 20:24:57 force-systemd-env-20210310201637-6496 dockerd[878]: time="2021-03-10T20:24:57.403681200Z" level=info msg="Loading containers: done."
	* Mar 10 20:24:57 force-systemd-env-20210310201637-6496 dockerd[878]: time="2021-03-10T20:24:57.702833800Z" level=info msg="Docker daemon" commit=46229ca graphdriver(s)=overlay2 version=20.10.3
	* Mar 10 20:24:57 force-systemd-env-20210310201637-6496 dockerd[878]: time="2021-03-10T20:24:57.703766200Z" level=info msg="Daemon has completed initialization"
	* Mar 10 20:24:57 force-systemd-env-20210310201637-6496 systemd[1]: Started Docker Application Container Engine.
	* Mar 10 20:24:58 force-systemd-env-20210310201637-6496 dockerd[878]: time="2021-03-10T20:24:58.214131400Z" level=info msg="API listen on [::]:2376"
	* Mar 10 20:24:58 force-systemd-env-20210310201637-6496 dockerd[878]: time="2021-03-10T20:24:58.360056900Z" level=info msg="API listen on /var/run/docker.sock"
	* Mar 10 20:28:45 force-systemd-env-20210310201637-6496 dockerd[878]: time="2021-03-10T20:28:45.580747900Z" level=error msg="d528b1a5cf0541eccda9cd60cebfe3b2385fb5e28237253cd9b0d3608ac8b068 cleanup: failed to delete container from containerd: cannot delete running task d528b1a5cf0541eccda9cd60cebfe3b2385fb5e28237253cd9b0d3608ac8b068: failed precondition"
	* Mar 10 20:30:30 force-systemd-env-20210310201637-6496 dockerd[878]: time="2021-03-10T20:30:30.555018600Z" level=error msg="Handler for GET /v1.40/containers/5f4642bd4ed0807be79ced853fec45f42deb3a1f9b0530ddd1782785977532e5/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
	* Mar 10 20:30:30 force-systemd-env-20210310201637-6496 dockerd[878]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
	* Mar 10 20:37:27 force-systemd-env-20210310201637-6496 dockerd[878]: time="2021-03-10T20:37:27.143605400Z" level=info msg="ignoring event" container=88d467af7083ca40ace915a87206c7cde98a5c4843bd71e9b7da2ece16034000 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 20:37:54 force-systemd-env-20210310201637-6496 dockerd[878]: time="2021-03-10T20:37:54.655840900Z" level=info msg="ignoring event" container=bfc799a5659be9402828798402d1cef5d8adc2c72f8a7eee69727d7202016659 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 20:41:40 force-systemd-env-20210310201637-6496 dockerd[878]: time="2021-03-10T20:41:40.457289700Z" level=info msg="ignoring event" container=07386471f765d8f7b409331ee62ff36b1755c4048013e4c0ab43d75a83e48768 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 20:47:07 force-systemd-env-20210310201637-6496 dockerd[878]: time="2021-03-10T20:47:07.677197500Z" level=info msg="ignoring event" container=0ae94b4c54652c2da7c7382d45fbb7c22beacae09fdb719dcb40c5f614246506 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	* 8393d589254d5       85069258b98ac       50 seconds ago      Created             storage-provisioner       3                   91af7b2a7034f
	* 0ae94b4c54652       85069258b98ac       5 minutes ago       Exited              storage-provisioner       2                   91af7b2a7034f
	* 4429474d6f454       bfe3a36ebd252       9 minutes ago       Running             coredns                   0                   6d13e33364814
	* 8ce32552e7500       43154ddb57a83       13 minutes ago      Running             kube-proxy                0                   28fbe2a1ecf9e
	* 5f4642bd4ed08       a27166429d98e       19 minutes ago      Running             kube-controller-manager   1                   8f0b8f8db53da
	* 64f81c0b478cb       ed2c44fbdd78b       21 minutes ago      Running             kube-scheduler            0                   c91cc9eab36a1
	* 6bffc962fb06a       a8c2fdb8bf76e       21 minutes ago      Running             kube-apiserver            0                   3a553b477322b
	* bbcfa75fc640d       0369cf4303ffd       21 minutes ago      Running             etcd                      0                   ec16420ba4b7f
	* 
	* ==> coredns [4429474d6f45] <==
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	* CoreDNS-1.7.0
	* linux/amd64, go1.14.4, f59c03d
	* 
	* ==> describe nodes <==
	* Name:               force-systemd-env-20210310201637-6496
	* Roles:              control-plane,master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=force-systemd-env-20210310201637-6496
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=4d52d607da107e3e4541e40b81376520ee87d4c2
	*                     minikube.k8s.io/name=force-systemd-env-20210310201637-6496
	*                     minikube.k8s.io/updated_at=2021_03_10T20_30_10_0700
	*                     minikube.k8s.io/version=v1.18.1
	*                     node-role.kubernetes.io/control-plane=
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Wed, 10 Mar 2021 20:28:38 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  force-systemd-env-20210310201637-6496
	*   AcquireTime:     <unset>
	*   RenewTime:       Wed, 10 Mar 2021 20:48:39 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Wed, 10 Mar 2021 20:44:38 +0000   Wed, 10 Mar 2021 20:44:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Wed, 10 Mar 2021 20:44:38 +0000   Wed, 10 Mar 2021 20:44:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Wed, 10 Mar 2021 20:44:38 +0000   Wed, 10 Mar 2021 20:44:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Wed, 10 Mar 2021 20:44:38 +0000   Wed, 10 Mar 2021 20:44:38 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  192.168.49.97
	*   Hostname:    force-systemd-env-20210310201637-6496
	* Capacity:
	*   cpu:                4
	*   ephemeral-storage:  65792556Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             20481980Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                4
	*   ephemeral-storage:  65792556Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             20481980Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 84fb46bd39d2483a97ab4430ee4a5e3a
	*   System UUID:                55964a48-3e82-4ae7-87d8-0fafd69707ee
	*   Boot ID:                    1e43cb90-c73a-415b-9855-33dabbdc5a83
	*   Kernel Version:             4.19.121-linuxkit
	*   OS Image:                   Ubuntu 20.04.1 LTS
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://20.10.3
	*   Kubelet Version:            v1.20.2
	*   Kube-Proxy Version:         v1.20.2
	* PodCIDR:                      10.244.0.0/24
	* PodCIDRs:                     10.244.0.0/24
	* Non-terminated Pods:          (7 in total)
	*   Namespace                   Name                                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                                                             ------------  ----------  ---------------  -------------  ---
	*   kube-system                 coredns-74ff55c5b-66jg8                                          100m (2%)     0 (0%)      70Mi (0%)        170Mi (0%)     16m
	*   kube-system                 etcd-force-systemd-env-20210310201637-6496                       100m (2%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	*   kube-system                 kube-apiserver-force-systemd-env-20210310201637-6496             250m (6%)     0 (0%)      0 (0%)           0 (0%)         19m
	*   kube-system                 kube-controller-manager-force-systemd-env-20210310201637-6496    200m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	*   kube-system                 kube-proxy-wrv6s                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	*   kube-system                 kube-scheduler-force-systemd-env-20210310201637-6496             100m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	*   kube-system                 storage-provisioner                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests    Limits
	*   --------           --------    ------
	*   cpu                750m (18%)  0 (0%)
	*   memory             170Mi (0%)  170Mi (0%)
	*   ephemeral-storage  100Mi (0%)  0 (0%)
	*   hugepages-1Gi      0 (0%)      0 (0%)
	*   hugepages-2Mi      0 (0%)      0 (0%)
	* Events:
	*   Type    Reason                   Age                  From        Message
	*   ----    ------                   ----                 ----        -------
	*   Normal  Starting                 16m                  kubelet     Starting kubelet.
	*   Normal  NodeNotReady             15m                  kubelet     Node force-systemd-env-20210310201637-6496 status is now: NodeNotReady
	*   Normal  NodeAllocatableEnforced  15m                  kubelet     Updated Node Allocatable limit across pods
	*   Normal  Starting                 11m                  kube-proxy  Starting kube-proxy.
	*   Normal  NodeHasNoDiskPressure    4m11s (x2 over 16m)  kubelet     Node force-systemd-env-20210310201637-6496 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     4m11s (x2 over 16m)  kubelet     Node force-systemd-env-20210310201637-6496 status is now: NodeHasSufficientPID
	*   Normal  NodeReady                4m11s (x2 over 15m)  kubelet     Node force-systemd-env-20210310201637-6496 status is now: NodeReady
	*   Normal  NodeHasSufficientMemory  4m11s (x2 over 16m)  kubelet     Node force-systemd-env-20210310201637-6496 status is now: NodeHasSufficientMemory
	* 
	* ==> dmesg <==
	* [  +0.000006]  __hrtimer_run_queues+0x117/0x1c4
	* [  +0.000004]  ? ktime_get_update_offsets_now+0x36/0x95
	* [  +0.000002]  hrtimer_interrupt+0x92/0x165
	* [  +0.000004]  hv_stimer0_isr+0x20/0x2d
	* [  +0.000008]  hv_stimer0_vector_handler+0x3b/0x57
	* [  +0.000010]  hv_stimer0_callback_vector+0xf/0x20
	* [  +0.000001]  </IRQ>
	* [  +0.000002] RIP: 0010:native_safe_halt+0x7/0x8
	* [  +0.000002] Code: 60 02 df f0 83 44 24 fc 00 48 8b 00 a8 08 74 0b 65 81 25 dd ce 6f 71 ff ff ff 7f c3 e8 ce e6 72 ff f4 c3 e8 c7 e6 72 ff fb f4 <c3> 0f 1f 44 00 00 53 e8 69 0e 82 ff 65 8b 35 83 64 6f 71 31 ff e8
	* [  +0.000001] RSP: 0018:ffffffff8f203eb0 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff12
	* [  +0.000002] RAX: ffffffff8e918b30 RBX: 0000000000000000 RCX: ffffffff8f253150
	* [  +0.000001] RDX: 000000000012167e RSI: 0000000000000000 RDI: 0000000000000001
	* [  +0.000001] RBP: 0000000000000000 R08: 00000066a1710248 R09: 0000006be2541d3e
	* [  +0.000001] R10: ffff9130ad802288 R11: 0000000000000000 R12: 0000000000000000
	* [  +0.000001] R13: ffffffff8f215780 R14: 00000000f6d76244 R15: 0000000000000000
	* [  +0.000002]  ? __sched_text_end+0x1/0x1
	* [  +0.000011]  default_idle+0x1b/0x2c
	* [  +0.000001]  do_idle+0xe5/0x216
	* [  +0.000003]  cpu_startup_entry+0x6f/0x71
	* [  +0.000003]  start_kernel+0x4f6/0x514
	* [  +0.000006]  secondary_startup_64+0xa4/0xb0
	* [  +0.000006] ---[ end trace 8aa9ce4b885e8e86 ]---
	* [ +25.977799] hrtimer: interrupt took 3356400 ns
	* [Mar10 19:08] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	* [Mar10 19:49] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	* 
	* ==> etcd [bbcfa75fc640] <==
	* 2021-03-10 20:47:19.135434 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (210.0716ms) to execute
	* 2021-03-10 20:47:19.151093 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (148.9691ms) to execute
	* 2021-03-10 20:47:19.151445 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (149.2393ms) to execute
	* 2021-03-10 20:47:19.220495 W | etcdserver: request "header:<ID:10490704451482568358 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/force-systemd-env-20210310201637-6496\" mod_revision:843 > success:<request_put:<key:\"/registry/leases/kube-node-lease/force-systemd-env-20210310201637-6496\" value_size:622 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/force-systemd-env-20210310201637-6496\" > >>" with result "size:16" took too long (183.8353ms) to execute
	* 2021-03-10 20:47:23.625056 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 20:47:23.762368 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (131.81ms) to execute
	* 2021-03-10 20:47:23.785381 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:421" took too long (130.7789ms) to execute
	* 2021-03-10 20:47:30.826985 W | etcdserver: request "header:<ID:10490704451482568405 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:729 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:3881 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >>" with result "size:16" took too long (105.6115ms) to execute
	* 2021-03-10 20:47:32.069535 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-74ff55c5b-66jg8\" " with result "range_response_count:1 size:4651" took too long (381.0386ms) to execute
	* 2021-03-10 20:47:32.264393 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 20:47:42.207346 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 20:47:43.493562 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:421" took too long (148.4212ms) to execute
	* 2021-03-10 20:47:46.960049 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (154.6982ms) to execute
	* 2021-03-10 20:47:53.165281 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 20:48:01.324430 W | etcdserver: read-only range request "key:\"/registry/persistentvolumes/\" range_end:\"/registry/persistentvolumes0\" count_only:true " with result "range_response_count:0 size:5" took too long (1.3796693s) to execute
	* 2021-03-10 20:48:01.705303 W | etcdserver: read-only range request "key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (990.269ms) to execute
	* 2021-03-10 20:48:04.408509 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 20:48:11.760708 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 20:48:23.022902 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 20:48:23.150320 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-node-lease\" " with result "range_response_count:1 size:271" took too long (109.7086ms) to execute
	* 2021-03-10 20:48:31.770008 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 20:48:42.447316 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 20:48:48.763916 W | etcdserver: read-only range request "key:\"/registry/events/\" range_end:\"/registry/events0\" " with result "range_response_count:72 size:61569" took too long (103.6127ms) to execute
	* 2021-03-10 20:48:48.889299 W | etcdserver: read-only range request "key:\"/registry/flowschemas/exempt\" " with result "range_response_count:1 size:879" took too long (116.1969ms) to execute
	* 2021-03-10 20:48:51.847026 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 
	* ==> kernel <==
	*  20:48:56 up  1:49,  0 users,  load average: 180.55, 172.44, 136.77
	* Linux force-systemd-env-20210310201637-6496 4.19.121-linuxkit #1 SMP Tue Dec 1 17:50:32 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [6bffc962fb06] <==
	* Trace[1681171695]: ---"About to convert to expected version" 1320ms (20:47:00.861)
	* Trace[1681171695]: [1.3483021s] [1.3483021s] END
	* I0310 20:47:32.281971       1 trace.go:205] Trace[1467139205]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-66jg8,user-agent:kubelet/v1.20.2 (linux/amd64) kubernetes/faecb19,client:192.168.49.97 (10-Mar-2021 20:47:31.627) (total time: 651ms):
	* Trace[1467139205]: ---"About to write a response" 638ms (20:47:00.266)
	* Trace[1467139205]: [651.1388ms] [651.1388ms] END
	* I0310 20:47:54.178610       1 client.go:360] parsed scheme: "passthrough"
	* I0310 20:47:54.179203       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 20:47:54.179249       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* I0310 20:47:58.367253       1 trace.go:205] Trace[743295784]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (10-Mar-2021 20:47:57.457) (total time: 909ms):
	* Trace[743295784]: ---"initial value restored" 128ms (20:47:00.585)
	* Trace[743295784]: ---"Transaction prepared" 331ms (20:47:00.916)
	* Trace[743295784]: ---"Transaction committed" 450ms (20:47:00.367)
	* Trace[743295784]: [909.6552ms] [909.6552ms] END
	* I0310 20:48:01.712311       1 trace.go:205] Trace[203978401]: "List etcd3" key:/cronjobs,resourceVersion:,resourceVersionMatch:,limit:500,continue: (10-Mar-2021 20:48:00.088) (total time: 1623ms):
	* Trace[203978401]: [1.6237873s] [1.6237873s] END
	* I0310 20:48:01.712511       1 trace.go:205] Trace[851948031]: "List" url:/apis/batch/v1beta1/cronjobs,user-agent:kube-controller-manager/v1.20.2 (linux/amd64) kubernetes/faecb19/system:serviceaccount:kube-system:cronjob-controller,client:192.168.49.97 (10-Mar-2021 20:48:00.088) (total time: 1624ms):
	* Trace[851948031]: ---"Listing from storage done" 1623ms (20:48:00.712)
	* Trace[851948031]: [1.6240975s] [1.6240975s] END
	* I0310 20:48:04.592947       1 trace.go:205] Trace[1183423924]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (10-Mar-2021 20:48:03.426) (total time: 1166ms):
	* Trace[1183423924]: ---"initial value restored" 1015ms (20:48:00.441)
	* Trace[1183423924]: ---"Transaction committed" 134ms (20:48:00.592)
	* Trace[1183423924]: [1.1666035s] [1.1666035s] END
	* I0310 20:48:36.743615       1 client.go:360] parsed scheme: "passthrough"
	* I0310 20:48:36.744099       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 20:48:36.744148       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* 
	* ==> kube-controller-manager [5f4642bd4ed0] <==
	* I0310 20:32:16.482504       1 range_allocator.go:373] Set node force-systemd-env-20210310201637-6496 PodCIDR to [10.244.0.0/24]
	* I0310 20:32:16.514560       1 shared_informer.go:247] Caches are synced for resource quota 
	* I0310 20:32:16.661546       1 shared_informer.go:247] Caches are synced for resource quota 
	* I0310 20:32:17.461712       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	* I0310 20:32:18.079009       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I0310 20:32:18.129223       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I0310 20:32:18.129274       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* E0310 20:32:22.405168       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	* I0310 20:32:22.715475       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 1"
	* I0310 20:32:23.629905       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-wrv6s"
	* I0310 20:32:24.891211       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-66jg8"
	* E0310 20:32:26.661436       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"808ead4e-fc1b-4f6e-a2c9-c76c9fdf0236", ResourceVersion:"283", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751005009, loc:(*time.Location)(0x6f31360)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0019351a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0019351c0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0019351e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00149ba40), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001935200), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001935220), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.2", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001935260)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00142fe60), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0009fb4a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000428a80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00000fce8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0009fb4f8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	* I0310 20:32:56.148791       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	* I0310 20:33:36.404697       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	* I0310 20:42:16.599209       1 event.go:291] "Event occurred" object="force-systemd-env-20210310201637-6496" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node force-systemd-env-20210310201637-6496 status is now: NodeNotReady"
	* I0310 20:42:17.904856       1 event.go:291] "Event occurred" object="kube-system/kube-scheduler-force-systemd-env-20210310201637-6496" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	* I0310 20:42:19.553862       1 event.go:291] "Event occurred" object="kube-system/kube-proxy-wrv6s" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	* I0310 20:42:21.170199       1 event.go:291] "Event occurred" object="kube-system/storage-provisioner" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	* I0310 20:42:21.978890       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager-force-systemd-env-20210310201637-6496" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	* I0310 20:42:22.162503       1 event.go:291] "Event occurred" object="kube-system/etcd-force-systemd-env-20210310201637-6496" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	* I0310 20:42:22.558762       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	* I0310 20:42:22.559170       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-force-systemd-env-20210310201637-6496" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	* I0310 20:44:43.832209       1 event.go:291] "Event occurred" object="kube-system/storage-provisioner" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	* I0310 20:44:43.832325       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b-66jg8" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-74ff55c5b-66jg8"
	* I0310 20:44:43.937889       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	* 
	* ==> kube-proxy [8ce32552e750] <==
	* I0310 20:37:26.839801       1 node.go:172] Successfully retrieved node IP: 192.168.49.97
	* I0310 20:37:26.840519       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.49.97), assume IPv4 operation
	* W0310 20:37:31.178667       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	* I0310 20:37:31.178892       1 server_others.go:185] Using iptables Proxier.
	* I0310 20:37:31.188328       1 server.go:650] Version: v1.20.2
	* I0310 20:37:31.190022       1 conntrack.go:52] Setting nf_conntrack_max to 131072
	* I0310 20:37:31.196321       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I0310 20:37:31.196709       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I0310 20:37:31.213417       1 config.go:315] Starting service config controller
	* I0310 20:37:31.233936       1 shared_informer.go:240] Waiting for caches to sync for service config
	* I0310 20:37:31.252815       1 config.go:224] Starting endpoint slice config controller
	* I0310 20:37:31.252845       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	* I0310 20:37:31.580535       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	* I0310 20:37:31.580674       1 shared_informer.go:247] Caches are synced for service config 
	* I0310 20:39:28.240106       1 trace.go:205] Trace[929469750]: "iptables restore" (10-Mar-2021 20:39:25.068) (total time: 3171ms):
	* Trace[929469750]: [3.1715913s] [3.1715913s] END
	* I0310 20:42:07.262284       1 trace.go:205] Trace[1511743638]: "iptables Monitor CANARY check" (10-Mar-2021 20:42:01.830) (total time: 5314ms):
	* Trace[1511743638]: [5.3143851s] [5.3143851s] END
	* I0310 20:47:10.015486       1 trace.go:205] Trace[320455354]: "iptables Monitor CANARY check" (10-Mar-2021 20:47:02.983) (total time: 6765ms):
	* Trace[320455354]: [6.7654352s] [6.7654352s] END
	* I0310 20:48:05.125267       1 trace.go:205] Trace[1212733782]: "iptables Monitor CANARY check" (10-Mar-2021 20:48:01.809) (total time: 3315ms):
	* Trace[1212733782]: [3.3157818s] [3.3157818s] END
	* 
	* ==> kube-scheduler [64f81c0b478c] <==
	* E0310 20:28:43.052763       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E0310 20:28:43.072557       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E0310 20:28:45.261643       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E0310 20:28:45.887643       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E0310 20:28:45.970798       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E0310 20:28:47.258911       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E0310 20:28:47.259035       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E0310 20:28:47.261375       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E0310 20:28:47.261743       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E0310 20:28:47.261772       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E0310 20:28:47.308604       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E0310 20:28:47.469077       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E0310 20:28:47.498863       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E0310 20:28:48.507513       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E0310 20:28:54.046340       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E0310 20:28:54.307620       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E0310 20:28:55.589955       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E0310 20:28:55.895781       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E0310 20:28:56.457061       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E0310 20:28:56.505740       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E0310 20:28:57.212804       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E0310 20:28:57.342070       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E0310 20:28:57.609892       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* I0310 20:29:13.480321       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* http2: server: error reading preface from client 127.0.0.1:33580: read tcp 127.0.0.1:10259->127.0.0.1:33580: read: connection reset by peer
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-03-10 20:17:05 UTC, end at Wed 2021-03-10 20:49:15 UTC. --
	* Mar 10 20:44:10 force-systemd-env-20210310201637-6496 kubelet[3681]: Trace[628445035]: ---"Objects listed" 34139ms (20:44:00.512)
	* Mar 10 20:44:10 force-systemd-env-20210310201637-6496 kubelet[3681]: Trace[628445035]: [34.1394872s] [34.1394872s] END
	* Mar 10 20:44:10 force-systemd-env-20210310201637-6496 kubelet[3681]: I0310 20:44:10.535365    3681 trace.go:205] Trace[840134810]: "Reflector ListAndWatch" name:object-"kube-system"/"kube-proxy" (10-Mar-2021 20:43:47.799) (total time: 22735ms):
	* Mar 10 20:44:10 force-systemd-env-20210310201637-6496 kubelet[3681]: Trace[840134810]: ---"Objects listed" 22735ms (20:44:00.535)
	* Mar 10 20:44:10 force-systemd-env-20210310201637-6496 kubelet[3681]: Trace[840134810]: [22.735663s] [22.735663s] END
	* Mar 10 20:44:10 force-systemd-env-20210310201637-6496 kubelet[3681]: I0310 20:44:10.535668    3681 trace.go:205] Trace[220460744]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (10-Mar-2021 20:43:46.760) (total time: 23775ms):
	* Mar 10 20:44:10 force-systemd-env-20210310201637-6496 kubelet[3681]: Trace[220460744]: ---"Objects listed" 23775ms (20:44:00.535)
	* Mar 10 20:44:10 force-systemd-env-20210310201637-6496 kubelet[3681]: Trace[220460744]: [23.7754234s] [23.7754234s] END
	* Mar 10 20:44:10 force-systemd-env-20210310201637-6496 kubelet[3681]: I0310 20:44:10.690421    3681 trace.go:205] Trace[1545893772]: "Reflector ListAndWatch" name:k8s.io/kubernetes/pkg/kubelet/kubelet.go:438 (10-Mar-2021 20:43:45.696) (total time: 24994ms):
	* Mar 10 20:44:10 force-systemd-env-20210310201637-6496 kubelet[3681]: Trace[1545893772]: ---"Objects listed" 24993ms (20:44:00.690)
	* Mar 10 20:44:10 force-systemd-env-20210310201637-6496 kubelet[3681]: Trace[1545893772]: [24.9940193s] [24.9940193s] END
	* Mar 10 20:44:11 force-systemd-env-20210310201637-6496 kubelet[3681]: I0310 20:44:11.213423    3681 trace.go:205] Trace[672470624]: "Reflector ListAndWatch" name:object-"kube-system"/"coredns" (10-Mar-2021 20:43:43.864) (total time: 27349ms):
	* Mar 10 20:44:11 force-systemd-env-20210310201637-6496 kubelet[3681]: Trace[672470624]: ---"Objects listed" 27349ms (20:44:00.213)
	* Mar 10 20:44:11 force-systemd-env-20210310201637-6496 kubelet[3681]: Trace[672470624]: [27.3496383s] [27.3496383s] END
	* Mar 10 20:44:11 force-systemd-env-20210310201637-6496 kubelet[3681]: I0310 20:44:11.696730    3681 trace.go:205] Trace[1300996704]: "Reflector ListAndWatch" name:k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46 (10-Mar-2021 20:43:42.319) (total time: 29377ms):
	* Mar 10 20:44:11 force-systemd-env-20210310201637-6496 kubelet[3681]: Trace[1300996704]: ---"Objects listed" 29377ms (20:44:00.696)
	* Mar 10 20:44:11 force-systemd-env-20210310201637-6496 kubelet[3681]: Trace[1300996704]: [29.3776869s] [29.3776869s] END
	* Mar 10 20:46:29 force-systemd-env-20210310201637-6496 kubelet[3681]: I0310 20:46:29.887756    3681 trace.go:205] Trace[867386394]: "iptables Monitor CANARY check" (10-Mar-2021 20:46:26.906) (total time: 2981ms):
	* Mar 10 20:46:29 force-systemd-env-20210310201637-6496 kubelet[3681]: Trace[867386394]: [2.9810944s] [2.9810944s] END
	* Mar 10 20:47:29 force-systemd-env-20210310201637-6496 kubelet[3681]: I0310 20:47:29.109569    3681 scope.go:95] [topologymanager] RemoveContainer - Container ID: 0ae94b4c54652c2da7c7382d45fbb7c22beacae09fdb719dcb40c5f614246506
	* Mar 10 20:47:29 force-systemd-env-20210310201637-6496 kubelet[3681]: I0310 20:47:29.227001    3681 scope.go:95] [topologymanager] RemoveContainer - Container ID: 07386471f765d8f7b409331ee62ff36b1755c4048013e4c0ab43d75a83e48768
	* Mar 10 20:47:45 force-systemd-env-20210310201637-6496 kubelet[3681]: W0310 20:47:45.189855    3681 pod_container_deletor.go:79] Container "07386471f765d8f7b409331ee62ff36b1755c4048013e4c0ab43d75a83e48768" not found in pod's containers
	* Mar 10 20:48:11 force-systemd-env-20210310201637-6496 kubelet[3681]: W0310 20:48:11.380634    3681 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	* Mar 10 20:48:11 force-systemd-env-20210310201637-6496 kubelet[3681]: W0310 20:48:11.441006    3681 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
	* Mar 10 20:48:26 force-systemd-env-20210310201637-6496 kubelet[3681]: I0310 20:48:26.401161    3681 scope.go:95] [topologymanager] RemoveContainer - Container ID: 0ae94b4c54652c2da7c7382d45fbb7c22beacae09fdb719dcb40c5f614246506
	* 
	* ==> storage-provisioner [0ae94b4c5465] <==
	* 
	* ==> storage-provisioner [8393d589254d] <==
	* 
	* ==> Audit <==
	* |---------|------------------------------------------|------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| Command |                   Args                   |                 Profile                  |          User           | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------|------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                       | multinode-20210310194323-6496-m03        | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:02:30 GMT | Wed, 10 Mar 2021 20:02:41 GMT |
	|         | multinode-20210310194323-6496-m03        |                                          |                         |         |                               |                               |
	| -p      | multinode-20210310194323-6496            | multinode-20210310194323-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:02:45 GMT | Wed, 10 Mar 2021 20:02:59 GMT |
	|         | logs -n 25                               |                                          |                         |         |                               |                               |
	| delete  | -p                                       | multinode-20210310194323-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:03:05 GMT | Wed, 10 Mar 2021 20:03:22 GMT |
	|         | multinode-20210310194323-6496            |                                          |                         |         |                               |                               |
	| start   | -p                                       | test-preload-20210310200323-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:03:23 GMT | Wed, 10 Mar 2021 20:06:49 GMT |
	|         | test-preload-20210310200323-6496         |                                          |                         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr          |                                          |                         |         |                               |                               |
	|         | --wait=true --preload=false              |                                          |                         |         |                               |                               |
	|         | --driver=docker                          |                                          |                         |         |                               |                               |
	|         | --kubernetes-version=v1.17.0             |                                          |                         |         |                               |                               |
	| ssh     | -p                                       | test-preload-20210310200323-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:06:50 GMT | Wed, 10 Mar 2021 20:06:54 GMT |
	|         | test-preload-20210310200323-6496         |                                          |                         |         |                               |                               |
	|         | -- docker pull busybox                   |                                          |                         |         |                               |                               |
	| start   | -p                                       | test-preload-20210310200323-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:06:54 GMT | Wed, 10 Mar 2021 20:08:51 GMT |
	|         | test-preload-20210310200323-6496         |                                          |                         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr          |                                          |                         |         |                               |                               |
	|         | -v=1 --wait=true --driver=docker         |                                          |                         |         |                               |                               |
	|         | --kubernetes-version=v1.17.3             |                                          |                         |         |                               |                               |
	| ssh     | -p                                       | test-preload-20210310200323-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:08:51 GMT | Wed, 10 Mar 2021 20:08:54 GMT |
	|         | test-preload-20210310200323-6496         |                                          |                         |         |                               |                               |
	|         | -- docker images                         |                                          |                         |         |                               |                               |
	| delete  | -p                                       | test-preload-20210310200323-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:08:54 GMT | Wed, 10 Mar 2021 20:09:05 GMT |
	|         | test-preload-20210310200323-6496         |                                          |                         |         |                               |                               |
	| start   | -p                                       | scheduled-stop-20210310200905-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:09:06 GMT | Wed, 10 Mar 2021 20:11:51 GMT |
	|         | scheduled-stop-20210310200905-6496       |                                          |                         |         |                               |                               |
	|         | --memory=1900 --driver=docker            |                                          |                         |         |                               |                               |
	| stop    | -p                                       | scheduled-stop-20210310200905-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:11:52 GMT | Wed, 10 Mar 2021 20:11:54 GMT |
	|         | scheduled-stop-20210310200905-6496       |                                          |                         |         |                               |                               |
	|         | --schedule 5m                            |                                          |                         |         |                               |                               |
	| ssh     | -p                                       | scheduled-stop-20210310200905-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:11:57 GMT | Wed, 10 Mar 2021 20:11:59 GMT |
	|         | scheduled-stop-20210310200905-6496       |                                          |                         |         |                               |                               |
	|         | -- sudo systemctl show                   |                                          |                         |         |                               |                               |
	|         | minikube-scheduled-stop --no-page        |                                          |                         |         |                               |                               |
	| stop    | -p                                       | scheduled-stop-20210310200905-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:12:00 GMT | Wed, 10 Mar 2021 20:12:02 GMT |
	|         | scheduled-stop-20210310200905-6496       |                                          |                         |         |                               |                               |
	|         | --schedule 5s                            |                                          |                         |         |                               |                               |
	| delete  | -p                                       | scheduled-stop-20210310200905-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:12:26 GMT | Wed, 10 Mar 2021 20:12:35 GMT |
	|         | scheduled-stop-20210310200905-6496       |                                          |                         |         |                               |                               |
	| start   | -p                                       | skaffold-20210310201235-6496             | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:12:37 GMT | Wed, 10 Mar 2021 20:15:24 GMT |
	|         | skaffold-20210310201235-6496             |                                          |                         |         |                               |                               |
	|         | --memory=2600 --driver=docker            |                                          |                         |         |                               |                               |
	| -p      | skaffold-20210310201235-6496             | skaffold-20210310201235-6496             | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:15:28 GMT | Wed, 10 Mar 2021 20:15:41 GMT |
	|         | logs -n 25                               |                                          |                         |         |                               |                               |
	| delete  | -p                                       | skaffold-20210310201235-6496             | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:15:46 GMT | Wed, 10 Mar 2021 20:15:57 GMT |
	|         | skaffold-20210310201235-6496             |                                          |                         |         |                               |                               |
	| delete  | -p                                       | insufficient-storage-20210310201557-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:16:29 GMT | Wed, 10 Mar 2021 20:16:37 GMT |
	|         | insufficient-storage-20210310201557-6496 |                                          |                         |         |                               |                               |
	| delete  | -p pause-20210310201637-6496             | pause-20210310201637-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:32:24 GMT | Wed, 10 Mar 2021 20:32:49 GMT |
	| -p      | offline-docker-20210310201637-6496       | offline-docker-20210310201637-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:32:04 GMT | Wed, 10 Mar 2021 20:33:57 GMT |
	|         | logs -n 25                               |                                          |                         |         |                               |                               |
	| delete  | -p                                       | offline-docker-20210310201637-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:34:20 GMT | Wed, 10 Mar 2021 20:34:47 GMT |
	|         | offline-docker-20210310201637-6496       |                                          |                         |         |                               |                               |
	| stop    | -p                                       | kubernetes-upgrade-20210310201637-6496   | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:39:52 GMT | Wed, 10 Mar 2021 20:40:10 GMT |
	|         | kubernetes-upgrade-20210310201637-6496   |                                          |                         |         |                               |                               |
	| start   | -p nospam-20210310201637-6496            | nospam-20210310201637-6496               | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:16:38 GMT | Wed, 10 Mar 2021 20:40:39 GMT |
	|         | -n=1 --memory=2250                       |                                          |                         |         |                               |                               |
	|         | --wait=false --driver=docker             |                                          |                         |         |                               |                               |
	| -p      | nospam-20210310201637-6496               | nospam-20210310201637-6496               | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:41:42 GMT | Wed, 10 Mar 2021 20:44:25 GMT |
	|         | logs -n 25                               |                                          |                         |         |                               |                               |
	| delete  | -p nospam-20210310201637-6496            | nospam-20210310201637-6496               | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:44:37 GMT | Wed, 10 Mar 2021 20:44:59 GMT |
	| -p      | docker-flags-20210310201637-6496         | docker-flags-20210310201637-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:47:18 GMT | Wed, 10 Mar 2021 20:49:03 GMT |
	|         | logs -n 25                               |                                          |                         |         |                               |                               |
	|---------|------------------------------------------|------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/03/10 20:45:00
	* Running on machine: windows-server-1
	* Binary: Built with gc go1.16 for windows/amd64
	* Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	* I0310 20:45:00.255205   12928 out.go:239] Setting OutFile to fd 1756 ...
	* I0310 20:45:00.257201   12928 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 20:45:00.257201   12928 out.go:252] Setting ErrFile to fd 1704...
	* I0310 20:45:00.257201   12928 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 20:45:00.274317   12928 out.go:246] Setting JSON to false
	* I0310 20:45:00.277206   12928 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":34566,"bootTime":1615374534,"procs":122,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	* W0310 20:45:00.277206   12928 start.go:116] gopshost.Virtualization returned error: not implemented yet
	* I0310 20:45:00.282208   12928 out.go:129] * [old-k8s-version-20210310204459-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	* I0310 20:44:56.695722    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:44:57.216907    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:44:57.695019    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:44:58.201057    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:44:58.705627    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:44:59.202503    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:44:59.691383    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:00.205977    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:00.706532    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:01.201838    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:00.284207   12928 out.go:129]   - MINIKUBE_LOCATION=10722
	* I0310 20:45:00.289221   12928 driver.go:323] Setting default libvirt URI to qemu:///system
	* I0310 20:45:00.824356   12928 docker.go:119] docker version: linux-20.10.2
	* I0310 20:45:00.836917   12928 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 20:45:01.866558   12928 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0296417s)
	* I0310 20:45:01.869067   12928 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:105 OomKillDisable:true NGoroutines:91 SystemTime:2021-03-10 20:45:01.3861684 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 20:45:01.873618   12928 out.go:129] * Using the docker driver based on user configuration
	* I0310 20:45:01.873897   12928 start.go:276] selected driver: docker
	* I0310 20:45:01.873897   12928 start.go:718] validating driver "docker" against <nil>
	* I0310 20:45:01.873897   12928 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	* I0310 20:45:03.020868   12928 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 20:45:04.040267   12928 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0194001s)
	* I0310 20:45:04.040939   12928 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:107 OomKillDisable:true NGoroutines:91 SystemTime:2021-03-10 20:45:03.5743498 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 20:45:04.041630   12928 start_flags.go:253] no existing cluster config was found, will generate one from the flags 
	* I0310 20:45:04.041988   12928 start_flags.go:717] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	* I0310 20:45:04.042380   12928 cni.go:74] Creating CNI manager for ""
	* I0310 20:45:04.042380   12928 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	* I0310 20:45:04.042380   12928 start_flags.go:398] config:
	* {Name:old-k8s-version-20210310204459-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210310204459-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 20:45:04.046348   12928 out.go:129] * Starting control plane node old-k8s-version-20210310204459-6496 in cluster old-k8s-version-20210310204459-6496
	* I0310 20:45:04.711858   12928 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	* I0310 20:45:04.711858   12928 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	* I0310 20:45:04.712630   12928 preload.go:97] Checking if preload exists for k8s version v1.14.0 and runtime docker
	* I0310 20:45:04.712910   12928 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4
	* I0310 20:45:04.713093   12928 cache.go:54] Caching tarball of preloaded images
	* I0310 20:45:04.713093   12928 preload.go:131] Found C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	* I0310 20:45:04.713312   12928 cache.go:57] Finished verifying existence of preloaded tar for  v1.14.0 on docker
	* I0310 20:45:04.713312   12928 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\config.json ...
	* I0310 20:45:04.713879   12928 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\config.json: {Name:mkb0c21784bf43313016b1fffce280513139bf15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 20:45:04.728555   12928 cache.go:185] Successfully downloaded all kic artifacts
	* I0310 20:45:04.729352   12928 start.go:313] acquiring machines lock for old-k8s-version-20210310204459-6496: {Name:mk75b6b2b8c7e9551ee9b4fdfdcee0e639bfef0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:45:04.729624   12928 start.go:317] acquired machines lock for "old-k8s-version-20210310204459-6496" in 271.7µs
	* I0310 20:45:04.730175   12928 start.go:89] Provisioning new machine with config: &{Name:old-k8s-version-20210310204459-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210310204459-6496 Namespace:default APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	* I0310 20:45:04.730381   12928 start.go:126] createHost starting for "" (driver="docker")
	* I0310 20:45:04.732579   12928 out.go:150] * Creating docker container (CPUs=2, Memory=2200MB) ...
	* I0310 20:45:04.733599   12928 start.go:160] libmachine.API.Create for "old-k8s-version-20210310204459-6496" (driver="docker")
	* I0310 20:45:04.733599   12928 client.go:168] LocalClient.Create starting
	* I0310 20:45:04.734591   12928 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\ca.pem
	* I0310 20:45:04.734591   12928 main.go:121] libmachine: Decoding PEM data...
	* I0310 20:45:04.734591   12928 main.go:121] libmachine: Parsing certificate...
	* I0310 20:45:04.734591   12928 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\cert.pem
	* I0310 20:45:04.735594   12928 main.go:121] libmachine: Decoding PEM data...
	* I0310 20:45:04.735594   12928 main.go:121] libmachine: Parsing certificate...
	* I0310 20:45:04.764196   12928 cli_runner.go:115] Run: docker network inspect old-k8s-version-20210310204459-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	* I0310 20:45:01.684381    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:02.186781    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:02.696042    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:03.194415    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:03.700992    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:04.190158    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:04.695427    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:05.198442    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:05.714591    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* W0310 20:45:05.343119   12928 cli_runner.go:162] docker network inspect old-k8s-version-20210310204459-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	* I0310 20:45:05.347432   12928 network_create.go:240] running [docker network inspect old-k8s-version-20210310204459-6496] to gather additional debugging logs...
	* I0310 20:45:05.347432   12928 cli_runner.go:115] Run: docker network inspect old-k8s-version-20210310204459-6496
	* W0310 20:45:05.940304   12928 cli_runner.go:162] docker network inspect old-k8s-version-20210310204459-6496 returned with exit code 1
	* I0310 20:45:05.940304   12928 network_create.go:243] error running [docker network inspect old-k8s-version-20210310204459-6496]: docker network inspect old-k8s-version-20210310204459-6496: exit status 1
	* stdout:
	* []
	* 
	* stderr:
	* Error: No such network: old-k8s-version-20210310204459-6496
	* I0310 20:45:05.940304   12928 network_create.go:245] output of [docker network inspect old-k8s-version-20210310204459-6496]: -- stdout --
	* []
	* 
	* -- /stdout --
	* ** stderr ** 
	* Error: No such network: old-k8s-version-20210310204459-6496
	* 
	* ** /stderr **
	* I0310 20:45:05.948873   12928 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	* I0310 20:45:06.646529   12928 network.go:193] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	* I0310 20:45:06.647278   12928 network_create.go:91] attempt to create network 192.168.49.0/24 with subnet: old-k8s-version-20210310204459-6496 and gateway 192.168.49.1 and MTU of 1500 ...
	* I0310 20:45:06.654963   12928 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20210310204459-6496
	* W0310 20:45:07.277860   12928 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20210310204459-6496 returned with exit code 1
	* W0310 20:45:07.278667   12928 out.go:191] ! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts
	* I0310 20:45:07.298079   12928 cli_runner.go:115] Run: docker ps -a --format 
	* I0310 20:45:07.911197   12928 cli_runner.go:115] Run: docker volume create old-k8s-version-20210310204459-6496 --label name.minikube.sigs.k8s.io=old-k8s-version-20210310204459-6496 --label created_by.minikube.sigs.k8s.io=true
	* I0310 20:45:08.528283   12928 oci.go:102] Successfully created a docker volume old-k8s-version-20210310204459-6496
	* I0310 20:45:08.536913   12928 cli_runner.go:115] Run: docker run --rm --name old-k8s-version-20210310204459-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20210310204459-6496 --entrypoint /usr/bin/test -v old-k8s-version-20210310204459-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib
	* I0310 20:45:06.694675    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:07.204731    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:07.705665    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:08.205101    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:08.700491    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:09.192521    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:09.705780    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:10.192570    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:10.702418    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:11.196371    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:08.341903    7164 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040: (30.6000593s)
	* I0310 20:45:08.341903    7164 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 from cache
	* I0310 20:45:08.342156    7164 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736
	* I0310 20:45:08.350044    7164 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736
	* I0310 20:45:10.988019   21276 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force": (1m42.7021393s)
	* I0310 20:45:11.013289   21276 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	* I0310 20:45:11.130763   21276 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format=
	* I0310 20:45:11.813937   21276 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	* I0310 20:45:11.837929   21276 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	* I0310 20:45:11.953618   21276 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	* stdout:
	* 
	* stderr:
	* ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	* I0310 20:45:11.953618   21276 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	* I0310 20:45:13.412101   12928 cli_runner.go:168] Completed: docker run --rm --name old-k8s-version-20210310204459-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20210310204459-6496 --entrypoint /usr/bin/test -v old-k8s-version-20210310204459-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib: (4.8748804s)
	* I0310 20:45:13.412101   12928 oci.go:106] Successfully prepared a docker volume old-k8s-version-20210310204459-6496
	* I0310 20:45:13.412101   12928 preload.go:97] Checking if preload exists for k8s version v1.14.0 and runtime docker
	* I0310 20:45:13.412101   12928 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4
	* I0310 20:45:13.412101   12928 kic.go:175] Starting extracting preloaded images to volume ...
	* I0310 20:45:13.420956   12928 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 20:45:13.420956   12928 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20210310204459-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir
	* W0310 20:45:14.051843   12928 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20210310204459-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	* I0310 20:45:14.052427   12928 kic.go:182] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20210310204459-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	* stdout:
	* 
	* stderr:
	* docker: Error response from daemon: status code not OK but 500: [binary-serialized System.Exception payload] The notification platform is unavailable.
	* 
	* The notification platform is unavailable.
	*    at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)
	*    at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.WPF\PromptShareDirectory.cs:line 53
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__8.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 95
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 55
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.HttpApi\Controllers\FilesharingController.cs:line 21
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()
	* CreateToastNotifier
	* Windows.UI, Version=255.255.255.255, Culture=neutral, PublicKeyToken=null, ContentType=WindowsRuntime
	* Windows.UI.Notifications.ToastNotificationManager
	* Windows.UI.Notifications.ToastNotifier CreateToastNotifier(System.String)
	* RestrictedDescription: The notification platform is unavailable.
	* [remainder of binary-serialized exception payload not recoverable]
	* See 'docker run --help'.
	* I0310 20:45:14.432368   12928 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0103571s)
	* I0310 20:45:14.432732   12928 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:109 OomKillDisable:true NGoroutines:94 SystemTime:2021-03-10 20:45:13.9415889 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://
index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors
:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 20:45:14.442693   12928 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	* I0310 20:45:11.695535    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:12.194937    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:12.703783    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:13.192468    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:13.698542    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:14.199015    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:14.714365    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:15.196816    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:15.690427    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:16.194198    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:14.587574    6776 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052: (26.6305168s)
	* I0310 20:45:14.587574    6776 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 from cache
	* I0310 20:45:14.587937    6776 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984
	* I0310 20:45:14.596952    6776 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984
	* I0310 20:45:15.468738   12928 cli_runner.go:168] Completed: docker info --format "'{{json .SecurityOptions}}'": (1.0260466s)
	* I0310 20:45:15.479860   12928 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20210310204459-6496 --name old-k8s-version-20210310204459-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20210310204459-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20210310204459-6496 --volume old-k8s-version-20210310204459-6496:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
	* I0310 20:45:18.941811   12928 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20210310204459-6496 --name old-k8s-version-20210310204459-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20210310204459-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20210310204459-6496 --volume old-k8s-version-20210310204459-6496:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e: (3.4619553s)
	* I0310 20:45:18.951318   12928 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210310204459-6496 --format=
	* I0310 20:45:19.524049   12928 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210310204459-6496 --format=
	* I0310 20:45:16.695445    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:17.197269    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:17.708055    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:18.195592    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:18.701222    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:19.193439    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format=
	* I0310 20:45:20.600504    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-apiserver --format=: (1.4070667s)
	* I0310 20:45:20.600775    9740 logs.go:255] 1 containers: [cc5bf7d7971c]
	* I0310 20:45:20.615980    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format=
	* I0310 20:45:20.157679   12928 cli_runner.go:115] Run: docker exec old-k8s-version-20210310204459-6496 stat /var/lib/dpkg/alternatives/iptables
	* I0310 20:45:21.963622   12928 cli_runner.go:168] Completed: docker exec old-k8s-version-20210310204459-6496 stat /var/lib/dpkg/alternatives/iptables: (1.8059453s)
	* I0310 20:45:21.964004   12928 oci.go:278] the created container "old-k8s-version-20210310204459-6496" has a running status.
	* I0310 20:45:21.964004   12928 kic.go:206] Creating ssh key for kic: C:\Users\jenkins\.minikube\machines\old-k8s-version-20210310204459-6496\id_rsa...
	* I0310 20:45:22.361837   12928 kic_runner.go:188] docker (temp): C:\Users\jenkins\.minikube\machines\old-k8s-version-20210310204459-6496\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	* I0310 20:45:23.348600   12928 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210310204459-6496 --format=
	* I0310 20:45:23.958196   12928 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	* I0310 20:45:23.958196   12928 kic_runner.go:115] Args: [docker exec --privileged old-k8s-version-20210310204459-6496 chown docker:docker /home/docker/.ssh/authorized_keys]
	* I0310 20:45:21.664066    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_etcd --format=: (1.0479358s)
	* I0310 20:45:21.664225    9740 logs.go:255] 1 containers: [d04b7875ec72]
	* I0310 20:45:21.676123    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format=
	* I0310 20:45:23.237959    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_coredns --format=: (1.5618383s)
	* I0310 20:45:23.238104    9740 logs.go:255] 0 containers: []
	* W0310 20:45:23.238104    9740 logs.go:257] No container was found matching "coredns"
	* I0310 20:45:23.248892    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format=
	* I0310 20:45:25.435301    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-scheduler --format=: (2.1864127s)
	* I0310 20:45:25.435301    9740 logs.go:255] 1 containers: [adb946d74113]
	* I0310 20:45:25.454068    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format=
	* I0310 20:45:26.736715    7164 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736: (18.3866955s)
	* I0310 20:45:26.737468    7164 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 from cache
	* I0310 20:45:26.737468    7164 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052
	* I0310 20:45:26.748904    7164 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052
	* I0310 20:45:24.917870   12928 kic.go:240] ensuring only current user has permissions to key file located at : C:\Users\jenkins\.minikube\machines\old-k8s-version-20210310204459-6496\id_rsa...
	* I0310 20:45:25.696877   12928 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210310204459-6496 --format=
	* I0310 20:45:26.334097   12928 machine.go:88] provisioning docker machine ...
	* I0310 20:45:26.334097   12928 ubuntu.go:169] provisioning hostname "old-k8s-version-20210310204459-6496"
	* I0310 20:45:26.345166   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	* I0310 20:45:26.954556   12928 main.go:121] libmachine: Using SSH client type: native
	* I0310 20:45:26.971298   12928 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55138 <nil> <nil>}
	* I0310 20:45:26.971298   12928 main.go:121] libmachine: About to run SSH command:
	* sudo hostname old-k8s-version-20210310204459-6496 && echo "old-k8s-version-20210310204459-6496" | sudo tee /etc/hostname
	* I0310 20:45:26.980262   12928 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	* I0310 20:45:27.207756    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-proxy --format=: (1.7536894s)
	* I0310 20:45:27.208044    9740 logs.go:255] 0 containers: []
	* W0310 20:45:27.208044    9740 logs.go:257] No container was found matching "kube-proxy"
	* I0310 20:45:27.217871    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format=
	* I0310 20:45:28.137842    9740 logs.go:255] 0 containers: []
	* W0310 20:45:28.138054    9740 logs.go:257] No container was found matching "kubernetes-dashboard"
	* I0310 20:45:28.147172    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format=
	* I0310 20:45:29.350127    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_storage-provisioner --format=: (1.2029559s)
	* I0310 20:45:29.350303    9740 logs.go:255] 0 containers: []
	* W0310 20:45:29.350303    9740 logs.go:257] No container was found matching "storage-provisioner"
	* I0310 20:45:29.359572    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format=
	* I0310 20:45:30.463150    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-controller-manager --format=: (1.1025668s)
	* I0310 20:45:30.463288    9740 logs.go:255] 1 containers: [66d44e1d7560]
	* I0310 20:45:30.463288    9740 logs.go:122] Gathering logs for container status ...
	* I0310 20:45:30.463429    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	* I0310 20:45:30.942198   12928 main.go:121] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20210310204459-6496
	* 
	* I0310 20:45:30.949618   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	* I0310 20:45:31.551362   12928 main.go:121] libmachine: Using SSH client type: native
	* I0310 20:45:31.551704   12928 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55138 <nil> <nil>}
	* I0310 20:45:31.551979   12928 main.go:121] libmachine: About to run SSH command:
	* 
	* 		if ! grep -xq '.*\sold-k8s-version-20210310204459-6496' /etc/hosts; then
	* 			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	* 				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20210310204459-6496/g' /etc/hosts;
	* 			else 
	* 				echo '127.0.1.1 old-k8s-version-20210310204459-6496' | sudo tee -a /etc/hosts; 
	* 			fi
	* 		fi
	* I0310 20:45:32.358914   12928 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	* I0310 20:45:32.358914   12928 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	* I0310 20:45:32.358914   12928 ubuntu.go:177] setting up certificates
	* I0310 20:45:32.358914   12928 provision.go:83] configureAuth start
	* I0310 20:45:32.370381   12928 cli_runner.go:115] Run: docker container inspect -f "" old-k8s-version-20210310204459-6496
	* I0310 20:45:32.987615   12928 provision.go:137] copyHostCerts
	* I0310 20:45:32.988467   12928 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	* I0310 20:45:32.988617   12928 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	* I0310 20:45:32.988818   12928 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	* I0310 20:45:32.994199   12928 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	* I0310 20:45:32.994320   12928 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	* I0310 20:45:32.994911   12928 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	* I0310 20:45:33.002984   12928 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	* I0310 20:45:33.003152   12928 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	* I0310 20:45:33.003729   12928 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	* I0310 20:45:33.006728   12928 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.old-k8s-version-20210310204459-6496 san=[172.17.0.3 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20210310204459-6496]
	* I0310 20:45:33.248434   12928 provision.go:165] copyRemoteCerts
	* I0310 20:45:33.258428   12928 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	* I0310 20:45:33.266434   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	* I0310 20:45:33.881631   12928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55138 SSHKeyPath:C:\Users\jenkins\.minikube\machines\old-k8s-version-20210310204459-6496\id_rsa Username:docker}
	* I0310 20:45:34.527804   12928 ssh_runner.go:189] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.2693782s)
	* I0310 20:45:34.528542   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	* I0310 20:45:31.791639    9740 ssh_runner.go:189] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (1.328072s)
	* I0310 20:45:31.793003    9740 logs.go:122] Gathering logs for dmesg ...
	* I0310 20:45:31.794771    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	* I0310 20:45:32.152567    9740 logs.go:122] Gathering logs for describe nodes ...
	* I0310 20:45:32.152567    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	* I0310 20:45:35.208075    9740 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (3.055512s)
	* W0310 20:45:35.208193    9740 logs.go:129] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	* stdout:
	* 
	* stderr:
	* The connection to the server localhost:8443 was refused - did you specify the right host or port?
	*  output: 
	* ** stderr ** 
	* The connection to the server localhost:8443 was refused - did you specify the right host or port?
	* 
	* ** /stderr **
	* I0310 20:45:35.208568    9740 logs.go:122] Gathering logs for kube-apiserver [cc5bf7d7971c] ...
	* I0310 20:45:35.208568    9740 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 cc5bf7d7971c"
	* I0310 20:45:34.862013   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	* I0310 20:45:35.103719   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1277 bytes)
	* I0310 20:45:35.279505   12928 provision.go:86] duration metric: configureAuth took 2.9205948s
	* I0310 20:45:35.279505   12928 ubuntu.go:193] setting minikube options for container-runtime
	* I0310 20:45:35.296522   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	* I0310 20:45:35.956818   12928 main.go:121] libmachine: Using SSH client type: native
	* I0310 20:45:35.957808   12928 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55138 <nil> <nil>}
	* I0310 20:45:35.958075   12928 main.go:121] libmachine: About to run SSH command:
	* df --output=fstype / | tail -n 1
	* I0310 20:45:36.590941   12928 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	* 
	* I0310 20:45:36.590941   12928 ubuntu.go:71] root file system type: overlay
	* I0310 20:45:36.600870   12928 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	* I0310 20:45:36.620128   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	* I0310 20:45:37.213213   12928 main.go:121] libmachine: Using SSH client type: native
	* I0310 20:45:37.214230   12928 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55138 <nil> <nil>}
	* I0310 20:45:37.214230   12928 main.go:121] libmachine: About to run SSH command:
	* sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP \$MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* " | sudo tee /lib/systemd/system/docker.service.new
	* I0310 20:45:38.141733   12928 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP $MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* 
	* I0310 20:45:38.155124   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	* I0310 20:45:38.755494   12928 main.go:121] libmachine: Using SSH client type: native
	* I0310 20:45:38.756832   12928 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55138 <nil> <nil>}
	* I0310 20:45:38.756832   12928 main.go:121] libmachine: About to run SSH command:
	* sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	* I0310 20:45:37.880592    9740 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 cc5bf7d7971c": (2.6720271s)
	* I0310 20:45:37.917053    9740 logs.go:122] Gathering logs for kube-controller-manager [66d44e1d7560] ...
	* I0310 20:45:37.918006    9740 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 66d44e1d7560"
	* I0310 20:45:40.285234    9740 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 66d44e1d7560": (2.3672309s)
	* I0310 20:45:40.286872    9740 logs.go:122] Gathering logs for Docker ...
	* I0310 20:45:40.287037    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	* I0310 20:45:40.517277    9740 logs.go:122] Gathering logs for kubelet ...
	* I0310 20:45:40.517456    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	* I0310 20:45:39.199048    6776 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984: (24.6021279s)
	* I0310 20:45:39.199048    6776 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 from cache
	* I0310 20:45:39.199048    6776 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088
	* I0310 20:45:39.210230    6776 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088
	* I0310 20:45:41.422297    9740 logs.go:122] Gathering logs for etcd [d04b7875ec72] ...
	* I0310 20:45:41.422297    9740 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 d04b7875ec72"
	* I0310 20:45:45.676953    9740 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 d04b7875ec72": (4.2546607s)
	* I0310 20:45:45.709061    9740 logs.go:122] Gathering logs for kube-scheduler [adb946d74113] ...
	* I0310 20:45:45.709061    9740 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 adb946d74113"
	* I0310 20:45:49.748208   12928 main.go:121] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-01-29 14:31:32.000000000 +0000
	* +++ /lib/systemd/system/docker.service.new	2021-03-10 20:45:38.114142000 +0000
	* @@ -1,30 +1,32 @@
	*  [Unit]
	*  Description=Docker Application Container Engine
	*  Documentation=https://docs.docker.com
	* +BindsTo=containerd.service
	*  After=network-online.target firewalld.service containerd.service
	*  Wants=network-online.target
	* -Requires=docker.socket containerd.service
	* +Requires=docker.socket
	* +StartLimitBurst=3
	* +StartLimitIntervalSec=60
	*  
	*  [Service]
	*  Type=notify
	* -# the default is not to use systemd for cgroups because the delegate issues still
	* -# exists and systemd currently does not support the cgroup feature set required
	* -# for containers run by docker
	* -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	* -ExecReload=/bin/kill -s HUP $MAINPID
	* -TimeoutSec=0
	* -RestartSec=2
	* -Restart=always
	* -
	* -# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	* -# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	* -# to make them work for either version of systemd.
	* -StartLimitBurst=3
	* +Restart=on-failure
	*  
	* -# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	* -# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	* -# this option work for either version of systemd.
	* -StartLimitInterval=60s
	* +
	* +
	* +# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* +# The base configuration already specifies an 'ExecStart=...' command. The first directive
	* +# here is to clear out that command inherited from the base configuration. Without this,
	* +# the command from the base configuration and the command specified here are treated as
	* +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* +# will catch this invalid input and refuse to start the service with an error like:
	* +#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* +
	* +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* +# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* +ExecStart=
	* +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* +ExecReload=/bin/kill -s HUP $MAINPID
	*  
	*  # Having non-zero Limit*s causes performance problems due to accounting overhead
	*  # in the kernel. We recommend using cgroups to do container-local accounting.
	* @@ -32,16 +34,16 @@
	*  LimitNPROC=infinity
	*  LimitCORE=infinity
	*  
	* -# Comment TasksMax if your systemd version does not support it.
	* -# Only systemd 226 and above support this option.
	* +# Uncomment TasksMax if your systemd version supports it.
	* +# Only systemd 226 and above support this version.
	*  TasksMax=infinity
	* +TimeoutStartSec=0
	*  
	*  # set delegate yes so that systemd does not reset the cgroups of docker containers
	*  Delegate=yes
	*  
	*  # kill only the docker process, not all processes in the cgroup
	*  KillMode=process
	* -OOMScoreAdjust=-500
	*  
	*  [Install]
	*  WantedBy=multi-user.target
	* Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	* Executing: /lib/systemd/systemd-sysv-install enable docker
	* 
	* I0310 20:45:49.748208   12928 machine.go:91] provisioned docker machine in 23.4141415s
	* I0310 20:45:49.748208   12928 client.go:171] LocalClient.Create took 45.0146683s
	* I0310 20:45:49.748208   12928 start.go:168] duration metric: libmachine.API.Create for "old-k8s-version-20210310204459-6496" took 45.0146683s
	* I0310 20:45:49.748208   12928 start.go:267] post-start starting for "old-k8s-version-20210310204459-6496" (driver="docker")
	* I0310 20:45:49.748208   12928 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	* I0310 20:45:49.758614   12928 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	* I0310 20:45:49.766365   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	* I0310 20:45:50.110957    9740 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 adb946d74113": (4.4019021s)
	* I0310 20:45:50.023275    7164 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052: (23.2729983s)
	* I0310 20:45:50.023418    7164 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 from cache
	* I0310 20:45:50.023418    7164 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552
	* I0310 20:45:50.036621    7164 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552
	* I0310 20:45:50.422276   12928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55138 SSHKeyPath:C:\Users\jenkins\.minikube\machines\old-k8s-version-20210310204459-6496\id_rsa Username:docker}
	* I0310 20:45:50.905494   12928 ssh_runner.go:189] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.1466426s)
	* I0310 20:45:50.914491   12928 ssh_runner.go:149] Run: cat /etc/os-release
	* I0310 20:45:50.966715   12928 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	* I0310 20:45:50.966715   12928 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	* I0310 20:45:50.966715   12928 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	* I0310 20:45:50.966715   12928 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	* I0310 20:45:50.967000   12928 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	* I0310 20:45:50.967712   12928 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	* I0310 20:45:50.969334   12928 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	* I0310 20:45:50.969334   12928 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	* I0310 20:45:50.990144   12928 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	* I0310 20:45:51.083698   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	* I0310 20:45:51.268611   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	* I0310 20:45:51.511504   12928 start.go:270] post-start completed in 1.7632983s
	* I0310 20:45:51.550746   12928 cli_runner.go:115] Run: docker container inspect -f "" old-k8s-version-20210310204459-6496
	* I0310 20:45:52.155265   12928 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\config.json ...
	* I0310 20:45:52.188530   12928 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	* I0310 20:45:52.196673   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	* I0310 20:45:52.867889   12928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55138 SSHKeyPath:C:\Users\jenkins\.minikube\machines\old-k8s-version-20210310204459-6496\id_rsa Username:docker}
	* I0310 20:45:53.353463   12928 ssh_runner.go:189] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.1647767s)
	* I0310 20:45:53.353463   12928 start.go:129] duration metric: createHost completed in 48.6231448s
	* I0310 20:45:53.354459   12928 start.go:80] releasing machines lock for "old-k8s-version-20210310204459-6496", held for 48.6235465s
	* I0310 20:45:53.362679   12928 cli_runner.go:115] Run: docker container inspect -f "" old-k8s-version-20210310204459-6496
	* I0310 20:45:53.939528   12928 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	* I0310 20:45:53.949438   12928 ssh_runner.go:149] Run: systemctl --version
	* I0310 20:45:53.953944   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	* I0310 20:45:53.956958   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	* I0310 20:45:54.580367   12928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55138 SSHKeyPath:C:\Users\jenkins\.minikube\machines\old-k8s-version-20210310204459-6496\id_rsa Username:docker}
	* I0310 20:45:54.615840   12928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55138 SSHKeyPath:C:\Users\jenkins\.minikube\machines\old-k8s-version-20210310204459-6496\id_rsa Username:docker}
	* I0310 20:45:52.682538    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:45:55.371189    9740 ssh_runner.go:189] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.6886539s)
	* I0310 20:45:55.371189    9740 api_server.go:68] duration metric: took 1m36.1976237s to wait for apiserver process to appear ...
	* I0310 20:45:55.371189    9740 api_server.go:84] waiting for apiserver healthz status ...
	* I0310 20:45:55.371189    9740 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55130/healthz ...
	* I0310 20:45:54.865625   12928 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	* I0310 20:45:55.158095   12928 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.2181659s)
	* I0310 20:45:55.168864   12928 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	* I0310 20:45:55.262746   12928 cruntime.go:206] skipping containerd shutdown because we are bound to it
	* I0310 20:45:55.271819   12928 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	* I0310 20:45:55.396624   12928 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	* image-endpoint: unix:///var/run/dockershim.sock
	* " | sudo tee /etc/crictl.yaml"
	* I0310 20:45:55.604086   12928 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	* I0310 20:45:55.693004   12928 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	* I0310 20:45:56.722346   12928 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.0283425s)
	* I0310 20:45:56.732654   12928 ssh_runner.go:149] Run: sudo systemctl start docker
	* I0310 20:45:56.839967   12928 ssh_runner.go:149] Run: docker version --format 
	* I0310 20:45:57.520716   12928 out.go:150] * Preparing Kubernetes v1.14.0 on Docker 20.10.3 ...
	* I0310 20:45:57.530110   12928 cli_runner.go:115] Run: docker exec -t old-k8s-version-20210310204459-6496 dig +short host.docker.internal
	* I0310 20:45:58.589310   12928 cli_runner.go:168] Completed: docker exec -t old-k8s-version-20210310204459-6496 dig +short host.docker.internal: (1.0590077s)
	* I0310 20:45:58.589761   12928 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	* I0310 20:45:58.599975   12928 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	* I0310 20:45:58.629255   12928 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	* I0310 20:45:58.790284   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	* I0310 20:45:59.375638   12928 localpath.go:92] copying C:\Users\jenkins\.minikube\client.crt -> C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\client.crt
	* I0310 20:45:59.381369   12928 localpath.go:117] copying C:\Users\jenkins\.minikube\client.key -> C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\client.key
	* I0310 20:45:59.385304   12928 preload.go:97] Checking if preload exists for k8s version v1.14.0 and runtime docker
	* I0310 20:45:59.385766   12928 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4
	* I0310 20:45:59.394606   12928 ssh_runner.go:149] Run: docker images --format :
	* I0310 20:45:59.071883    6776 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088: (19.8616798s)
	* I0310 20:45:59.071883    6776 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 from cache
	* I0310 20:45:59.071883    6776 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056
	* I0310 20:45:59.079230    6776 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056
	* I0310 20:45:59.844362   12928 docker.go:423] Got preloaded images: 
	* I0310 20:45:59.844362   12928 docker.go:429] k8s.gcr.io/kube-proxy:v1.14.0 wasn't preloaded
	* I0310 20:45:59.855645   12928 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	* I0310 20:45:59.949427   12928 ssh_runner.go:149] Run: which lz4
	* I0310 20:46:00.005687   12928 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	* I0310 20:46:00.075060   12928 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	* I0310 20:46:00.075364   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (488333642 bytes)
	* I0310 20:47:10.617697   12928 docker.go:388] Took 70.619571 seconds to copy over tarball
	* I0310 20:47:10.639143   12928 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
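For scale, the copy rate implied by the two log lines above (488,333,642 bytes scp'd in 70.619571 s) works out to roughly 6.6 MiB/s. This is an illustrative back-of-the-envelope check, not part of the minikube output:

```python
# Back-of-the-envelope throughput for the preload tarball copy logged above.
# Both figures are taken verbatim from the log lines.
size_bytes = 488_333_642   # size of preloaded-images-...-amd64.tar.lz4
seconds = 70.619571        # "Took 70.619571 seconds to copy over tarball"
mib_per_s = size_bytes / seconds / 2**20
print(f"{mib_per_s:.1f} MiB/s")  # → 6.6 MiB/s
```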
	* I0310 20:47:34.237148   10404 out.go:150]   - Generating certificates and keys ...
	* I0310 20:47:34.242767   10404 out.go:150]   - Booting up control plane ...
	* I0310 20:47:34.247785   10404 kubeadm.go:387] StartCluster complete in 10m58.6719909s
	* I0310 20:47:34.257795   10404 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	* I0310 20:47:55.383138   12928 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (44.7436307s)
	* I0310 20:47:55.383138   12928 ssh_runner.go:100] rm: /preloaded.tar.lz4
	* I0310 20:47:57.352441   12928 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	* I0310 20:47:57.410825   12928 ssh_runner.go:316] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3123 bytes)
	* I0310 20:47:57.642121   12928 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	* I0310 20:47:59.081806   12928 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.4396865s)
	* I0310 20:47:59.084048   12928 ssh_runner.go:149] Run: sudo systemctl restart docker
	* I0310 20:48:13.716793   10404 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: (39.4587585s)
	* I0310 20:48:13.716793   10404 logs.go:255] 1 containers: [9b71c60e312f]
	* I0310 20:48:13.722356   10404 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	* I0310 20:48:16.306081   12928 ssh_runner.go:189] Completed: sudo systemctl restart docker: (17.2218131s)
	* I0310 20:48:16.318733   12928 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	* I0310 20:48:17.207848   12928 docker.go:423] Got preloaded images: -- stdout --
	* kubernetesui/dashboard:v2.1.0
	* gcr.io/k8s-minikube/storage-provisioner:v4
	* kubernetesui/metrics-scraper:v1.0.4
	* k8s.gcr.io/kube-proxy:v1.14.0
	* k8s.gcr.io/kube-controller-manager:v1.14.0
	* k8s.gcr.io/kube-apiserver:v1.14.0
	* k8s.gcr.io/kube-scheduler:v1.14.0
	* k8s.gcr.io/coredns:1.3.1
	* k8s.gcr.io/etcd:3.3.10
	* k8s.gcr.io/pause:3.1
	* 
	* -- /stdout --
	* I0310 20:48:17.208570   12928 cache_images.go:73] Images are preloaded, skipping loading
	* I0310 20:48:17.241235   12928 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	* I0310 20:48:19.442860   12928 ssh_runner.go:189] Completed: docker info --format {{.CgroupDriver}}: (2.2016277s)
	* I0310 20:48:19.443320   12928 cni.go:74] Creating CNI manager for ""
	* I0310 20:48:19.443320   12928 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	* I0310 20:48:19.443320   12928 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	* I0310 20:48:19.443567   12928 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.3 APIServerPort:8443 KubernetesVersion:v1.14.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20210310204459-6496 NodeName:old-k8s-version-20210310204459-6496 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	* I0310 20:48:19.443998   12928 kubeadm.go:154] kubeadm config:
	* apiVersion: kubeadm.k8s.io/v1beta1
	* kind: InitConfiguration
	* localAPIEndpoint:
	*   advertiseAddress: 172.17.0.3
	*   bindPort: 8443
	* bootstrapTokens:
	*   - groups:
	*       - system:bootstrappers:kubeadm:default-node-token
	*     ttl: 24h0m0s
	*     usages:
	*       - signing
	*       - authentication
	* nodeRegistration:
	*   criSocket: /var/run/dockershim.sock
	*   name: "old-k8s-version-20210310204459-6496"
	*   kubeletExtraArgs:
	*     node-ip: 172.17.0.3
	*   taints: []
	* ---
	* apiVersion: kubeadm.k8s.io/v1beta1
	* kind: ClusterConfiguration
	* apiServer:
	*   certSANs: ["127.0.0.1", "localhost", "172.17.0.3"]
	*   extraArgs:
	*     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	* controllerManager:
	*   extraArgs:
	*     allocate-node-cidrs: "true"
	*     leader-elect: "false"
	* scheduler:
	*   extraArgs:
	*     leader-elect: "false"
	* certificatesDir: /var/lib/minikube/certs
	* clusterName: old-k8s-version-20210310204459-6496
	* controlPlaneEndpoint: control-plane.minikube.internal:8443
	* dns:
	*   type: CoreDNS
	* etcd:
	*   local:
	*     dataDir: /var/lib/minikube/etcd
	*     extraArgs:
	*       listen-metrics-urls: http://127.0.0.1:2381,http://172.17.0.3:2381
	* kubernetesVersion: v1.14.0
	* networking:
	*   dnsDomain: cluster.local
	*   podSubnet: "10.244.0.0/16"
	*   serviceSubnet: 10.96.0.0/12
	* ---
	* apiVersion: kubelet.config.k8s.io/v1beta1
	* kind: KubeletConfiguration
	* authentication:
	*   x509:
	*     clientCAFile: /var/lib/minikube/certs/ca.crt
	* cgroupDriver: cgroupfs
	* clusterDomain: "cluster.local"
	* # disable disk resource management by default
	* imageGCHighThresholdPercent: 100
	* evictionHard:
	*   nodefs.available: "0%"
	*   nodefs.inodesFree: "0%"
	*   imagefs.available: "0%"
	* failSwapOn: false
	* staticPodPath: /etc/kubernetes/manifests
	* ---
	* apiVersion: kubeproxy.config.k8s.io/v1alpha1
	* kind: KubeProxyConfiguration
	* clusterCIDR: "10.244.0.0/16"
	* metricsBindAddress: 0.0.0.0:10249
	* 
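The pod and service CIDRs in the generated kubeadm config above can be sanity-checked with the standard library. A minimal sketch (the assertions mirror kubeadm's requirement that the two ranges be disjoint; the values are copied from the config):

```python
import ipaddress

# Values copied from the generated kubeadm config above.
pod_subnet = ipaddress.ip_network("10.244.0.0/16")
service_subnet = ipaddress.ip_network("10.96.0.0/12")
node_ip = ipaddress.ip_address("172.17.0.3")

# The pod and service ranges must not overlap, and the node IP
# (on the Docker bridge network) must live outside both.
assert not pod_subnet.overlaps(service_subnet)
assert node_ip not in pod_subnet and node_ip not in service_subnet
print(pod_subnet.num_addresses)  # → 65536
```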
	* I0310 20:48:19.444438   12928 kubeadm.go:919] kubelet [Unit]
	* Wants=docker.socket
	* 
	* [Service]
	* ExecStart=
	* ExecStart=/var/lib/minikube/binaries/v1.14.0/kubelet --allow-privileged=true --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --client-ca-file=/var/lib/minikube/certs/ca.crt --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20210310204459-6496 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.3
	* 
	* [Install]
	*  config:
	* {KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210310204459-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	* I0310 20:48:19.454962   12928 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.14.0
	* I0310 20:48:19.534133   12928 binaries.go:44] Found k8s binaries, skipping transfer
	* I0310 20:48:19.543848   12928 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	* I0310 20:48:19.611003   12928 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (431 bytes)
	* I0310 20:48:23.203459   10404 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: (9.4808233s)
	* I0310 20:48:23.203459   10404 logs.go:255] 1 containers: [11556200fc81]
	* I0310 20:48:23.218777   10404 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	* I0310 20:48:19.972107   12928 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	* I0310 20:48:20.275076   12928 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1928 bytes)
	* I0310 20:48:20.500483   12928 ssh_runner.go:149] Run: grep 172.17.0.3	control-plane.minikube.internal$ /etc/hosts
	* I0310 20:48:20.541378   12928 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "172.17.0.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
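The bash one-liner above makes the /etc/hosts edit idempotent: strip any stale line for the name with `grep -v`, append a fresh `ip<TAB>name` entry, then copy the result back. The same logic sketched in Python (the `upsert_host` helper name is made up for illustration and is not part of minikube):

```python
def upsert_host(hosts_text: str, ip: str, name: str) -> str:
    """Drop any existing line ending in "\t<name>", then append
    "<ip>\t<name>" — mirroring the grep -v / echo pipeline above."""
    kept = [line for line in hosts_text.splitlines()
            if not line.endswith("\t" + name)]
    kept.append(ip + "\t" + name)
    return "\n".join(kept) + "\n"

# Running it twice leaves exactly one entry, just like the shell version.
once = upsert_host("127.0.0.1\tlocalhost\n", "172.17.0.3",
                   "control-plane.minikube.internal")
twice = upsert_host(once, "172.17.0.3", "control-plane.minikube.internal")
print(once == twice)  # → True
```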
	* I0310 20:48:20.668643   12928 certs.go:52] Setting up C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496 for IP: 172.17.0.3
	* I0310 20:48:20.668643   12928 certs.go:171] skipping minikubeCA CA generation: C:\Users\jenkins\.minikube\ca.key
	* I0310 20:48:20.668643   12928 certs.go:171] skipping proxyClientCA CA generation: C:\Users\jenkins\.minikube\proxy-client-ca.key
	* I0310 20:48:20.668643   12928 certs.go:275] skipping minikube-user signed cert generation: C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\client.key
	* I0310 20:48:20.668643   12928 certs.go:279] generating minikube signed cert: C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.key.0f3e66d0
	* I0310 20:48:20.668643   12928 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.crt.0f3e66d0 with IP's: [172.17.0.3 10.96.0.1 127.0.0.1 10.0.0.1]
	* I0310 20:48:20.862651   12928 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.crt.0f3e66d0 ...
	* I0310 20:48:20.862651   12928 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.crt.0f3e66d0: {Name:mk4d990127210c9e93f70bb2fa83fed3ed7d8272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 20:48:20.886181   12928 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.key.0f3e66d0 ...
	* I0310 20:48:20.886181   12928 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.key.0f3e66d0: {Name:mk3b112be41963d8a84df37233731d1e05b06ba0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 20:48:20.895703   12928 certs.go:290] copying C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.crt.0f3e66d0 -> C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.crt
	* I0310 20:48:20.899122   12928 certs.go:294] copying C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.key.0f3e66d0 -> C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.key
	* I0310 20:48:20.906198   12928 certs.go:279] generating aggregator signed cert: C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\proxy-client.key
	* I0310 20:48:20.906198   12928 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\proxy-client.crt with IP's: []
	* I0310 20:48:21.063572   12928 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\proxy-client.crt ...
	* I0310 20:48:21.064582   12928 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\proxy-client.crt: {Name:mkc85b22c9bece2080565bade554ebf8aae7c395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 20:48:21.073606   12928 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\proxy-client.key ...
	* I0310 20:48:21.073606   12928 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\proxy-client.key: {Name:mkc400cbeb274a69f5d3aa3f494371d783186217 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 20:48:21.085586   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992.pem (1338 bytes)
	* W0310 20:48:21.085586   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.085586   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140.pem (1338 bytes)
	* W0310 20:48:21.086578   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.086578   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156.pem (1338 bytes)
	* W0310 20:48:21.086578   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.086578   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056.pem (1338 bytes)
	* W0310 20:48:21.087590   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.087590   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476.pem (1338 bytes)
	* W0310 20:48:21.087590   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.087590   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728.pem (1338 bytes)
	* W0310 20:48:21.088583   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.088583   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984.pem (1338 bytes)
	* W0310 20:48:21.088583   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.088583   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232.pem (1338 bytes)
	* W0310 20:48:21.089583   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.089583   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512.pem (1338 bytes)
	* W0310 20:48:21.089583   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.089583   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056.pem (1338 bytes)
	* W0310 20:48:21.089583   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.090584   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516.pem (1338 bytes)
	* W0310 20:48:21.090584   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.090584   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352.pem (1338 bytes)
	* W0310 20:48:21.090584   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.090584   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920.pem (1338 bytes)
	* W0310 20:48:21.091582   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.091582   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052.pem (1338 bytes)
	* W0310 20:48:21.091582   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.091582   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452.pem (1338 bytes)
	* W0310 20:48:21.092581   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.092581   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588.pem (1338 bytes)
	* W0310 20:48:21.092581   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.092581   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944.pem (1338 bytes)
	* W0310 20:48:21.093639   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.093639   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040.pem (1338 bytes)
	* W0310 20:48:21.093639   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.093639   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172.pem (1338 bytes)
	* W0310 20:48:21.093639   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.094657   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372.pem (1338 bytes)
	* W0310 20:48:21.094657   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.094657   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396.pem (1338 bytes)
	* W0310 20:48:21.094657   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.094657   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700.pem (1338 bytes)
	* W0310 20:48:21.095655   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.095655   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736.pem (1338 bytes)
	* W0310 20:48:21.095655   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.095655   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368.pem (1338 bytes)
	* W0310 20:48:21.096667   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.096667   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492.pem (1338 bytes)
	* W0310 20:48:21.096667   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.096667   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496.pem (1338 bytes)
	* W0310 20:48:21.097630   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.097630   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552.pem (1338 bytes)
	* W0310 20:48:21.097630   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.097630   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692.pem (1338 bytes)
	* W0310 20:48:21.098655   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.098655   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856.pem (1338 bytes)
	* W0310 20:48:21.098655   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.099460   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024.pem (1338 bytes)
	* W0310 20:48:21.099877   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.100608   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160.pem (1338 bytes)
	* W0310 20:48:21.100608   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.100608   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432.pem (1338 bytes)
	* W0310 20:48:21.100608   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.100608   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440.pem (1338 bytes)
	* W0310 20:48:21.100608   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.100608   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452.pem (1338 bytes)
	* W0310 20:48:21.100608   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.100608   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800.pem (1338 bytes)
	* W0310 20:48:21.100608   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.100608   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464.pem (1338 bytes)
	* W0310 20:48:21.100608   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.100608   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748.pem (1338 bytes)
	* W0310 20:48:21.100608   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.100608   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088.pem (1338 bytes)
	* W0310 20:48:21.100608   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.100608   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520.pem (1338 bytes)
	* W0310 20:48:21.104844   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520_empty.pem, impossibly tiny 0 bytes
	* I0310 20:48:21.105242   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca-key.pem (1679 bytes)
	* I0310 20:48:21.105585   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca.pem (1078 bytes)
	* I0310 20:48:21.105585   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\cert.pem (1123 bytes)
	* I0310 20:48:21.106404   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\key.pem (1679 bytes)
	* I0310 20:48:21.117199   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	* I0310 20:48:21.513740   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	* I0310 20:48:21.760642   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	* I0310 20:48:22.037866   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	* I0310 20:48:22.744649   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	* I0310 20:48:23.068283   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	* I0310 20:48:23.437363   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	* I0310 20:48:23.716689   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	* I0310 20:48:24.144080   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5172.pem --> /usr/share/ca-certificates/5172.pem (1338 bytes)
	* I0310 20:48:24.496544   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5040.pem --> /usr/share/ca-certificates/5040.pem (1338 bytes)
	* I0310 20:48:24.706713   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1984.pem --> /usr/share/ca-certificates/1984.pem (1338 bytes)
	* I0310 20:48:24.128431    9740 api_server.go:241] https://127.0.0.1:55130/healthz returned 403:
	* {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	* W0310 20:48:24.128706    9740 api_server.go:99] status: https://127.0.0.1:55130/healthz returned error 403:
	* {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	* I0310 20:48:24.638787    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	* I0310 20:48:24.888872   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	* I0310 20:48:25.072441   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3516.pem --> /usr/share/ca-certificates/3516.pem (1338 bytes)
	* I0310 20:48:25.295478   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6692.pem --> /usr/share/ca-certificates/6692.pem (1338 bytes)
	* I0310 20:48:25.539068   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6368.pem --> /usr/share/ca-certificates/6368.pem (1338 bytes)
	* I0310 20:48:25.786296   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5700.pem --> /usr/share/ca-certificates/5700.pem (1338 bytes)
	* I0310 20:48:25.971391   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1140.pem --> /usr/share/ca-certificates/1140.pem (1338 bytes)
	* I0310 20:48:26.186445   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9088.pem --> /usr/share/ca-certificates/9088.pem (1338 bytes)
	* I0310 20:48:26.360798   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6856.pem --> /usr/share/ca-certificates/6856.pem (1338 bytes)
	* I0310 20:48:26.691872   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4052.pem --> /usr/share/ca-certificates/4052.pem (1338 bytes)
	* I0310 20:48:26.874779   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3056.pem --> /usr/share/ca-certificates/3056.pem (1338 bytes)
	* I0310 20:48:27.094519   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\352.pem --> /usr/share/ca-certificates/352.pem (1338 bytes)
	* I0310 20:48:27.926836   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5396.pem --> /usr/share/ca-certificates/5396.pem (1338 bytes)
	* I0310 20:48:28.188557   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4944.pem --> /usr/share/ca-certificates/4944.pem (1338 bytes)
	* I0310 20:48:28.394647   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4452.pem --> /usr/share/ca-certificates/4452.pem (1338 bytes)
	* I0310 20:48:28.790329   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5372.pem --> /usr/share/ca-certificates/5372.pem (1338 bytes)
	* I0310 20:48:29.094073   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\2512.pem --> /usr/share/ca-certificates/2512.pem (1338 bytes)
	* I0310 20:48:29.321739   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3920.pem --> /usr/share/ca-certificates/3920.pem (1338 bytes)
	* I0310 20:48:29.607788   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6552.pem --> /usr/share/ca-certificates/6552.pem (1338 bytes)
	* I0310 20:48:29.785678   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6496.pem --> /usr/share/ca-certificates/6496.pem (1338 bytes)
	* I0310 20:48:30.244451   10404 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: (7.025261s)
	* I0310 20:48:30.244451   10404 logs.go:255] 0 containers: []
	* W0310 20:48:30.244451   10404 logs.go:257] No container was found matching "coredns"
	* I0310 20:48:30.254013   10404 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	* I0310 20:48:30.032036   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7432.pem --> /usr/share/ca-certificates/7432.pem (1338 bytes)
	* I0310 20:48:30.338918   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\12056.pem --> /usr/share/ca-certificates/12056.pem (1338 bytes)
	* I0310 20:48:30.567680   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\232.pem --> /usr/share/ca-certificates/232.pem (1338 bytes)
	* I0310 20:48:30.799494   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6492.pem --> /usr/share/ca-certificates/6492.pem (1338 bytes)
	* I0310 20:48:30.986912   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8464.pem --> /usr/share/ca-certificates/8464.pem (1338 bytes)
	* I0310 20:48:31.213564   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\10992.pem --> /usr/share/ca-certificates/10992.pem (1338 bytes)
	* I0310 20:48:31.533104   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1156.pem --> /usr/share/ca-certificates/1156.pem (1338 bytes)
	* I0310 20:48:31.806558   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8748.pem --> /usr/share/ca-certificates/8748.pem (1338 bytes)
	* I0310 20:48:32.025579   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	* I0310 20:48:32.313142   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1728.pem --> /usr/share/ca-certificates/1728.pem (1338 bytes)
	* I0310 20:48:32.820375   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\800.pem --> /usr/share/ca-certificates/800.pem (1338 bytes)
	* I0310 20:48:33.091682   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1476.pem --> /usr/share/ca-certificates/1476.pem (1338 bytes)
	* I0310 20:48:33.361952   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5736.pem --> /usr/share/ca-certificates/5736.pem (1338 bytes)
	* I0310 20:48:33.546662   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7160.pem --> /usr/share/ca-certificates/7160.pem (1338 bytes)
	* I0310 20:48:33.880265   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7024.pem --> /usr/share/ca-certificates/7024.pem (1338 bytes)
	* I0310 20:48:34.176826   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9520.pem --> /usr/share/ca-certificates/9520.pem (1338 bytes)
	* I0310 20:48:34.443774   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7440.pem --> /usr/share/ca-certificates/7440.pem (1338 bytes)
	* I0310 20:48:33.835176    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-apiserver --format=: (9.1964006s)
	* I0310 20:48:33.835431    9740 logs.go:255] 2 containers: [3d2c98ba1bfd cc5bf7d7971c]
	* I0310 20:48:33.843993    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format=
	* I0310 20:48:39.488216   10404 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-scheduler --format=: (9.2336485s)
	* I0310 20:48:39.488216   10404 logs.go:255] 1 containers: [78ecb22163a7]
	* I0310 20:48:39.496142   10404 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format=
	* I0310 20:48:34.820050   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4588.pem --> /usr/share/ca-certificates/4588.pem (1338 bytes)
	* I0310 20:48:35.243486   12928 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	* I0310 20:48:35.500540   12928 ssh_runner.go:149] Run: openssl version
	* I0310 20:48:35.574294   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6692.pem && ln -fs /usr/share/ca-certificates/6692.pem /etc/ssl/certs/6692.pem"
	* I0310 20:48:35.650674   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6692.pem
	* I0310 20:48:35.681694   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 20:42 /usr/share/ca-certificates/6692.pem
	* I0310 20:48:35.695254   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6692.pem
	* I0310 20:48:35.748262   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6692.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:35.815809   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5700.pem && ln -fs /usr/share/ca-certificates/5700.pem /etc/ssl/certs/5700.pem"
	* I0310 20:48:35.910004   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5700.pem
	* I0310 20:48:35.970953   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  1 19:58 /usr/share/ca-certificates/5700.pem
	* I0310 20:48:35.986910   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5700.pem
	* I0310 20:48:36.035569   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5700.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:36.169036   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1140.pem && ln -fs /usr/share/ca-certificates/1140.pem /etc/ssl/certs/1140.pem"
	* I0310 20:48:36.248538   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1140.pem
	* I0310 20:48:36.281799   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 02:25 /usr/share/ca-certificates/1140.pem
	* I0310 20:48:36.296020   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1140.pem
	* I0310 20:48:36.339195   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1140.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:36.400862   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9088.pem && ln -fs /usr/share/ca-certificates/9088.pem /etc/ssl/certs/9088.pem"
	* I0310 20:48:36.510827   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9088.pem
	* I0310 20:48:36.544150   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 00:22 /usr/share/ca-certificates/9088.pem
	* I0310 20:48:36.554515   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9088.pem
	* I0310 20:48:36.603970   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9088.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:36.661485   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6368.pem && ln -fs /usr/share/ca-certificates/6368.pem /etc/ssl/certs/6368.pem"
	* I0310 20:48:36.741293   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6368.pem
	* I0310 20:48:36.771506   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 19:11 /usr/share/ca-certificates/6368.pem
	* I0310 20:48:36.782867   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6368.pem
	* I0310 20:48:36.831341   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6368.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:36.901844   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3056.pem && ln -fs /usr/share/ca-certificates/3056.pem /etc/ssl/certs/3056.pem"
	* I0310 20:48:36.960231   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3056.pem
	* I0310 20:48:36.991039   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:11 /usr/share/ca-certificates/3056.pem
	* I0310 20:48:37.001755   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3056.pem
	* I0310 20:48:37.120470   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3056.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:37.219779   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/352.pem && ln -fs /usr/share/ca-certificates/352.pem /etc/ssl/certs/352.pem"
	* I0310 20:48:37.350451   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/352.pem
	* I0310 20:48:37.374271   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 12 14:51 /usr/share/ca-certificates/352.pem
	* I0310 20:48:37.385332   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/352.pem
	* I0310 20:48:37.493863   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/352.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:37.554705   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5396.pem && ln -fs /usr/share/ca-certificates/5396.pem /etc/ssl/certs/5396.pem"
	* I0310 20:48:37.655597   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5396.pem
	* I0310 20:48:37.694141   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  8 23:38 /usr/share/ca-certificates/5396.pem
	* I0310 20:48:37.710161   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5396.pem
	* I0310 20:48:37.762947   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5396.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:37.827899   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4944.pem && ln -fs /usr/share/ca-certificates/4944.pem /etc/ssl/certs/4944.pem"
	* I0310 20:48:37.894174   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4944.pem
	* I0310 20:48:37.915678   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  9 23:40 /usr/share/ca-certificates/4944.pem
	* I0310 20:48:37.927030   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4944.pem
	* I0310 20:48:37.967096   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4944.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:38.026740   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4452.pem && ln -fs /usr/share/ca-certificates/4452.pem /etc/ssl/certs/4452.pem"
	* I0310 20:48:38.090043   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4452.pem
	* I0310 20:48:38.126200   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 22 23:48 /usr/share/ca-certificates/4452.pem
	* I0310 20:48:38.136147   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4452.pem
	* I0310 20:48:38.239371   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4452.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:38.331111   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5372.pem && ln -fs /usr/share/ca-certificates/5372.pem /etc/ssl/certs/5372.pem"
	* I0310 20:48:38.413542   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5372.pem
	* I0310 20:48:38.442408   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 23 00:40 /usr/share/ca-certificates/5372.pem
	* I0310 20:48:38.452607   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5372.pem
	* I0310 20:48:38.499760   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5372.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:38.577229   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6856.pem && ln -fs /usr/share/ca-certificates/6856.pem /etc/ssl/certs/6856.pem"
	* I0310 20:48:38.641034   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6856.pem
	* I0310 20:48:38.674442   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 00:21 /usr/share/ca-certificates/6856.pem
	* I0310 20:48:38.690347   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6856.pem
	* I0310 20:48:38.757770   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6856.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:38.864479   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4052.pem && ln -fs /usr/share/ca-certificates/4052.pem /etc/ssl/certs/4052.pem"
	* I0310 20:48:38.942637   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4052.pem
	* I0310 20:48:39.028587   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 18:40 /usr/share/ca-certificates/4052.pem
	* I0310 20:48:39.038752   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4052.pem
	* I0310 20:48:39.107231   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4052.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:39.167667   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2512.pem && ln -fs /usr/share/ca-certificates/2512.pem /etc/ssl/certs/2512.pem"
	* I0310 20:48:39.232613   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2512.pem
	* I0310 20:48:39.263457   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:32 /usr/share/ca-certificates/2512.pem
	* I0310 20:48:39.273269   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2512.pem
	* I0310 20:48:39.334847   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2512.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:39.402674   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3920.pem && ln -fs /usr/share/ca-certificates/3920.pem /etc/ssl/certs/3920.pem"
	* I0310 20:48:39.571265   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3920.pem
	* I0310 20:48:39.604484   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 22:06 /usr/share/ca-certificates/3920.pem
	* I0310 20:48:39.633603   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3920.pem
	* I0310 20:48:39.805793   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3920.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:37.820712    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_etcd --format=: (3.9764734s)
	* I0310 20:48:37.820804    9740 logs.go:255] 3 containers: [97de25fff1e2 d32313e5411d d04b7875ec72]
	* I0310 20:48:37.830843    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format=
	* I0310 20:48:39.880251   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6552.pem && ln -fs /usr/share/ca-certificates/6552.pem /etc/ssl/certs/6552.pem"
	* I0310 20:48:39.959855   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6552.pem
	* I0310 20:48:40.024594   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 19 22:08 /usr/share/ca-certificates/6552.pem
	* I0310 20:48:40.036081   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6552.pem
	* I0310 20:48:40.106252   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6552.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:40.184730   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6496.pem && ln -fs /usr/share/ca-certificates/6496.pem /etc/ssl/certs/6496.pem"
	* I0310 20:48:40.351306   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6496.pem
	* I0310 20:48:40.391835   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 19:16 /usr/share/ca-certificates/6496.pem
	* I0310 20:48:40.401307   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6496.pem
	* I0310 20:48:40.477301   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6496.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:40.545447   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7432.pem && ln -fs /usr/share/ca-certificates/7432.pem /etc/ssl/certs/7432.pem"
	* I0310 20:48:40.618259   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7432.pem
	* I0310 20:48:40.658602   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 17:58 /usr/share/ca-certificates/7432.pem
	* I0310 20:48:40.674461   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7432.pem
	* I0310 20:48:40.736048   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7432.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:40.835411   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12056.pem && ln -fs /usr/share/ca-certificates/12056.pem /etc/ssl/certs/12056.pem"
	* I0310 20:48:40.908627   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/12056.pem
	* I0310 20:48:40.949747   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  6 07:21 /usr/share/ca-certificates/12056.pem
	* I0310 20:48:40.967671   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12056.pem
	* I0310 20:48:41.352067   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12056.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:41.417455   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8464.pem && ln -fs /usr/share/ca-certificates/8464.pem /etc/ssl/certs/8464.pem"
	* I0310 20:48:41.517072   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8464.pem
	* I0310 20:48:41.546197   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:32 /usr/share/ca-certificates/8464.pem
	* I0310 20:48:41.556596   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8464.pem
	* I0310 20:48:41.613329   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8464.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:41.714299   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10992.pem && ln -fs /usr/share/ca-certificates/10992.pem /etc/ssl/certs/10992.pem"
	* I0310 20:48:41.825921   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/10992.pem
	* I0310 20:48:41.849980   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 21:44 /usr/share/ca-certificates/10992.pem
	* I0310 20:48:41.864898   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10992.pem
	* I0310 20:48:41.922840   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10992.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:42.017877   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1156.pem && ln -fs /usr/share/ca-certificates/1156.pem /etc/ssl/certs/1156.pem"
	* I0310 20:48:42.178610   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1156.pem
	* I0310 20:48:42.222069   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 00:26 /usr/share/ca-certificates/1156.pem
	* I0310 20:48:42.232994   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1156.pem
	* I0310 20:48:42.311917   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1156.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:42.444203   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8748.pem && ln -fs /usr/share/ca-certificates/8748.pem /etc/ssl/certs/8748.pem"
	* I0310 20:48:42.551950   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8748.pem
	* I0310 20:48:42.626121   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 19:09 /usr/share/ca-certificates/8748.pem
	* I0310 20:48:42.637532   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8748.pem
	* I0310 20:48:42.693402   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8748.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:42.768815   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/232.pem && ln -fs /usr/share/ca-certificates/232.pem /etc/ssl/certs/232.pem"
	* I0310 20:48:42.854401   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/232.pem
	* I0310 20:48:42.900184   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 28 02:13 /usr/share/ca-certificates/232.pem
	* I0310 20:48:42.908801   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/232.pem
	* I0310 20:48:43.052775   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/232.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:43.262207   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6492.pem && ln -fs /usr/share/ca-certificates/6492.pem /etc/ssl/certs/6492.pem"
	* I0310 20:48:43.395161   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6492.pem
	* I0310 20:48:43.457844   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 01:11 /usr/share/ca-certificates/6492.pem
	* I0310 20:48:43.478189   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6492.pem
	* I0310 20:48:43.562538   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6492.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:43.709162   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/800.pem && ln -fs /usr/share/ca-certificates/800.pem /etc/ssl/certs/800.pem"
	* I0310 20:48:43.876180   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/800.pem
	* I0310 20:48:43.943286   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 24 01:48 /usr/share/ca-certificates/800.pem
	* I0310 20:48:43.962138   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/800.pem
	* I0310 20:48:44.029379   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/800.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:44.217921   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1476.pem && ln -fs /usr/share/ca-certificates/1476.pem /etc/ssl/certs/1476.pem"
	* I0310 20:48:44.301845   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1476.pem
	* I0310 20:48:44.376495   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:50 /usr/share/ca-certificates/1476.pem
	* I0310 20:48:44.387476   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1476.pem
	* I0310 20:48:44.456446   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1476.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:44.592511   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5736.pem && ln -fs /usr/share/ca-certificates/5736.pem /etc/ssl/certs/5736.pem"
	* I0310 20:48:44.700759   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5736.pem
	* I0310 20:48:44.738480   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 25 23:18 /usr/share/ca-certificates/5736.pem
	* I0310 20:48:44.748403   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5736.pem
	* I0310 20:48:44.805136   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5736.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:44.208737    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_coredns --format=: (6.3777058s)
	* I0310 20:48:44.208873    9740 logs.go:255] 0 containers: []
	* W0310 20:48:44.208873    9740 logs.go:257] No container was found matching "coredns"
	* I0310 20:48:44.219708    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format=
	* I0310 20:48:47.294457   10404 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-proxy --format=: (7.7983256s)
	* I0310 20:48:47.294457   10404 logs.go:255] 0 containers: []
	* W0310 20:48:47.294457   10404 logs.go:257] No container was found matching "kube-proxy"
	* I0310 20:48:47.302859   10404 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format=
	* I0310 20:48:44.928726   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	* I0310 20:48:45.044469   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	* I0310 20:48:45.095289   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Jan  5 23:33 /usr/share/ca-certificates/minikubeCA.pem
	* I0310 20:48:45.113350   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	* I0310 20:48:45.186929   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	* I0310 20:48:45.321146   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1728.pem && ln -fs /usr/share/ca-certificates/1728.pem /etc/ssl/certs/1728.pem"
	* I0310 20:48:45.445664   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1728.pem
	* I0310 20:48:45.482187   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 20:57 /usr/share/ca-certificates/1728.pem
	* I0310 20:48:45.502749   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1728.pem
	* I0310 20:48:45.554720   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:45.629427   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9520.pem && ln -fs /usr/share/ca-certificates/9520.pem /etc/ssl/certs/9520.pem"
	* I0310 20:48:45.697187   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9520.pem
	* I0310 20:48:45.727043   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 14:54 /usr/share/ca-certificates/9520.pem
	* I0310 20:48:45.738673   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9520.pem
	* I0310 20:48:45.808330   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9520.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:45.893296   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7440.pem && ln -fs /usr/share/ca-certificates/7440.pem /etc/ssl/certs/7440.pem"
	* I0310 20:48:45.982785   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7440.pem
	* I0310 20:48:46.020660   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 13 14:39 /usr/share/ca-certificates/7440.pem
	* I0310 20:48:46.035740   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7440.pem
	* I0310 20:48:46.097117   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7440.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:46.169019   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4588.pem && ln -fs /usr/share/ca-certificates/4588.pem /etc/ssl/certs/4588.pem"
	* I0310 20:48:46.291358   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4588.pem
	* I0310 20:48:46.386018   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  3 21:41 /usr/share/ca-certificates/4588.pem
	* I0310 20:48:46.420499   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4588.pem
	* I0310 20:48:46.475764   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4588.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:46.630846   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7160.pem && ln -fs /usr/share/ca-certificates/7160.pem /etc/ssl/certs/7160.pem"
	* I0310 20:48:46.703689   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7160.pem
	* I0310 20:48:46.735499   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 12 04:51 /usr/share/ca-certificates/7160.pem
	* I0310 20:48:46.754694   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7160.pem
	* I0310 20:48:46.814457   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7160.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:46.903052   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7024.pem && ln -fs /usr/share/ca-certificates/7024.pem /etc/ssl/certs/7024.pem"
	* I0310 20:48:46.969551   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7024.pem
	* I0310 20:48:47.014846   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 23:11 /usr/share/ca-certificates/7024.pem
	* I0310 20:48:47.025370   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7024.pem
	* I0310 20:48:47.103567   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7024.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:47.194455   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1984.pem && ln -fs /usr/share/ca-certificates/1984.pem /etc/ssl/certs/1984.pem"
	* I0310 20:48:47.297186   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1984.pem
	* I0310 20:48:47.356622   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 21:55 /usr/share/ca-certificates/1984.pem
	* I0310 20:48:47.360289   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1984.pem
	* I0310 20:48:47.460715   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1984.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:47.546824   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7452.pem && ln -fs /usr/share/ca-certificates/7452.pem /etc/ssl/certs/7452.pem"
	* I0310 20:48:47.627035   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7452.pem
	* I0310 20:48:47.662210   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 20 00:41 /usr/share/ca-certificates/7452.pem
	* I0310 20:48:47.673768   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7452.pem
	* I0310 20:48:47.749535   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7452.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:47.806523   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3516.pem && ln -fs /usr/share/ca-certificates/3516.pem /etc/ssl/certs/3516.pem"
	* I0310 20:48:47.961579   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3516.pem
	* I0310 20:48:47.999154   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 19:10 /usr/share/ca-certificates/3516.pem
	* I0310 20:48:48.008851   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3516.pem
	* I0310 20:48:48.071015   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3516.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:48.172030   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5172.pem && ln -fs /usr/share/ca-certificates/5172.pem /etc/ssl/certs/5172.pem"
	* I0310 20:48:48.248251   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5172.pem
	* I0310 20:48:48.281154   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 26 21:25 /usr/share/ca-certificates/5172.pem
	* I0310 20:48:48.296457   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5172.pem
	* I0310 20:48:48.348775   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5172.pem /etc/ssl/certs/51391683.0"
	* I0310 20:48:48.439206   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5040.pem && ln -fs /usr/share/ca-certificates/5040.pem /etc/ssl/certs/5040.pem"
	* I0310 20:48:48.554057   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5040.pem
	* I0310 20:48:48.618046   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 08:36 /usr/share/ca-certificates/5040.pem
	* I0310 20:48:48.632735   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5040.pem
	* I0310 20:48:48.683261   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5040.pem /etc/ssl/certs/51391683.0"
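The repeated `openssl x509 -hash` / `ln -fs` pairs above are minikube installing extra CA certificates: each certificate is symlinked under `/etc/ssl/certs` by its OpenSSL subject hash so the TLS stack can locate it by hash lookup. A minimal standalone sketch of the same pattern, using a throwaway self-signed certificate and illustrative `/tmp` paths rather than the real minikube paths:

```shell
# Generate a throwaway self-signed certificate (stand-in for the
# /usr/share/ca-certificates/*.pem files seen in the log above).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout /tmp/demo.key -out /tmp/demo.pem -days 1 2>/dev/null

# Compute the OpenSSL subject hash; this names the symlink.
hash=$(openssl x509 -hash -noout -in /tmp/demo.pem)

# Link the cert by hash, mirroring the "ln -fs ... /etc/ssl/certs/<hash>.0"
# commands in the log (first matching cert gets the ".0" suffix).
ln -fs /tmp/demo.pem "/tmp/${hash}.0"
```

The `test -L ... || ln -fs ...` form in the log simply makes the operation idempotent across reruns.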
	* I0310 20:48:48.768887   12928 kubeadm.go:385] StartCluster: {Name:old-k8s-version-20210310204459-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210310204459-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.3 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 20:48:48.775847   12928 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format=
	* I0310 20:48:49.462903   12928 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	* I0310 20:48:49.542762   12928 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	* I0310 20:48:49.646072   12928 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	* I0310 20:48:49.654944   12928 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	* I0310 20:48:49.767382   12928 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	* stdout:
	* 
	* stderr:
	* ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	* I0310 20:48:49.767628   12928 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	* I0310 20:48:51.217661    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-scheduler --format=: (6.9978193s)
	* I0310 20:48:51.217807    9740 logs.go:255] 2 containers: [a45e8b20db73 adb946d74113]
	* I0310 20:48:51.230353    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format=
	* I0310 20:48:53.196431   10404 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kubernetes-dashboard --format=: (5.8935798s)
	* I0310 20:48:53.196431   10404 logs.go:255] 0 containers: []
	* W0310 20:48:53.196431   10404 logs.go:257] No container was found matching "kubernetes-dashboard"
	* I0310 20:48:53.205549   10404 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format=
	* I0310 20:48:53.407075    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-proxy --format=: (2.1767246s)
	* I0310 20:48:53.407075    9740 logs.go:255] 0 containers: []
	* W0310 20:48:53.407075    9740 logs.go:257] No container was found matching "kube-proxy"
	* I0310 20:48:53.412664    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format=
	* I0310 20:48:57.371554   10404 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_storage-provisioner --format=: (4.1655837s)
	* I0310 20:48:57.371554   10404 logs.go:255] 0 containers: []
	* W0310 20:48:57.371554   10404 logs.go:257] No container was found matching "storage-provisioner"
	* I0310 20:48:57.378971   10404 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format=
	* I0310 20:48:56.924709    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kubernetes-dashboard --format=: (3.5120496s)
	* I0310 20:48:56.924709    9740 logs.go:255] 0 containers: []
	* W0310 20:48:56.924709    9740 logs.go:257] No container was found matching "kubernetes-dashboard"
	* I0310 20:48:56.935522    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format=
	* I0310 20:49:02.670111   10404 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-controller-manager --format=: (5.2911473s)
	* I0310 20:49:02.670526   10404 logs.go:255] 1 containers: [03bca09b9bc8]
	* I0310 20:49:02.670526   10404 logs.go:122] Gathering logs for kube-scheduler [78ecb22163a7] ...
	* I0310 20:49:02.670658   10404 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 78ecb22163a7"
	* I0310 20:49:04.112402    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_storage-provisioner --format=: (7.1768899s)
	* I0310 20:49:04.112402    9740 logs.go:255] 0 containers: []
	* W0310 20:49:04.112402    9740 logs.go:257] No container was found matching "storage-provisioner"
	* I0310 20:49:04.124363    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format=
	* I0310 20:49:11.101897    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-controller-manager --format=: (6.977543s)
	* I0310 20:49:11.102742    9740 logs.go:255] 2 containers: [c9ee9f47c709 66d44e1d7560]
	* I0310 20:49:11.102742    9740 logs.go:122] Gathering logs for kube-controller-manager [c9ee9f47c709] ...
	* I0310 20:49:11.102742    9740 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 c9ee9f47c709"
	* I0310 20:49:09.728821   10404 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 78ecb22163a7": (7.0573045s)
	* I0310 20:49:09.752679   10404 logs.go:122] Gathering logs for kubelet ...
	* I0310 20:49:09.752679   10404 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	* I0310 20:49:10.945624   10404 ssh_runner.go:189] Completed: /bin/bash -c "sudo journalctl -u kubelet -n 400": (1.1929464s)
	* I0310 20:49:11.016923   10404 logs.go:122] Gathering logs for dmesg ...
	* I0310 20:49:11.016923   10404 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	* I0310 20:49:11.986056   10404 logs.go:122] Gathering logs for describe nodes ...
	* I0310 20:49:11.986056   10404 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	* I0310 20:49:14.514482    9740 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 c9ee9f47c709": (3.4117446s)
	* I0310 20:49:14.532865    9740 logs.go:122] Gathering logs for kube-controller-manager [66d44e1d7560] ...
	* I0310 20:49:14.532865    9740 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 66d44e1d7560"
	* I0310 20:49:19.988671    9740 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 66d44e1d7560": (5.4558127s)
	* I0310 20:49:19.989681    9740 logs.go:122] Gathering logs for Docker ...
	* I0310 20:49:19.990189    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	* I0310 20:49:21.166750    9740 ssh_runner.go:189] Completed: /bin/bash -c "sudo journalctl -u docker -n 400": (1.176356s)
	* I0310 20:49:21.171939    9740 logs.go:122] Gathering logs for container status ...
	* I0310 20:49:21.171939    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	* I0310 20:49:25.093243    9740 ssh_runner.go:189] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (3.9209946s)
	* I0310 20:49:25.095161    9740 logs.go:122] Gathering logs for dmesg ...
	* I0310 20:49:25.095161    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	* I0310 20:49:26.220092    9740 ssh_runner.go:189] Completed: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400": (1.124676s)
	* I0310 20:49:26.223717    9740 logs.go:122] Gathering logs for describe nodes ...
	* I0310 20:49:26.223717    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	* I0310 20:49:34.089067   10404 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (22.1030399s)
	* I0310 20:49:34.089067   10404 logs.go:122] Gathering logs for etcd [11556200fc81] ...
	* I0310 20:49:34.089067   10404 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 11556200fc81"
	
-- /stdout --
** stderr ** 
	E0310 20:48:56.476547    5808 out.go:340] unable to execute * 2021-03-10 20:47:19.220495 W | etcdserver: request "header:<ID:10490704451482568358 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/force-systemd-env-20210310201637-6496\" mod_revision:843 > success:<request_put:<key:\"/registry/leases/kube-node-lease/force-systemd-env-20210310201637-6496\" value_size:622 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/force-systemd-env-20210310201637-6496\" > >>" with result "size:16" took too long (183.8353ms) to execute
	: html/template:* 2021-03-10 20:47:19.220495 W | etcdserver: request "header:<ID:10490704451482568358 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/force-systemd-env-20210310201637-6496\" mod_revision:843 > success:<request_put:<key:\"/registry/leases/kube-node-lease/force-systemd-env-20210310201637-6496\" value_size:622 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/force-systemd-env-20210310201637-6496\" > >>" with result "size:16" took too long (183.8353ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 20:48:56.508014    5808 out.go:340] unable to execute * 2021-03-10 20:47:30.826985 W | etcdserver: request "header:<ID:10490704451482568405 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:729 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:3881 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >>" with result "size:16" took too long (105.6115ms) to execute
	: html/template:* 2021-03-10 20:47:30.826985 W | etcdserver: request "header:<ID:10490704451482568405 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:729 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:3881 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >>" with result "size:16" took too long (105.6115ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 20:49:26.638106    5808 logs.go:183] command /bin/bash -c "docker logs --tail 25 0ae94b4c5465" failed with error: /bin/bash -c "docker logs --tail 25 0ae94b4c5465": Process exited with status 1
	stdout:
	
	stderr:
	Error: No such container: 0ae94b4c5465
	 output: "\n** stderr ** \nError: No such container: 0ae94b4c5465\n\n** /stderr **"
	E0310 20:49:34.425144    5808 out.go:335] unable to parse "* I0310 20:45:00.836917   12928 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 20:45:00.836917   12928 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 20:49:34.433247    5808 out.go:335] unable to parse "* I0310 20:45:01.866558   12928 cli_runner.go:168] Completed: docker system info --format \"{{json .}}\": (1.0296417s)\n": template: * I0310 20:45:01.866558   12928 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0296417s)
	:1: function "json" not defined - returning raw string.
	E0310 20:49:34.458007    5808 out.go:335] unable to parse "* I0310 20:45:03.020868   12928 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 20:45:03.020868   12928 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 20:49:34.472846    5808 out.go:335] unable to parse "* I0310 20:45:04.040267   12928 cli_runner.go:168] Completed: docker system info --format \"{{json .}}\": (1.0194001s)\n": template: * I0310 20:45:04.040267   12928 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0194001s)
	:1: function "json" not defined - returning raw string.
	E0310 20:49:34.625689    5808 out.go:340] unable to execute * I0310 20:45:04.764196   12928 cli_runner.go:115] Run: docker network inspect old-k8s-version-20210310204459-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	: template: * I0310 20:45:04.764196   12928 cli_runner.go:115] Run: docker network inspect old-k8s-version-20210310204459-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	:1:291: executing "* I0310 20:45:04.764196   12928 cli_runner.go:115] Run: docker network inspect old-k8s-version-20210310204459-6496 --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\"\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 20:49:34.679778    5808 out.go:340] unable to execute * W0310 20:45:05.343119   12928 cli_runner.go:162] docker network inspect old-k8s-version-20210310204459-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	: template: * W0310 20:45:05.343119   12928 cli_runner.go:162] docker network inspect old-k8s-version-20210310204459-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	:1:286: executing "* W0310 20:45:05.343119   12928 cli_runner.go:162] docker network inspect old-k8s-version-20210310204459-6496 --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\" returned with exit code 1\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 20:49:34.756661    5808 out.go:340] unable to execute * I0310 20:45:05.948873   12928 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	: template: * I0310 20:45:05.948873   12928 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	:1:262: executing "* I0310 20:45:05.948873   12928 cli_runner.go:115] Run: docker network inspect bridge --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\"\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 20:49:34.993343    5808 out.go:335] unable to parse "* I0310 20:45:13.420956   12928 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 20:45:13.420956   12928 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 20:49:35.248994    5808 out.go:335] unable to parse "* I0310 20:45:14.432368   12928 cli_runner.go:168] Completed: docker system info --format \"{{json .}}\": (1.0103571s)\n": template: * I0310 20:45:14.432368   12928 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0103571s)
	:1: function "json" not defined - returning raw string.
	E0310 20:49:35.251935    5808 out.go:335] unable to parse "* I0310 20:45:14.442693   12928 cli_runner.go:115] Run: docker info --format \"'{{json .SecurityOptions}}'\"\n": template: * I0310 20:45:14.442693   12928 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	:1: function "json" not defined - returning raw string.
	E0310 20:49:35.338323    5808 out.go:335] unable to parse "* I0310 20:45:15.468738   12928 cli_runner.go:168] Completed: docker info --format \"'{{json .SecurityOptions}}'\": (1.0260466s)\n": template: * I0310 20:45:15.468738   12928 cli_runner.go:168] Completed: docker info --format "'{{json .SecurityOptions}}'": (1.0260466s)
	:1: function "json" not defined - returning raw string.
	E0310 20:49:35.531270    5808 out.go:340] unable to execute * I0310 20:45:26.345166   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	: template: * I0310 20:45:26.345166   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	:1:96: executing "* I0310 20:45:26.345166   12928 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" old-k8s-version-20210310204459-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:49:35.554764    5808 out.go:335] unable to parse "* I0310 20:45:26.971298   12928 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55138 <nil> <nil>}\n": template: * I0310 20:45:26.971298   12928 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55138 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 20:49:35.683955    5808 out.go:340] unable to execute * I0310 20:45:30.949618   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	: template: * I0310 20:45:30.949618   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	:1:96: executing "* I0310 20:45:30.949618   12928 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" old-k8s-version-20210310204459-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:49:35.702920    5808 out.go:335] unable to parse "* I0310 20:45:31.551704   12928 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55138 <nil> <nil>}\n": template: * I0310 20:45:31.551704   12928 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55138 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 20:49:35.820871    5808 out.go:340] unable to execute * I0310 20:45:33.266434   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	: template: * I0310 20:45:33.266434   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	:1:96: executing "* I0310 20:45:33.266434   12928 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" old-k8s-version-20210310204459-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:49:35.928613    5808 out.go:340] unable to execute * I0310 20:45:35.296522   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	: template: * I0310 20:45:35.296522   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	:1:96: executing "* I0310 20:45:35.296522   12928 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" old-k8s-version-20210310204459-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:49:35.947849    5808 out.go:335] unable to parse "* I0310 20:45:35.957808   12928 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55138 <nil> <nil>}\n": template: * I0310 20:45:35.957808   12928 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55138 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 20:49:36.011851    5808 out.go:340] unable to execute * I0310 20:45:36.620128   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	: template: * I0310 20:45:36.620128   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	:1:96: executing "* I0310 20:45:36.620128   12928 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" old-k8s-version-20210310204459-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:49:36.036019    5808 out.go:335] unable to parse "* I0310 20:45:37.214230   12928 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55138 <nil> <nil>}\n": template: * I0310 20:45:37.214230   12928 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55138 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 20:49:36.478288    5808 out.go:340] unable to execute * I0310 20:45:38.155124   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	: template: * I0310 20:45:38.155124   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	:1:96: executing "* I0310 20:45:38.155124   12928 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" old-k8s-version-20210310204459-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:49:36.489300    5808 out.go:335] unable to parse "* I0310 20:45:38.756832   12928 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55138 <nil> <nil>}\n": template: * I0310 20:45:38.756832   12928 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55138 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 20:49:36.928683    5808 out.go:340] unable to execute * I0310 20:45:49.766365   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	: template: * I0310 20:45:49.766365   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	:1:96: executing "* I0310 20:45:49.766365   12928 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" old-k8s-version-20210310204459-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:49:37.040151    5808 out.go:340] unable to execute * I0310 20:45:52.196673   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	: template: * I0310 20:45:52.196673   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	:1:96: executing "* I0310 20:45:52.196673   12928 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" old-k8s-version-20210310204459-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:49:37.100845    5808 out.go:340] unable to execute * I0310 20:45:53.953944   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	: template: * I0310 20:45:53.953944   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	:1:96: executing "* I0310 20:45:53.953944   12928 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" old-k8s-version-20210310204459-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:49:37.101108    5808 out.go:340] unable to execute * I0310 20:45:53.956958   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	: template: * I0310 20:45:53.956958   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	:1:96: executing "* I0310 20:45:53.956958   12928 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" old-k8s-version-20210310204459-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:49:37.246908    5808 out.go:340] unable to execute * I0310 20:45:58.790284   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	: template: * I0310 20:45:58.790284   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	:1:96: executing "* I0310 20:45:58.790284   12928 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"8443/tcp\") 0).HostPort}}'\" old-k8s-version-20210310204459-6496\n" at <index .NetworkSettings.Ports "8443/tcp">: error calling index: index of untyped nil - returning raw string.
	! unable to fetch logs for: storage-provisioner [0ae94b4c5465]
** /stderr **
helpers_test.go:245: failed logs error: exit status 110
helpers_test.go:171: Cleaning up "force-systemd-env-20210310201637-6496" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-20210310201637-6496
=== CONT  TestForceSystemdEnv
helpers_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-20210310201637-6496: (37.0935085s)
--- FAIL: TestForceSystemdEnv (2020.19s)
x
+
TestErrorSpam (1702.16s)
=== RUN   TestErrorSpam
=== PAUSE TestErrorSpam
=== CONT  TestErrorSpam
error_spam_test.go:64: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-20210310201637-6496 -n=1 --memory=2250 --wait=false --driver=docker
error_spam_test.go:64: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-20210310201637-6496 -n=1 --memory=2250 --wait=false --driver=docker: (24m1.7552312s)
error_spam_test.go:79: unexpected stderr: "! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts"
error_spam_test.go:74: acceptable stderr: "! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1"
error_spam_test.go:79: unexpected stderr: "stdout:"
error_spam_test.go:79: unexpected stderr: "stderr:"
error_spam_test.go:79: unexpected stderr: "Unable to connect to the server: net/http: TLS handshake timeout"
error_spam_test.go:79: unexpected stderr: "]"
error_spam_test.go:93: minikube stdout:
* [nospam-20210310201637-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
- MINIKUBE_LOCATION=10722
* Using the docker driver based on user configuration
* Starting control plane node nospam-20210310201637-6496 in cluster nospam-20210310201637-6496
* Creating docker container (CPUs=2, Memory=2250MB) ...
* Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v4
* Enabled addons: default-storageclass
* Done! kubectl is now configured to use "nospam-20210310201637-6496" cluster and "default" namespace by default
error_spam_test.go:94: minikube stderr:
! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts
! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:

stderr:
Unable to connect to the server: net/http: TLS handshake timeout
]
error_spam_test.go:107: *** TestErrorSpam FAILED at 2021-03-10 20:40:39.3635127 +0000 GMT m=+5779.065230001
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:226: ======>  post-mortem[TestErrorSpam]: docker inspect <======
helpers_test.go:227: (dbg) Run:  docker inspect nospam-20210310201637-6496
helpers_test.go:231: (dbg) docker inspect nospam-20210310201637-6496:
-- stdout --
	[
	    {
	        "Id": "b36dcfddebf967fe9b442bc1ea3ff7e8a4e84608ce7469f44d867ab81a3920f9",
	        "Created": "2021-03-10T20:16:56.369959Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 123906,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-03-10T20:16:58.8916776Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a776c544501ab7f8d55c0f9d8df39bc284df5e744ef1ab4fa59bbd753c98d5f6",
	        "ResolvConfPath": "/var/lib/docker/containers/b36dcfddebf967fe9b442bc1ea3ff7e8a4e84608ce7469f44d867ab81a3920f9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b36dcfddebf967fe9b442bc1ea3ff7e8a4e84608ce7469f44d867ab81a3920f9/hostname",
	        "HostsPath": "/var/lib/docker/containers/b36dcfddebf967fe9b442bc1ea3ff7e8a4e84608ce7469f44d867ab81a3920f9/hosts",
	        "LogPath": "/var/lib/docker/containers/b36dcfddebf967fe9b442bc1ea3ff7e8a4e84608ce7469f44d867ab81a3920f9/b36dcfddebf967fe9b442bc1ea3ff7e8a4e84608ce7469f44d867ab81a3920f9-json.log",
	        "Name": "/nospam-20210310201637-6496",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "nospam-20210310201637-6496:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2359296000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2359296000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e5f5b4d9a74f0227ea94d3610c5077b7ec8431260b30cb8063d990a1ea99c512-init/diff:/var/lib/docker/overlay2/e5312d15a8a915e93a04215cc746ab0f3ee777399eec34d39c62e1159ded6475/diff:/var/lib/docker/overlay2/7a0a882052451678339ecb9d7440ec6d5af2189761df37cbde268f01c77df460/diff:/var/lib/docker/overlay2/746bde091f748e661ba6c95c88447941058b3ed556ae8093cbbb4e946b8cc8fc/diff:/var/lib/docker/overlay2/fc816087ea38e2594891ac87d2dada91401ed5005ffde006568049c9be7a9cb3/diff:/var/lib/docker/overlay2/c938fe5d14accfe7fb1cfd038f5f0c36e7c4e36df3f270be6928a759dfc0318c/diff:/var/lib/docker/overlay2/79f1d61f3af3e525b3f9a25c7bdaddec8275b7d0abab23c045e084119ae755dc/diff:/var/lib/docker/overlay2/90cf1530750ebebdcb04a136dc1eff0ae9c5f7d0f826d2942e3bc67cc0d42206/diff:/var/lib/docker/overlay2/c4edb0a62bb52174a4a17530aa58139aaca1b2f67bef36b9ac59af93c93e2c94/diff:/var/lib/docker/overlay2/684c18bb05910b09352e8708238b2fa6512de91e323e59bd78eacb12954baeb1/diff:/var/lib/docker/overlay2/2e3b944d36fea0e106f5f8885b981d040914c224a656b6480e4ccf00c624527c/diff:/var/lib/docker/overlay2/87489103e2d5dffa2fc32cae802418fe57adcbe87d86e2828e51be8620ef72d6/diff:/var/lib/docker/overlay2/1ceb557df677600f55a42e739f85c2aaa6ce2a1311fb93d5cc655d3e5c25ad62/diff:/var/lib/docker/overlay2/a49e38adf04466e822fa1b2fef7a5de41bb43b706f64d6d440f7e9f95f86634d/diff:/var/lib/docker/overlay2/9dc51530b3628075c33d53a96d8db831d206f65190ca62a4730d525c9d2433bc/diff:/var/lib/docker/overlay2/53f43d443cc953bcbb3b9fe305be45d40de2233dd6a1e94a6e0120c143df01b9/diff:/var/lib/docker/overlay2/5449e801ebacbe613538db4de658f2419f742a0327f7c0f5079ee525f64c2f45/diff:/var/lib/docker/overlay2/ea06a62429770eade2e9df8b04495201586fe6a867b2d3f827db06031f237171/diff:/var/lib/docker/overlay2/d6c5e6e0bb04112b08a45e5a42f364eaa6a1d4d0384b8833c6f268a32b51f9fd/diff:/var/lib/docker/overlay2/6ccb699e5aa7954f17536e35273361a64a06f22e3951b5adad1acd3645302d27/diff:/var/lib/docker/overlay2/004fc7885cd3ddf4f532a6047a393b87947996f738f36ee3e048130b82676264/diff:/var/lib/docker/overlay2/28ff0102771d8fb3c2957c482cc5dde1a6d041144f1be78d385fd4a305b89048/diff:/var/lib/docker/overlay2/7cc8b00a8717390d3cf217bc23ebf0a54386c4dda751f8efe67a113c5f724000/diff:/var/lib/docker/overlay2/4f20f5888dc26fc4b177fad4ffbb2c9e5094bbe36dffc15530b13ee98b5ee0de/diff:/var/lib/docker/overlay2/5070ba0e8a3df0c0cb4c34bb528c4e1aa4cf13d689f2bbbb0a4033d021c3be55/diff:/var/lib/docker/overlay2/85fb29f9d3603bf7f2eb144a096a9fa432e57ecf575b0c26cc8e06a79e791202/diff:/var/lib/docker/overlay2/27786674974d44dc21a18719c8c334da44a78f825923f5e86233f5acb0fb9a6b/diff:/var/lib/docker/overlay2/6161fd89f27732c46f547c0e38ff9f16ab0dded81cef66938df3935f21afad40/diff:/var/lib/docker/overlay2/222f85b8439a41d62b9939a4060303543930a89e81bc2fbb3f5211b3bcc73501/diff:/var/lib/docker/overlay2/0ab769dc143c949b9367c48721351c8571d87199847b23acedce0134a8cf4d0d/diff:/var/lib/docker/overlay2/a3d1b5f3c1ccf55415e08e306d69739f2d6a4346a4b20ff9273ac1417c146a72/diff:/var/lib/docker/overlay2/a6111408a4deb25c6259bdabf49b0b77a4ae38a1c9c76c9dd238ab00e4377edd/diff:/var/lib/docker/overlay2/beb22abaee6a1cc2ce6935aa31a8ea6aa14ba428d35b5c2423d1d9e4b5b1e88e/diff:/var/lib/docker/overlay2/0c228b1d0cd4cc1d3df10a7397af2a91846bfbad745d623822dc200ff7ddd4f7/diff:/var/lib/docker/overlay2/3c7f265116e8b14ac4c171583da8e68ad42c63e9faa735c10d5b01c2889c03d8/diff:/var/lib/docker/overlay2/85c9b77072d5575bcf4bc334d0e85cccec247e2ff21bc8a181ee7c1411481a92/diff:/var/lib/docker/overlay2/1b4797f63f1bf0acad8cd18715c737239cbce844810baf1378b65224c01957ff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e5f5b4d9a74f0227ea94d3610c5077b7ec8431260b30cb8063d990a1ea99c512/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e5f5b4d9a74f0227ea94d3610c5077b7ec8431260b30cb8063d990a1ea99c512/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e5f5b4d9a74f0227ea94d3610c5077b7ec8431260b30cb8063d990a1ea99c512/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "nospam-20210310201637-6496",
	                "Source": "/var/lib/docker/volumes/nospam-20210310201637-6496/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "nospam-20210310201637-6496",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "nospam-20210310201637-6496",
	                "name.minikube.sigs.k8s.io": "nospam-20210310201637-6496",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b912d3c3d497577c4e40e3daedcc841003877b21e395897f3a9a602ce64bdcc3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55099"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55091"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55083"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55086"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55084"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/b912d3c3d497",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "ad44b830d1394fd57705770d5da490891d175413bd5919e77760d152a17561fa",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.3",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:03",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "08a9fabe5ecafdd8ee96dd0a14d1906037e82623ac7365c62552213da5f59cf7",
	                    "EndpointID": "ad44b830d1394fd57705770d5da490891d175413bd5919e77760d152a17561fa",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.3",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:03",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p nospam-20210310201637-6496 -n nospam-20210310201637-6496
helpers_test.go:235: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p nospam-20210310201637-6496 -n nospam-20210310201637-6496: (1m2.4548644s)
helpers_test.go:240: <<< TestErrorSpam FAILED: start of post-mortem logs <<<
helpers_test.go:241: ======>  post-mortem[TestErrorSpam]: minikube logs <======
helpers_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20210310201637-6496 logs -n 25
helpers_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20210310201637-6496 logs -n 25: (2m43.2295332s)
helpers_test.go:248: TestErrorSpam logs: 
-- stdout --
	* ==> Docker <==
	* -- Logs begin at Wed 2021-03-10 20:17:03 UTC, end at Wed 2021-03-10 20:42:59 UTC. --
	* Mar 10 20:24:43 nospam-20210310201637-6496 dockerd[469]: time="2021-03-10T20:24:43.585423500Z" level=info msg="Daemon shutdown complete"
	* Mar 10 20:24:43 nospam-20210310201637-6496 systemd[1]: docker.service: Succeeded.
	* Mar 10 20:24:43 nospam-20210310201637-6496 systemd[1]: Stopped Docker Application Container Engine.
	* Mar 10 20:24:43 nospam-20210310201637-6496 systemd[1]: Starting Docker Application Container Engine...
	* Mar 10 20:24:43 nospam-20210310201637-6496 dockerd[744]: time="2021-03-10T20:24:43.839603500Z" level=info msg="Starting up"
	* Mar 10 20:24:44 nospam-20210310201637-6496 dockerd[744]: time="2021-03-10T20:24:44.211480000Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Mar 10 20:24:44 nospam-20210310201637-6496 dockerd[744]: time="2021-03-10T20:24:44.211592500Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Mar 10 20:24:44 nospam-20210310201637-6496 dockerd[744]: time="2021-03-10T20:24:44.211646200Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	* Mar 10 20:24:44 nospam-20210310201637-6496 dockerd[744]: time="2021-03-10T20:24:44.211676400Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Mar 10 20:24:44 nospam-20210310201637-6496 dockerd[744]: time="2021-03-10T20:24:44.233613900Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Mar 10 20:24:44 nospam-20210310201637-6496 dockerd[744]: time="2021-03-10T20:24:44.233721600Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Mar 10 20:24:44 nospam-20210310201637-6496 dockerd[744]: time="2021-03-10T20:24:44.233763200Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	* Mar 10 20:24:44 nospam-20210310201637-6496 dockerd[744]: time="2021-03-10T20:24:44.233784400Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Mar 10 20:24:54 nospam-20210310201637-6496 dockerd[744]: time="2021-03-10T20:24:54.842837000Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	* Mar 10 20:24:54 nospam-20210310201637-6496 dockerd[744]: time="2021-03-10T20:24:54.975568500Z" level=info msg="Loading containers: start."
	* Mar 10 20:24:56 nospam-20210310201637-6496 dockerd[744]: time="2021-03-10T20:24:56.640994600Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	* Mar 10 20:24:57 nospam-20210310201637-6496 dockerd[744]: time="2021-03-10T20:24:57.402166200Z" level=info msg="Loading containers: done."
	* Mar 10 20:24:57 nospam-20210310201637-6496 dockerd[744]: time="2021-03-10T20:24:57.691355200Z" level=info msg="Docker daemon" commit=46229ca graphdriver(s)=overlay2 version=20.10.3
	* Mar 10 20:24:57 nospam-20210310201637-6496 dockerd[744]: time="2021-03-10T20:24:57.694470100Z" level=info msg="Daemon has completed initialization"
	* Mar 10 20:24:57 nospam-20210310201637-6496 systemd[1]: Started Docker Application Container Engine.
	* Mar 10 20:24:58 nospam-20210310201637-6496 dockerd[744]: time="2021-03-10T20:24:58.243713400Z" level=info msg="API listen on [::]:2376"
	* Mar 10 20:24:58 nospam-20210310201637-6496 dockerd[744]: time="2021-03-10T20:24:58.394167300Z" level=info msg="API listen on /var/run/docker.sock"
	* Mar 10 20:28:42 nospam-20210310201637-6496 dockerd[744]: time="2021-03-10T20:28:42.006852000Z" level=info msg="ignoring event" container=2f93a5f201c01e229ad65deec372f8d7e99aa0336597369ba0d67c84d1842acb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 20:29:45 nospam-20210310201637-6496 dockerd[744]: time="2021-03-10T20:29:45.574788500Z" level=error msg="Handler for GET /v1.40/containers/1b46b1a66c7fa5d3c7fcf3765ae3cb22c4307d1eb28afee5799a856e7ad544bb/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
	* Mar 10 20:29:45 nospam-20210310201637-6496 dockerd[744]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	* bdf0740c57ddb       bfe3a36ebd252       8 minutes ago       Running             coredns                   0                   f0c9f97f7cc83
	* 1b8135ee4dd84       43154ddb57a83       10 minutes ago      Running             kube-proxy                0                   8454f2a23588f
	* 1b46b1a66c7fa       a27166429d98e       13 minutes ago      Running             kube-controller-manager   1                   1e5ddf57ff444
	* 58f19e1557579       ed2c44fbdd78b       16 minutes ago      Running             kube-scheduler            0                   310b7554e9c6d
	* 2f93a5f201c01       a27166429d98e       16 minutes ago      Exited              kube-controller-manager   0                   1e5ddf57ff444
	* a39f9e950f650       0369cf4303ffd       16 minutes ago      Running             etcd                      0                   3281420e87541
	* 370102cd67063       a8c2fdb8bf76e       16 minutes ago      Running             kube-apiserver            0                   c27b2707b8af1
	* 
	* ==> coredns [bdf0740c57dd] <==
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	* CoreDNS-1.7.0
	* linux/amd64, go1.14.4, f59c03d
	* 
	* ==> describe nodes <==
	* Name:               nospam-20210310201637-6496
	* Roles:              control-plane,master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=nospam-20210310201637-6496
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=4d52d607da107e3e4541e40b81376520ee87d4c2
	*                     minikube.k8s.io/name=nospam-20210310201637-6496
	*                     minikube.k8s.io/updated_at=2021_03_10T20_29_17_0700
	*                     minikube.k8s.io/version=v1.18.1
	*                     node-role.kubernetes.io/control-plane=
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Wed, 10 Mar 2021 20:28:35 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  nospam-20210310201637-6496
	*   AcquireTime:     <unset>
	*   RenewTime:       Wed, 10 Mar 2021 20:43:27 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Wed, 10 Mar 2021 20:42:57 +0000   Wed, 10 Mar 2021 20:42:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Wed, 10 Mar 2021 20:42:57 +0000   Wed, 10 Mar 2021 20:42:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Wed, 10 Mar 2021 20:42:57 +0000   Wed, 10 Mar 2021 20:42:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Wed, 10 Mar 2021 20:42:57 +0000   Wed, 10 Mar 2021 20:42:57 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  172.17.0.3
	*   Hostname:    nospam-20210310201637-6496
	* Capacity:
	*   cpu:                4
	*   ephemeral-storage:  65792556Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             20481980Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                4
	*   ephemeral-storage:  65792556Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             20481980Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 84fb46bd39d2483a97ab4430ee4a5e3a
	*   System UUID:                d6681193-47aa-4c7f-9e19-bd161f6ce73a
	*   Boot ID:                    1e43cb90-c73a-415b-9855-33dabbdc5a83
	*   Kernel Version:             4.19.121-linuxkit
	*   OS Image:                   Ubuntu 20.04.1 LTS
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://20.10.3
	*   Kubelet Version:            v1.20.2
	*   Kube-Proxy Version:         v1.20.2
	* PodCIDR:                      10.244.0.0/24
	* PodCIDRs:                     10.244.0.0/24
	* Non-terminated Pods:          (6 in total)
	*   Namespace                   Name                                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                                                  ------------  ----------  ---------------  -------------  ---
	*   kube-system                 coredns-74ff55c5b-mrl7v                               100m (2%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	*   kube-system                 etcd-nospam-20210310201637-6496                       100m (2%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	*   kube-system                 kube-apiserver-nospam-20210310201637-6496             250m (6%)     0 (0%)      0 (0%)           0 (0%)         11m
	*   kube-system                 kube-controller-manager-nospam-20210310201637-6496    200m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	*   kube-system                 kube-proxy-g5b25                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	*   kube-system                 kube-scheduler-nospam-20210310201637-6496             100m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests    Limits
	*   --------           --------    ------
	*   cpu                750m (18%)  0 (0%)
	*   memory             170Mi (0%)  170Mi (0%)
	*   ephemeral-storage  100Mi (0%)  0 (0%)
	*   hugepages-1Gi      0 (0%)      0 (0%)
	*   hugepages-2Mi      0 (0%)      0 (0%)
	* Events:
	*   Type    Reason                   Age                From        Message
	*   ----    ------                   ----               ----        -------
	*   Normal  Starting                 13m                kubelet     Starting kubelet.
	*   Normal  NodeAllocatableEnforced  11m                kubelet     Updated Node Allocatable limit across pods
	*   Normal  Starting                 9m59s              kube-proxy  Starting kube-proxy.
	*   Normal  NodeNotReady             80s (x2 over 12m)  kubelet     Node nospam-20210310201637-6496 status is now: NodeNotReady
	*   Normal  NodeHasNoDiskPressure    41s (x2 over 12m)  kubelet     Node nospam-20210310201637-6496 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     41s (x2 over 12m)  kubelet     Node nospam-20210310201637-6496 status is now: NodeHasSufficientPID
	*   Normal  NodeReady                41s (x2 over 11m)  kubelet     Node nospam-20210310201637-6496 status is now: NodeReady
	*   Normal  NodeHasSufficientMemory  41s (x2 over 12m)  kubelet     Node nospam-20210310201637-6496 status is now: NodeHasSufficientMemory
	* 
	* ==> dmesg <==
	* [  +0.000006]  __hrtimer_run_queues+0x117/0x1c4
	* [  +0.000004]  ? ktime_get_update_offsets_now+0x36/0x95
	* [  +0.000002]  hrtimer_interrupt+0x92/0x165
	* [  +0.000004]  hv_stimer0_isr+0x20/0x2d
	* [  +0.000008]  hv_stimer0_vector_handler+0x3b/0x57
	* [  +0.000010]  hv_stimer0_callback_vector+0xf/0x20
	* [  +0.000001]  </IRQ>
	* [  +0.000002] RIP: 0010:native_safe_halt+0x7/0x8
	* [  +0.000002] Code: 60 02 df f0 83 44 24 fc 00 48 8b 00 a8 08 74 0b 65 81 25 dd ce 6f 71 ff ff ff 7f c3 e8 ce e6 72 ff f4 c3 e8 c7 e6 72 ff fb f4 <c3> 0f 1f 44 00 00 53 e8 69 0e 82 ff 65 8b 35 83 64 6f 71 31 ff e8
	* [  +0.000001] RSP: 0018:ffffffff8f203eb0 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff12
	* [  +0.000002] RAX: ffffffff8e918b30 RBX: 0000000000000000 RCX: ffffffff8f253150
	* [  +0.000001] RDX: 000000000012167e RSI: 0000000000000000 RDI: 0000000000000001
	* [  +0.000001] RBP: 0000000000000000 R08: 00000066a1710248 R09: 0000006be2541d3e
	* [  +0.000001] R10: ffff9130ad802288 R11: 0000000000000000 R12: 0000000000000000
	* [  +0.000001] R13: ffffffff8f215780 R14: 00000000f6d76244 R15: 0000000000000000
	* [  +0.000002]  ? __sched_text_end+0x1/0x1
	* [  +0.000011]  default_idle+0x1b/0x2c
	* [  +0.000001]  do_idle+0xe5/0x216
	* [  +0.000003]  cpu_startup_entry+0x6f/0x71
	* [  +0.000003]  start_kernel+0x4f6/0x514
	* [  +0.000006]  secondary_startup_64+0xa4/0xb0
	* [  +0.000006] ---[ end trace 8aa9ce4b885e8e86 ]---
	* [ +25.977799] hrtimer: interrupt took 3356400 ns
	* [Mar10 19:08] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	* [Mar10 19:49] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	* 
	* ==> etcd [a39f9e950f65] <==
	* 2021-03-10 20:42:26.838852 W | etcdserver: request "header:<ID:12691275819406794057 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:520 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:3866 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >>" with result "size:16" took too long (520.0428ms) to execute
	* 2021-03-10 20:42:29.125416 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 20:42:38.442497 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 20:42:41.285697 W | etcdserver: read-only range request "key:\"/registry/prioritylevelconfigurations/catch-all\" " with result "range_response_count:1 size:484" took too long (164.9655ms) to execute
	* 2021-03-10 20:42:42.754385 W | etcdserver: read-only range request "key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" count_only:true " with result "range_response_count:0 size:7" took too long (643.8896ms) to execute
	* 2021-03-10 20:42:47.355467 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 20:42:49.950160 W | etcdserver: read-only range request "key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" count_only:true " with result "range_response_count:0 size:5" took too long (131.0158ms) to execute
	* 2021-03-10 20:42:51.140045 W | etcdserver: request "header:<ID:12691275819406794145 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-74ff55c5b-mrl7v.166b1581c9e59840\" mod_revision:637 > success:<request_put:<key:\"/registry/events/kube-system/coredns-74ff55c5b-mrl7v.166b1581c9e59840\" value_size:806 lease:3467903782552018226 >> failure:<request_range:<key:\"/registry/events/kube-system/coredns-74ff55c5b-mrl7v.166b1581c9e59840\" > >>" with result "size:16" took too long (126.0661ms) to execute
	* 2021-03-10 20:43:00.815936 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 20:43:05.089312 W | etcdserver: request "header:<ID:12691275819406794212 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-scheduler-nospam-20210310201637-6496.166b15dc332a1240\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-scheduler-nospam-20210310201637-6496.166b15dc332a1240\" value_size:778 lease:3467903782552018226 >> failure:<>>" with result "size:16" took too long (165.937ms) to execute
	* 2021-03-10 20:43:05.415047 W | etcdserver: request "header:<ID:12691275819406794213 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/nospam-20210310201637-6496\" mod_revision:645 > success:<request_put:<key:\"/registry/leases/kube-node-lease/nospam-20210310201637-6496\" value_size:590 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/nospam-20210310201637-6496\" > >>" with result "size:16" took too long (110.3705ms) to execute
	* 2021-03-10 20:43:05.657557 W | etcdserver: request "header:<ID:12691275819406794214 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/nospam-20210310201637-6496\" mod_revision:657 > success:<request_put:<key:\"/registry/minions/nospam-20210310201637-6496\" value_size:5688 >> failure:<request_range:<key:\"/registry/minions/nospam-20210310201637-6496\" > >>" with result "size:16" took too long (219.5276ms) to execute
	* 2021-03-10 20:43:05.741602 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-apiserver-nospam-20210310201637-6496\" " with result "range_response_count:1 size:7222" took too long (500.0405ms) to execute
	* 2021-03-10 20:43:05.808046 W | etcdserver: read-only range request "key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true " with result "range_response_count:0 size:5" took too long (234.7574ms) to execute
	* 2021-03-10 20:43:05.808479 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/kube-controller-manager-nospam-20210310201637-6496.166b15d83f67f210\" " with result "range_response_count:1 size:928" took too long (234.6747ms) to execute
	* 2021-03-10 20:43:05.812192 W | etcdserver: read-only range request "key:\"/registry/persistentvolumeclaims/\" range_end:\"/registry/persistentvolumeclaims0\" count_only:true " with result "range_response_count:0 size:5" took too long (314.6499ms) to execute
	* 2021-03-10 20:43:05.812485 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (299.7921ms) to execute
	* 2021-03-10 20:43:06.079449 W | etcdserver: request "header:<ID:12691275819406794218 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-controller-manager-nospam-20210310201637-6496.166b15d83f67f210\" mod_revision:652 > success:<request_put:<key:\"/registry/events/kube-system/kube-controller-manager-nospam-20210310201637-6496.166b15d83f67f210\" value_size:799 lease:3467903782552018226 >> failure:<request_range:<key:\"/registry/events/kube-system/kube-controller-manager-nospam-20210310201637-6496.166b15d83f67f210\" > >>" with result "size:16" took too long (108.6118ms) to execute
	* 2021-03-10 20:43:06.383939 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 20:43:16.312249 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 20:43:27.174153 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 20:43:29.148123 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (126.7048ms) to execute
	* 2021-03-10 20:43:33.554807 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-scheduler-nospam-20210310201637-6496\" " with result "range_response_count:1 size:4079" took too long (110.9438ms) to execute
	* 2021-03-10 20:43:37.797718 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 20:43:46.487785 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 
	* ==> kernel <==
	*  20:43:49 up  1:44,  0 users,  load average: 157.17, 162.75, 121.65
	* Linux nospam-20210310201637-6496 4.19.121-linuxkit #1 SMP Tue Dec 1 17:50:32 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [370102cd6706] <==
	* I0310 20:43:05.918672       1 trace.go:205] Trace[1725966830]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-apiserver-nospam-20210310201637-6496,user-agent:kubelet/v1.20.2 (linux/amd64) kubernetes/faecb19,client:172.17.0.3 (10-Mar-2021 20:43:05.009) (total time: 909ms):
	* Trace[1725966830]: ---"About to write a response" 889ms (20:43:00.898)
	* Trace[1725966830]: [909.294ms] [909.294ms] END
	* I0310 20:43:06.083277       1 trace.go:205] Trace[354708370]: "GuaranteedUpdate etcd3" type:*core.Event (10-Mar-2021 20:43:05.437) (total time: 646ms):
	* Trace[354708370]: ---"initial value restored" 395ms (20:43:00.832)
	* Trace[354708370]: ---"Transaction committed" 207ms (20:43:00.083)
	* Trace[354708370]: [646.0835ms] [646.0835ms] END
	* I0310 20:43:06.083541       1 trace.go:205] Trace[69708133]: "Patch" url:/api/v1/namespaces/kube-system/events/kube-controller-manager-nospam-20210310201637-6496.166b15d83f67f210,user-agent:kubelet/v1.20.2 (linux/amd64) kubernetes/faecb19,client:172.17.0.3 (10-Mar-2021 20:43:05.437) (total time: 646ms):
	* Trace[69708133]: ---"About to apply patch" 395ms (20:43:00.832)
	* Trace[69708133]: ---"Object stored in database" 208ms (20:43:00.083)
	* Trace[69708133]: [646.4931ms] [646.4931ms] END
	* I0310 20:43:23.477874       1 client.go:360] parsed scheme: "passthrough"
	* I0310 20:43:23.477955       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 20:43:23.477969       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* I0310 20:43:28.749817       1 trace.go:205] Trace[1327726052]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/nospam-20210310201637-6496,user-agent:kubelet/v1.20.2 (linux/amd64) kubernetes/faecb19,client:172.17.0.3 (10-Mar-2021 20:43:27.784) (total time: 964ms):
	* Trace[1327726052]: ---"About to convert to expected version" 554ms (20:43:00.339)
	* Trace[1327726052]: ---"Object stored in database" 409ms (20:43:00.749)
	* Trace[1327726052]: [964.8087ms] [964.8087ms] END
	* I0310 20:43:31.885566       1 trace.go:205] Trace[1064501503]: "List" url:/api/v1/pods,user-agent:kubectl/v1.20.2 (linux/amd64) kubernetes/faecb19,client:127.0.0.1 (10-Mar-2021 20:43:31.304) (total time: 580ms):
	* Trace[1064501503]: ---"Listing from storage done" 207ms (20:43:00.512)
	* Trace[1064501503]: ---"Writing http response done" count:6 373ms (20:43:00.885)
	* Trace[1064501503]: [580.4117ms] [580.4117ms] END
	* I0310 20:43:53.123919       1 trace.go:205] Trace[1898546688]: "List" url:/apis/batch/v1/jobs,user-agent:kube-controller-manager/v1.20.2 (linux/amd64) kubernetes/faecb19/system:serviceaccount:kube-system:cronjob-controller,client:172.17.0.3 (10-Mar-2021 20:43:52.606) (total time: 517ms):
	* Trace[1898546688]: ---"Listing from storage done" 485ms (20:43:00.110)
	* Trace[1898546688]: [517.6948ms] [517.6948ms] END
	* 
	* ==> kube-controller-manager [1b46b1a66c7f] <==
	* I0310 20:30:50.219252       1 shared_informer.go:247] Caches are synced for stateful set 
	* I0310 20:30:50.219376       1 shared_informer.go:247] Caches are synced for endpoint 
	* I0310 20:30:51.084941       1 shared_informer.go:247] Caches are synced for resource quota 
	* I0310 20:30:51.394672       1 shared_informer.go:247] Caches are synced for resource quota 
	* I0310 20:30:52.208588       1 range_allocator.go:373] Set node nospam-20210310201637-6496 PodCIDR to [10.244.0.0/24]
	* I0310 20:30:52.255177       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	* I0310 20:30:53.281926       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	* I0310 20:30:54.582089       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I0310 20:30:54.596254       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I0310 20:30:54.596278       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* I0310 20:30:56.914468       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 1"
	* I0310 20:31:02.388505       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-g5b25"
	* I0310 20:31:03.730756       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-mrl7v"
	* I0310 20:31:14.776899       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	* I0310 20:32:05.173749       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	* I0310 20:42:17.961913       1 event.go:291] "Event occurred" object="nospam-20210310201637-6496" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node nospam-20210310201637-6496 status is now: NodeNotReady"
	* I0310 20:42:18.947421       1 event.go:291] "Event occurred" object="kube-system/kube-scheduler-nospam-20210310201637-6496" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	* I0310 20:42:20.005388       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-nospam-20210310201637-6496" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	* I0310 20:42:20.534432       1 event.go:291] "Event occurred" object="kube-system/etcd-nospam-20210310201637-6496" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	* I0310 20:42:21.267808       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager-nospam-20210310201637-6496" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	* I0310 20:42:22.816812       1 event.go:291] "Event occurred" object="kube-system/kube-proxy-g5b25" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	* I0310 20:42:24.034375       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	* I0310 20:42:24.034665       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b-mrl7v" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	* I0310 20:43:05.726914       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b-mrl7v" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-74ff55c5b-mrl7v"
	* I0310 20:43:05.816641       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	* 
	* ==> kube-controller-manager [2f93a5f201c0] <==
	* 	/usr/local/go/src/bytes/buffer.go:204 +0xb1
	* crypto/tls.(*Conn).readFromUntil(0xc000c29180, 0x4da5040, 0xc000594ca0, 0x5, 0xc000594ca0, 0x99)
	* 	/usr/local/go/src/crypto/tls/conn.go:801 +0xf3
	* crypto/tls.(*Conn).readRecordOrCCS(0xc000c29180, 0x0, 0x0, 0xc000d07d18)
	* 	/usr/local/go/src/crypto/tls/conn.go:608 +0x115
	* crypto/tls.(*Conn).readRecord(...)
	* 	/usr/local/go/src/crypto/tls/conn.go:576
	* crypto/tls.(*Conn).Read(0xc000c29180, 0xc000ce4000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	* 	/usr/local/go/src/crypto/tls/conn.go:1252 +0x15f
	* bufio.(*Reader).Read(0xc000683680, 0xc0006fc9d8, 0x9, 0x9, 0xc000d07d18, 0x4905800, 0x9b77ab)
	* 	/usr/local/go/src/bufio/bufio.go:227 +0x222
	* io.ReadAtLeast(0x4d9eba0, 0xc000683680, 0xc0006fc9d8, 0x9, 0x9, 0x9, 0xc000116040, 0x0, 0x4d9efe0)
	* 	/usr/local/go/src/io/io.go:314 +0x87
	* io.ReadFull(...)
	* 	/usr/local/go/src/io/io.go:333
	* k8s.io/kubernetes/vendor/golang.org/x/net/http2.readFrameHeader(0xc0006fc9d8, 0x9, 0x9, 0x4d9eba0, 0xc000683680, 0x0, 0x0, 0xc000d07dd0, 0x46d045)
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:237 +0x89
	* k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc0006fc9a0, 0xc000cc91a0, 0x0, 0x0, 0x0)
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	* k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*clientConnReadLoop).run(0xc000d07fa8, 0x0, 0x0)
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:1819 +0xd8
	* k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).readLoop(0xc0007d0900)
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:1741 +0x6f
	* created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).newClientConn
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:705 +0x6c5
	* 
	* ==> kube-proxy [1b8135ee4dd8] <==
	* I0310 20:33:38.276552       1 node.go:172] Successfully retrieved node IP: 172.17.0.3
	* I0310 20:33:38.282160       1 server_others.go:142] kube-proxy node IP is an IPv4 address (172.17.0.3), assume IPv4 operation
	* W0310 20:33:39.460572       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	* I0310 20:33:39.460975       1 server_others.go:185] Using iptables Proxier.
	* I0310 20:33:39.461591       1 server.go:650] Version: v1.20.2
	* I0310 20:33:39.479872       1 conntrack.go:52] Setting nf_conntrack_max to 131072
	* I0310 20:33:39.496279       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I0310 20:33:39.525120       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I0310 20:33:39.526043       1 config.go:315] Starting service config controller
	* I0310 20:33:39.526086       1 shared_informer.go:240] Waiting for caches to sync for service config
	* I0310 20:33:39.526460       1 config.go:224] Starting endpoint slice config controller
	* I0310 20:33:39.526477       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	* I0310 20:33:39.926005       1 shared_informer.go:247] Caches are synced for service config 
	* I0310 20:33:39.926451       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	* I0310 20:34:15.175937       1 trace.go:205] Trace[1199853592]: "iptables Monitor CANARY check" (10-Mar-2021 20:34:12.019) (total time: 3153ms):
	* Trace[1199853592]: [3.1530992s] [3.1530992s] END
	* I0310 20:34:47.006148       1 trace.go:205] Trace[1632583085]: "iptables Monitor CANARY check" (10-Mar-2021 20:34:42.053) (total time: 4952ms):
	* Trace[1632583085]: [4.9520287s] [4.9520287s] END
	* I0310 20:37:14.115825       1 trace.go:205] Trace[616316265]: "iptables Monitor CANARY check" (10-Mar-2021 20:37:12.076) (total time: 2027ms):
	* Trace[616316265]: [2.0278954s] [2.0278954s] END
	* I0310 20:40:44.081057       1 trace.go:205] Trace[913762048]: "iptables Monitor CANARY check" (10-Mar-2021 20:40:42.028) (total time: 2016ms):
	* Trace[913762048]: [2.0166699s] [2.0166699s] END
	* I0310 20:41:48.786358       1 trace.go:205] Trace[637538074]: "iptables Monitor CANARY check" (10-Mar-2021 20:41:45.504) (total time: 3280ms):
	* Trace[637538074]: [3.2804023s] [3.2804023s] END
	* 
	* ==> kube-scheduler [58f19e155757] <==
	* E0310 20:28:38.041296       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E0310 20:28:38.168187       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E0310 20:28:38.334051       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E0310 20:28:38.655224       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E0310 20:28:38.667535       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E0310 20:28:38.668556       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E0310 20:28:38.987248       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E0310 20:28:39.172953       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E0310 20:28:39.248293       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E0310 20:28:39.357792       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E0310 20:28:39.750929       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E0310 20:28:41.507235       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E0310 20:28:41.713782       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E0310 20:28:43.090938       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E0310 20:28:43.170779       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E0310 20:28:43.171204       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E0310 20:28:43.556744       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E0310 20:28:43.686491       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E0310 20:28:43.744045       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E0310 20:28:43.848581       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E0310 20:28:44.483371       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E0310 20:28:44.498600       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E0310 20:28:45.647075       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E0310 20:28:53.656921       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* I0310 20:29:08.551748       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-03-10 20:17:03 UTC, end at Wed 2021-03-10 20:44:20 UTC. --
	* Mar 10 20:41:34 nospam-20210310201637-6496 kubelet[3232]: W0310 20:41:34.014556    3232 status_manager.go:550] Failed to get status for pod "coredns-74ff55c5b-mrl7v_kube-system(a98207ee-4e4d-4810-a0a8-fed682d2758b)": unexpected error when reading response body. Please retry. Original error: read tcp 172.17.0.3:35818->172.17.0.3:8443: use of closed network connection
	* Mar 10 20:41:51 nospam-20210310201637-6496 kubelet[3232]: E0310 20:41:51.553846    3232 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"coredns-74ff55c5b-mrl7v.166b1581c9e59840", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"596", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"coredns-74ff55c5b-mrl7v", UID:"a98207ee-4e4d-4810-a0a8-fed682d2758b", APIVersion:"v1", ResourceVersion:"412", FieldPath:"spec.containers{coredns}"}, Reason:"Unhealthy", Message:"Readiness probe failed: Get \"http://172.18.0.2:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)", Source:v1.EventSource{Component:"kubelet", Host:"nospam-20210310201637-6496"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751005338, loc:(*time.Location)(0x70d1080)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc00a6999dba12c6c, ext:716073903801, loc:(*time.Location)(0x70d1080)}}, Count:7, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/coredns-74ff55c5b-mrl7v.166b1581c9e59840": net/http: TLS handshake timeout'(may retry after sleeping)
	* Mar 10 20:41:56 nospam-20210310201637-6496 kubelet[3232]: I0310 20:41:51.703793    3232 trace.go:205] Trace[1918347473]: "iptables Monitor CANARY check" (10-Mar-2021 20:41:38.528) (total time: 13173ms):
	* Mar 10 20:41:56 nospam-20210310201637-6496 kubelet[3232]: Trace[1918347473]: [13.1730958s] [13.1730958s] END
	* Mar 10 20:41:56 nospam-20210310201637-6496 kubelet[3232]: E0310 20:41:54.387410    3232 controller.go:187] failed to update lease, error: Put "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/nospam-20210310201637-6496?timeout=10s": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	* Mar 10 20:42:04 nospam-20210310201637-6496 kubelet[3232]: W0310 20:42:04.609609    3232 status_manager.go:550] Failed to get status for pod "coredns-74ff55c5b-mrl7v_kube-system(a98207ee-4e4d-4810-a0a8-fed682d2758b)": Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-mrl7v": net/http: TLS handshake timeout
	* Mar 10 20:42:12 nospam-20210310201637-6496 kubelet[3232]: E0310 20:42:10.603419    3232 controller.go:187] failed to update lease, error: Put "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/nospam-20210310201637-6496?timeout=10s": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	* Mar 10 20:42:12 nospam-20210310201637-6496 kubelet[3232]: W0310 20:42:10.668155    3232 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	* Mar 10 20:42:12 nospam-20210310201637-6496 kubelet[3232]: W0310 20:42:12.492335    3232 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
	* Mar 10 20:42:15 nospam-20210310201637-6496 kubelet[3232]: E0310 20:42:13.805509    3232 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"coredns-74ff55c5b-mrl7v.166b1581c9e59840", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"596", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"coredns-74ff55c5b-mrl7v", UID:"a98207ee-4e4d-4810-a0a8-fed682d2758b", APIVersion:"v1", ResourceVersion:"412", FieldPath:"spec.containers{coredns}"}, Reason:"Unhealthy", Message:"Readiness probe failed: Get \"http://172.18.0.2:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)", Source:v1.EventSource{Component:"kubelet", Host:"nospam-20210310201637-6496"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751005338, loc:(*time.Location)(0x70d1080)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc00a6999dba12c6c, ext:716073903801, loc:(*time.Location)(0x70d1080)}}, Count:7, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/coredns-74ff55c5b-mrl7v.166b1581c9e59840": http2: client connection force closed via ClientConn.Close'(may retry after sleeping)
	* Mar 10 20:42:17 nospam-20210310201637-6496 kubelet[3232]: E0310 20:42:14.701765    3232 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dnospam-20210310201637-6496&resourceVersion=580&timeoutSeconds=363&watch=true": http2: client connection force closed via ClientConn.Close
	* Mar 10 20:42:19 nospam-20210310201637-6496 kubelet[3232]: I0310 20:42:18.700700    3232 setters.go:577] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2021-03-10 20:42:18.6736616 +0000 UTC m=+783.284335801 LastTransitionTime:2021-03-10 20:42:18.6736616 +0000 UTC m=+783.284335801 Reason:KubeletNotReady Message:container runtime is down}
	* Mar 10 20:42:24 nospam-20210310201637-6496 kubelet[3232]: E0310 20:42:24.663914    3232 controller.go:187] failed to update lease, error: Put "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/nospam-20210310201637-6496?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
	* Mar 10 20:42:29 nospam-20210310201637-6496 kubelet[3232]: I0310 20:42:29.814625    3232 controller.go:114] failed to update lease using latest lease, fallback to ensure lease, err: failed 5 attempts to update lease
	* Mar 10 20:42:30 nospam-20210310201637-6496 kubelet[3232]: E0310 20:42:29.880805    3232 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"coredns-74ff55c5b-mrl7v.166b1581c9e59840", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"596", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"coredns-74ff55c5b-mrl7v", UID:"a98207ee-4e4d-4810-a0a8-fed682d2758b", APIVersion:"v1", ResourceVersion:"412", FieldPath:"spec.containers{coredns}"}, Reason:"Unhealthy", Message:"Readiness probe failed: Get \"http://172.18.0.2:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)", Source:v1.EventSource{Component:"kubelet", Host:"nospam-20210310201637-6496"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751005338, loc:(*time.Location)(0x70d1080)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc00a6999dba12c6c, ext:716073903801, loc:(*time.Location)(0x70d1080)}}, Count:7, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Patch "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/coredns-74ff55c5b-mrl7v.166b1581c9e59840": read tcp 172.17.0.3:34190->172.17.0.3:8443: use of closed network connection'(may retry after sleeping)
	* Mar 10 20:42:32 nospam-20210310201637-6496 kubelet[3232]: E0310 20:42:30.076803    3232 kubelet_node_status.go:447] Error updating node status, will retry: failed to patch status "{\"status\":{\"$setElementOrder/conditions\":[{\"type\":\"MemoryPressure\"},{\"type\":\"DiskPressure\"},{\"type\":\"PIDPressure\"},{\"type\":\"Ready\"}],\"conditions\":[{\"lastHeartbeatTime\":\"2021-03-10T20:42:18Z\",\"type\":\"MemoryPressure\"},{\"lastHeartbeatTime\":\"2021-03-10T20:42:18Z\",\"type\":\"DiskPressure\"},{\"lastHeartbeatTime\":\"2021-03-10T20:42:18Z\",\"type\":\"PIDPressure\"},{\"lastHeartbeatTime\":\"2021-03-10T20:42:18Z\",\"lastTransitionTime\":\"2021-03-10T20:42:18Z\",\"message\":\"container runtime is down\",\"reason\":\"KubeletNotReady\",\"status\":\"False\",\"type\":\"Ready\"}]}}" for node "nospam-20210310201637-6496": Patch "https://control-plane.minikube.internal:8443/api/v1/nodes/nospam-20210310201637-6496/status?timeout=10s": write tcp 172.17.0.3:34152->172.17.0.3:8443: use of closed network connection
	* Mar 10 20:42:32 nospam-20210310201637-6496 kubelet[3232]: W0310 20:42:30.078260    3232 reflector.go:436] object-"kube-system"/"kube-proxy-token-bt8nz": watch of *v1.Secret ended with: very short watch: object-"kube-system"/"kube-proxy-token-bt8nz": Unexpected watch close - watch lasted less than a second and no items received
	* Mar 10 20:42:32 nospam-20210310201637-6496 kubelet[3232]: W0310 20:42:30.185880    3232 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
	* Mar 10 20:42:32 nospam-20210310201637-6496 kubelet[3232]: W0310 20:42:30.191831    3232 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
	* Mar 10 20:42:32 nospam-20210310201637-6496 kubelet[3232]: W0310 20:42:30.192163    3232 reflector.go:436] object-"kube-system"/"coredns-token-pfcpm": watch of *v1.Secret ended with: very short watch: object-"kube-system"/"coredns-token-pfcpm": Unexpected watch close - watch lasted less than a second and no items received
	* Mar 10 20:42:32 nospam-20210310201637-6496 kubelet[3232]: W0310 20:42:30.192358    3232 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
	* Mar 10 20:42:32 nospam-20210310201637-6496 kubelet[3232]: W0310 20:42:30.192519    3232 reflector.go:436] object-"kube-system"/"coredns": watch of *v1.ConfigMap ended with: very short watch: object-"kube-system"/"coredns": Unexpected watch close - watch lasted less than a second and no items received
	* Mar 10 20:42:39 nospam-20210310201637-6496 kubelet[3232]: I0310 20:42:39.940474    3232 trace.go:205] Trace[756382007]: "Reflector ListAndWatch" name:k8s.io/kubernetes/pkg/kubelet/kubelet.go:438 (10-Mar-2021 20:42:20.635) (total time: 19303ms):
	* Mar 10 20:42:39 nospam-20210310201637-6496 kubelet[3232]: Trace[756382007]: ---"Objects listed" 19303ms (20:42:00.940)
	* Mar 10 20:42:39 nospam-20210310201637-6496 kubelet[3232]: Trace[756382007]: [19.3035079s] [19.3035079s] END
	* 
	* ==> Audit <==
	* |---------|------------------------------------------|------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| Command |                   Args                   |                 Profile                  |          User           | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------|------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| -p      | multinode-20210310194323-6496            | multinode-20210310194323-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:52:44 GMT | Wed, 10 Mar 2021 19:53:02 GMT |
	|         | stop                                     |                                          |                         |         |                               |                               |
	| -p      | multinode-20210310194323-6496            | multinode-20210310194323-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:59:04 GMT | Wed, 10 Mar 2021 19:59:20 GMT |
	|         | logs -n 25                               |                                          |                         |         |                               |                               |
	| start   | -p                                       | multinode-20210310194323-6496-m03        | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:59:27 GMT | Wed, 10 Mar 2021 20:02:27 GMT |
	|         | multinode-20210310194323-6496-m03        |                                          |                         |         |                               |                               |
	|         | --driver=docker                          |                                          |                         |         |                               |                               |
	| delete  | -p                                       | multinode-20210310194323-6496-m03        | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:02:30 GMT | Wed, 10 Mar 2021 20:02:41 GMT |
	|         | multinode-20210310194323-6496-m03        |                                          |                         |         |                               |                               |
	| -p      | multinode-20210310194323-6496            | multinode-20210310194323-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:02:45 GMT | Wed, 10 Mar 2021 20:02:59 GMT |
	|         | logs -n 25                               |                                          |                         |         |                               |                               |
	| delete  | -p                                       | multinode-20210310194323-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:03:05 GMT | Wed, 10 Mar 2021 20:03:22 GMT |
	|         | multinode-20210310194323-6496            |                                          |                         |         |                               |                               |
	| start   | -p                                       | test-preload-20210310200323-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:03:23 GMT | Wed, 10 Mar 2021 20:06:49 GMT |
	|         | test-preload-20210310200323-6496         |                                          |                         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr          |                                          |                         |         |                               |                               |
	|         | --wait=true --preload=false              |                                          |                         |         |                               |                               |
	|         | --driver=docker                          |                                          |                         |         |                               |                               |
	|         | --kubernetes-version=v1.17.0             |                                          |                         |         |                               |                               |
	| ssh     | -p                                       | test-preload-20210310200323-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:06:50 GMT | Wed, 10 Mar 2021 20:06:54 GMT |
	|         | test-preload-20210310200323-6496         |                                          |                         |         |                               |                               |
	|         | -- docker pull busybox                   |                                          |                         |         |                               |                               |
	| start   | -p                                       | test-preload-20210310200323-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:06:54 GMT | Wed, 10 Mar 2021 20:08:51 GMT |
	|         | test-preload-20210310200323-6496         |                                          |                         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr          |                                          |                         |         |                               |                               |
	|         | -v=1 --wait=true --driver=docker         |                                          |                         |         |                               |                               |
	|         | --kubernetes-version=v1.17.3             |                                          |                         |         |                               |                               |
	| ssh     | -p                                       | test-preload-20210310200323-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:08:51 GMT | Wed, 10 Mar 2021 20:08:54 GMT |
	|         | test-preload-20210310200323-6496         |                                          |                         |         |                               |                               |
	|         | -- docker images                         |                                          |                         |         |                               |                               |
	| delete  | -p                                       | test-preload-20210310200323-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:08:54 GMT | Wed, 10 Mar 2021 20:09:05 GMT |
	|         | test-preload-20210310200323-6496         |                                          |                         |         |                               |                               |
	| start   | -p                                       | scheduled-stop-20210310200905-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:09:06 GMT | Wed, 10 Mar 2021 20:11:51 GMT |
	|         | scheduled-stop-20210310200905-6496       |                                          |                         |         |                               |                               |
	|         | --memory=1900 --driver=docker            |                                          |                         |         |                               |                               |
	| stop    | -p                                       | scheduled-stop-20210310200905-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:11:52 GMT | Wed, 10 Mar 2021 20:11:54 GMT |
	|         | scheduled-stop-20210310200905-6496       |                                          |                         |         |                               |                               |
	|         | --schedule 5m                            |                                          |                         |         |                               |                               |
	| ssh     | -p                                       | scheduled-stop-20210310200905-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:11:57 GMT | Wed, 10 Mar 2021 20:11:59 GMT |
	|         | scheduled-stop-20210310200905-6496       |                                          |                         |         |                               |                               |
	|         | -- sudo systemctl show                   |                                          |                         |         |                               |                               |
	|         | minikube-scheduled-stop --no-page        |                                          |                         |         |                               |                               |
	| stop    | -p                                       | scheduled-stop-20210310200905-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:12:00 GMT | Wed, 10 Mar 2021 20:12:02 GMT |
	|         | scheduled-stop-20210310200905-6496       |                                          |                         |         |                               |                               |
	|         | --schedule 5s                            |                                          |                         |         |                               |                               |
	| delete  | -p                                       | scheduled-stop-20210310200905-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:12:26 GMT | Wed, 10 Mar 2021 20:12:35 GMT |
	|         | scheduled-stop-20210310200905-6496       |                                          |                         |         |                               |                               |
	| start   | -p                                       | skaffold-20210310201235-6496             | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:12:37 GMT | Wed, 10 Mar 2021 20:15:24 GMT |
	|         | skaffold-20210310201235-6496             |                                          |                         |         |                               |                               |
	|         | --memory=2600 --driver=docker            |                                          |                         |         |                               |                               |
	| -p      | skaffold-20210310201235-6496             | skaffold-20210310201235-6496             | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:15:28 GMT | Wed, 10 Mar 2021 20:15:41 GMT |
	|         | logs -n 25                               |                                          |                         |         |                               |                               |
	| delete  | -p                                       | skaffold-20210310201235-6496             | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:15:46 GMT | Wed, 10 Mar 2021 20:15:57 GMT |
	|         | skaffold-20210310201235-6496             |                                          |                         |         |                               |                               |
	| delete  | -p                                       | insufficient-storage-20210310201557-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:16:29 GMT | Wed, 10 Mar 2021 20:16:37 GMT |
	|         | insufficient-storage-20210310201557-6496 |                                          |                         |         |                               |                               |
	| delete  | -p pause-20210310201637-6496             | pause-20210310201637-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:32:24 GMT | Wed, 10 Mar 2021 20:32:49 GMT |
	| -p      | offline-docker-20210310201637-6496       | offline-docker-20210310201637-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:32:04 GMT | Wed, 10 Mar 2021 20:33:57 GMT |
	|         | logs -n 25                               |                                          |                         |         |                               |                               |
	| delete  | -p                                       | offline-docker-20210310201637-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:34:20 GMT | Wed, 10 Mar 2021 20:34:47 GMT |
	|         | offline-docker-20210310201637-6496       |                                          |                         |         |                               |                               |
	| stop    | -p                                       | kubernetes-upgrade-20210310201637-6496   | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:39:52 GMT | Wed, 10 Mar 2021 20:40:10 GMT |
	|         | kubernetes-upgrade-20210310201637-6496   |                                          |                         |         |                               |                               |
	| start   | -p nospam-20210310201637-6496            | nospam-20210310201637-6496               | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:16:38 GMT | Wed, 10 Mar 2021 20:40:39 GMT |
	|         | -n=1 --memory=2250                       |                                          |                         |         |                               |                               |
	|         | --wait=false --driver=docker             |                                          |                         |         |                               |                               |
	|---------|------------------------------------------|------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/03/10 20:40:11
	* Running on machine: windows-server-1
	* Binary: Built with gc go1.16 for windows/amd64
	* Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	* I0310 20:40:11.626105    9740 out.go:239] Setting OutFile to fd 1936 ...
	* I0310 20:40:11.628082    9740 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 20:40:11.628082    9740 out.go:252] Setting ErrFile to fd 1912...
	* I0310 20:40:11.628082    9740 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 20:40:11.646812    9740 out.go:246] Setting JSON to false
	* I0310 20:40:11.655809    9740 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":34277,"bootTime":1615374534,"procs":121,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	* W0310 20:40:11.655809    9740 start.go:116] gopshost.Virtualization returned error: not implemented yet
	* I0310 20:40:11.660981    9740 out.go:129] * [kubernetes-upgrade-20210310201637-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	* I0310 20:40:11.836483    6776 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464: (22.7258798s)
	* I0310 20:40:11.836483    6776 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 from cache
	* I0310 20:40:11.836483    6776 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492
	* I0310 20:40:11.848418    6776 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492
	* I0310 20:40:11.673974    9740 out.go:129]   - MINIKUBE_LOCATION=10722
	* I0310 20:40:11.677848    9740 driver.go:323] Setting default libvirt URI to qemu:///system
	* I0310 20:40:12.222399    9740 docker.go:119] docker version: linux-20.10.2
	* I0310 20:40:12.250130    9740 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 20:40:13.269960    9740 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0193702s)
	* I0310 20:40:13.272651    9740 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:9 ContainersRunning:8 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:121 OomKillDisable:true NGoroutines:94 SystemTime:2021-03-10 20:40:12.8008788 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 20:40:13.279563    9740 out.go:129] * Using the docker driver based on existing profile
	* I0310 20:40:13.279563    9740 start.go:276] selected driver: docker
	* I0310 20:40:13.279563    9740 start.go:718] validating driver "docker" against &{Name:kubernetes-upgrade-20210310201637-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:kubernetes-upgrade-20210310201637-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.6 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 20:40:13.280195    9740 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	* I0310 20:40:15.341033    9740 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 20:40:16.550107    9740 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.209076s)
	* I0310 20:40:16.551105    9740 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:9 ContainersRunning:8 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:121 OomKillDisable:true NGoroutines:98 SystemTime:2021-03-10 20:40:15.8935738 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 20:40:16.552517    9740 start_flags.go:398] config:
	* {Name:kubernetes-upgrade-20210310201637-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.5-rc.0 ClusterName:kubernetes-upgrade-20210310201637-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.6 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 20:40:16.562672    9740 out.go:129] * Starting control plane node kubernetes-upgrade-20210310201637-6496 in cluster kubernetes-upgrade-20210310201637-6496
	* I0310 20:40:17.781711    9740 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	* I0310 20:40:17.781947    9740 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	* I0310 20:40:17.781947    9740 preload.go:97] Checking if preload exists for k8s version v1.20.5-rc.0 and runtime docker
	* I0310 20:40:17.782420    9740 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.5-rc.0-docker-overlay2-amd64.tar.lz4
	* I0310 20:40:17.782420    9740 cache.go:54] Caching tarball of preloaded images
	* I0310 20:40:17.782698    9740 preload.go:131] Found C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.5-rc.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	* I0310 20:40:17.782843    9740 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.5-rc.0 on docker
	* I0310 20:40:17.783357    9740 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\config.json ...
	* I0310 20:40:17.817163    9740 cache.go:185] Successfully downloaded all kic artifacts
	* I0310 20:40:17.817793    9740 start.go:313] acquiring machines lock for kubernetes-upgrade-20210310201637-6496: {Name:mkf139d86564eb552ba6ebdc1acdb4bdc8579ad8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:40:17.818641    9740 start.go:317] acquired machines lock for "kubernetes-upgrade-20210310201637-6496" in 847.7µs
	* I0310 20:40:17.818854    9740 start.go:93] Skipping create...Using existing machine configuration
	* I0310 20:40:17.818854    9740 fix.go:55] fixHost starting: 
	* I0310 20:40:17.841819    9740 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210310201637-6496 --format=
	* I0310 20:40:18.441624    9740 fix.go:108] recreateIfNeeded on kubernetes-upgrade-20210310201637-6496: state=Stopped err=<nil>
	* W0310 20:40:18.442825    9740 fix.go:134] unexpected machine state, will restart: <nil>
	* I0310 20:40:18.452559    9740 out.go:129] * Restarting existing docker container for "kubernetes-upgrade-20210310201637-6496" ...
	* I0310 20:40:18.461509    9740 cli_runner.go:115] Run: docker start kubernetes-upgrade-20210310201637-6496
	* I0310 20:40:22.058935    9740 cli_runner.go:168] Completed: docker start kubernetes-upgrade-20210310201637-6496: (3.5974319s)
	* I0310 20:40:22.067089    9740 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210310201637-6496 --format=
	* I0310 20:40:22.636962    9740 kic.go:410] container "kubernetes-upgrade-20210310201637-6496" state is running.
	* I0310 20:40:22.654967    9740 cli_runner.go:115] Run: docker container inspect -f "" kubernetes-upgrade-20210310201637-6496
	* I0310 20:40:23.276828    9740 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\config.json ...
	* I0310 20:40:23.283766    9740 machine.go:88] provisioning docker machine ...
	* I0310 20:40:23.283947    9740 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20210310201637-6496"
	* I0310 20:40:23.291722    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	* I0310 20:40:23.888238    9740 main.go:121] libmachine: Using SSH client type: native
	* I0310 20:40:23.889382    9740 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55133 <nil> <nil>}
	* I0310 20:40:23.889520    9740 main.go:121] libmachine: About to run SSH command:
	* sudo hostname kubernetes-upgrade-20210310201637-6496 && echo "kubernetes-upgrade-20210310201637-6496" | sudo tee /etc/hostname
	* I0310 20:40:23.910986    9740 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	* I0310 20:40:24.083407    7808 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396: (17.2588891s)
	* I0310 20:40:24.083407    7808 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 from cache
	* I0310 20:40:24.083407    7808 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552
	* I0310 20:40:24.091424    7808 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552
	* I0310 20:40:26.922831    9740 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	* I0310 20:40:31.260862    9740 main.go:121] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20210310201637-6496
	* 
	* I0310 20:40:30.039752    7164 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700: (34.1338864s)
	* I0310 20:40:30.040190    7164 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 from cache
	* I0310 20:40:30.040190    7164 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452
	* I0310 20:40:30.053070    7164 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452
	* I0310 20:40:31.285524    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	* I0310 20:40:31.949152    9740 main.go:121] libmachine: Using SSH client type: native
	* I0310 20:40:31.950149    9740 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55133 <nil> <nil>}
	* I0310 20:40:31.950149    9740 main.go:121] libmachine: About to run SSH command:
	* 
	* 		if ! grep -xq '.*\skubernetes-upgrade-20210310201637-6496' /etc/hosts; then
	* 			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	* 				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20210310201637-6496/g' /etc/hosts;
	* 			else 
	* 				echo '127.0.1.1 kubernetes-upgrade-20210310201637-6496' | sudo tee -a /etc/hosts; 
	* 			fi
	* 		fi
	* I0310 20:40:33.182035    9740 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	* I0310 20:40:33.182228    9740 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	* I0310 20:40:33.182514    9740 ubuntu.go:177] setting up certificates
	* I0310 20:40:33.182743    9740 provision.go:83] configureAuth start
	* I0310 20:40:33.192626    9740 cli_runner.go:115] Run: docker container inspect -f "" kubernetes-upgrade-20210310201637-6496
	* I0310 20:40:33.805867    9740 provision.go:137] copyHostCerts
	* I0310 20:40:33.806994    9740 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	* I0310 20:40:33.806994    9740 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	* I0310 20:40:33.807815    9740 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	* I0310 20:40:33.817781    9740 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	* I0310 20:40:33.817781    9740 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	* I0310 20:40:33.818437    9740 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	* I0310 20:40:33.821902    9740 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	* I0310 20:40:33.821902    9740 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	* I0310 20:40:33.822805    9740 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	* I0310 20:40:33.827713    9740 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.kubernetes-upgrade-20210310201637-6496 san=[172.17.0.6 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-20210310201637-6496]
	* I0310 20:40:34.224036    9740 provision.go:165] copyRemoteCerts
	* I0310 20:40:34.238770    9740 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	* I0310 20:40:34.246644    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	* I0310 20:40:34.940083    9740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55133 SSHKeyPath:C:\Users\jenkins\.minikube\machines\kubernetes-upgrade-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:40:35.660682    9740 ssh_runner.go:189] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.4219137s)
	* I0310 20:40:35.661412    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	* I0310 20:40:39.042981    7808 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552: (14.9515789s)
	* I0310 20:40:39.043385    7808 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 from cache
	* I0310 20:40:39.043385    7808 cache_images.go:80] LoadImages completed in 11m5.8310622s
	* W0310 20:40:39.043385    7808 cache_images.go:215] Failed to load cached images for profile nospam-20210310201637-6496. make sure the profile is running. loading cached images: transferring cached image: NewSession: new client: new client: ssh: handshake failed: EOF
	* I0310 20:40:39.043855    7808 cache_images.go:223] succeeded pushing to: 
	* I0310 20:40:39.043855    7808 cache_images.go:224] failed pushing to: nospam-20210310201637-6496
	* I0310 20:40:39.263748    7808 start.go:460] kubectl: 1.19.3, cluster: 1.20.2 (minor skew: 1)
	* I0310 20:40:39.268419    7808 out.go:129] * Done! kubectl is now configured to use "nospam-20210310201637-6496" cluster and "default" namespace by default
	* I0310 20:40:36.325306    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	* I0310 20:40:36.822365    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1285 bytes)
	* I0310 20:40:37.608641    9740 provision.go:86] duration metric: configureAuth took 4.4259039s
	* I0310 20:40:37.608889    9740 ubuntu.go:193] setting minikube options for container-runtime
	* I0310 20:40:37.619965    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	* I0310 20:40:38.264560    9740 main.go:121] libmachine: Using SSH client type: native
	* I0310 20:40:38.265920    9740 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55133 <nil> <nil>}
	* I0310 20:40:38.266230    9740 main.go:121] libmachine: About to run SSH command:
	* df --output=fstype / | tail -n 1
	* I0310 20:40:39.613773    9740 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	* 
	* I0310 20:40:39.614278    9740 ubuntu.go:71] root file system type: overlay
	* I0310 20:40:39.615216    9740 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	* I0310 20:40:39.623732    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	* I0310 20:40:40.303496    9740 main.go:121] libmachine: Using SSH client type: native
	* I0310 20:40:40.304774    9740 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55133 <nil> <nil>}
	* I0310 20:40:40.304774    9740 main.go:121] libmachine: About to run SSH command:
	* sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP \$MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* " | sudo tee /lib/systemd/system/docker.service.new
	* I0310 20:40:41.932482    9740 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP $MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* 
	* I0310 20:40:41.945466    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	* I0310 20:40:42.589210    9740 main.go:121] libmachine: Using SSH client type: native
	* I0310 20:40:42.589871    9740 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55133 <nil> <nil>}
	* I0310 20:40:42.589871    9740 main.go:121] libmachine: About to run SSH command:
	* sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	* I0310 20:40:43.918998    9740 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	* I0310 20:40:43.918998    9740 machine.go:91] provisioned docker machine in 20.6350823s
	* I0310 20:40:43.918998    9740 start.go:267] post-start starting for "kubernetes-upgrade-20210310201637-6496" (driver="docker")
	* I0310 20:40:43.918998    9740 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	* I0310 20:40:43.938642    9740 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	* I0310 20:40:43.948150    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	* I0310 20:40:44.545112    9740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55133 SSHKeyPath:C:\Users\jenkins\.minikube\machines\kubernetes-upgrade-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:40:44.988836    9740 ssh_runner.go:189] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.0501955s)
	* I0310 20:40:45.001842    9740 ssh_runner.go:149] Run: cat /etc/os-release
	* I0310 20:40:45.053472    9740 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	* I0310 20:40:45.053472    9740 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	* I0310 20:40:45.053472    9740 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	* I0310 20:40:45.053472    9740 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	* I0310 20:40:45.053637    9740 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	* I0310 20:40:45.053898    9740 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	* I0310 20:40:45.058110    9740 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	* I0310 20:40:45.059992    9740 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	* I0310 20:40:45.075969    9740 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	* I0310 20:40:45.187189    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	* I0310 20:40:45.805941    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	* I0310 20:40:46.188940    9740 start.go:270] post-start completed in 2.2699458s
	* I0310 20:40:46.201877    9740 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	* I0310 20:40:46.209965    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	* I0310 20:40:46.838195    9740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55133 SSHKeyPath:C:\Users\jenkins\.minikube\machines\kubernetes-upgrade-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:40:47.387355    9740 ssh_runner.go:189] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.1854798s)
	* I0310 20:40:47.387355    9740 fix.go:57] fixHost completed within 29.5685454s
	* I0310 20:40:47.387355    9740 start.go:80] releasing machines lock for "kubernetes-upgrade-20210310201637-6496", held for 29.5687583s
	* I0310 20:40:47.399744    9740 cli_runner.go:115] Run: docker container inspect -f "" kubernetes-upgrade-20210310201637-6496
	* I0310 20:40:48.001319    9740 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	* I0310 20:40:48.009785    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	* I0310 20:40:48.027894    9740 ssh_runner.go:149] Run: systemctl --version
	* I0310 20:40:48.045435    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	* I0310 20:40:48.719086    9740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55133 SSHKeyPath:C:\Users\jenkins\.minikube\machines\kubernetes-upgrade-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:40:48.782836    9740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55133 SSHKeyPath:C:\Users\jenkins\.minikube\machines\kubernetes-upgrade-20210310201637-6496\id_rsa Username:docker}
	* I0310 20:40:49.811562    9740 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.8100329s)
	* I0310 20:40:49.811709    9740 ssh_runner.go:189] Completed: systemctl --version: (1.7835855s)
	* I0310 20:40:49.832973    9740 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	* I0310 20:40:50.108901    9740 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	* I0310 20:40:50.273420    9740 cruntime.go:206] skipping containerd shutdown because we are bound to it
	* I0310 20:40:50.285125    9740 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	* I0310 20:40:50.431176    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	* image-endpoint: unix:///var/run/dockershim.sock
	* " | sudo tee /etc/crictl.yaml"
	* I0310 20:40:50.715534    9740 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	* I0310 20:40:50.861429    9740 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	* I0310 20:40:52.707170    9740 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.8457443s)
	* I0310 20:40:52.718914    9740 ssh_runner.go:149] Run: sudo systemctl start docker
	* I0310 20:40:52.862081    9740 ssh_runner.go:149] Run: docker version --format 
	* I0310 20:40:53.882996    9740 ssh_runner.go:189] Completed: docker version --format : (1.0209171s)
	* I0310 20:40:53.887364    9740 out.go:150] * Preparing Kubernetes v1.20.5-rc.0 on Docker 20.10.3 ...
	* I0310 20:40:53.900335    9740 cli_runner.go:115] Run: docker exec -t kubernetes-upgrade-20210310201637-6496 dig +short host.docker.internal
	* I0310 20:40:55.114769    9740 cli_runner.go:168] Completed: docker exec -t kubernetes-upgrade-20210310201637-6496 dig +short host.docker.internal: (1.2138238s)
	* I0310 20:40:55.114991    9740 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	* I0310 20:40:55.128899    9740 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	* I0310 20:40:55.174235    9740 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	* I0310 20:40:55.355581    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	* I0310 20:40:55.951356    9740 preload.go:97] Checking if preload exists for k8s version v1.20.5-rc.0 and runtime docker
	* I0310 20:40:55.951695    9740 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.5-rc.0-docker-overlay2-amd64.tar.lz4
	* I0310 20:40:55.964511    9740 ssh_runner.go:149] Run: docker images --format :
	* I0310 20:40:56.841181    9740 docker.go:423] Got preloaded images: -- stdout --
	* kubernetesui/dashboard:v2.1.0
	* gcr.io/k8s-minikube/storage-provisioner:v4
	* kubernetesui/metrics-scraper:v1.0.4
	* k8s.gcr.io/kube-proxy:v1.14.0
	* k8s.gcr.io/kube-controller-manager:v1.14.0
	* k8s.gcr.io/kube-apiserver:v1.14.0
	* k8s.gcr.io/kube-scheduler:v1.14.0
	* k8s.gcr.io/coredns:1.3.1
	* k8s.gcr.io/etcd:3.3.10
	* k8s.gcr.io/pause:3.1
	* 
	* -- /stdout --
	* I0310 20:40:56.841181    9740 docker.go:429] k8s.gcr.io/kube-proxy:v1.20.5-rc.0 wasn't preloaded
	* I0310 20:40:56.860168    9740 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	* I0310 20:40:56.958942    9740 ssh_runner.go:149] Run: which lz4
	* I0310 20:40:57.027629    9740 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	* I0310 20:40:57.072452    9740 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	* I0310 20:40:57.073663    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.5-rc.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (515786445 bytes)
	* I0310 20:41:03.249697    6776 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492: (51.4013566s)
	* I0310 20:41:03.249697    6776 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 from cache
	* I0310 20:41:03.249697    6776 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856
	* I0310 20:41:03.262479    6776 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856
	* I0310 20:41:24.214911    7164 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452: (54.1610286s)
	* I0310 20:41:24.214911    7164 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 from cache
	* I0310 20:41:24.214911    7164 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692
	* I0310 20:41:24.230173    7164 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692
	* I0310 20:41:36.959014   10404 out.go:150]   - Generating certificates and keys ...
	* I0310 20:41:36.964677   10404 out.go:150]   - Booting up control plane ...
	* W0310 20:41:36.970811   10404 out.go:191] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	* stdout:
	* [init] Using Kubernetes version: v1.20.2
	* [preflight] Running pre-flight checks
	* [preflight] Pulling images required for setting up a Kubernetes cluster
	* [preflight] This might take a minute or two, depending on the speed of your internet connection
	* [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	* [certs] Using certificateDir folder "/var/lib/minikube/certs"
	* [certs] Using existing ca certificate authority
	* [certs] Using existing apiserver certificate and key on disk
	* [certs] Generating "apiserver-kubelet-client" certificate and key
	* [certs] Generating "front-proxy-ca" certificate and key
	* [certs] Generating "front-proxy-client" certificate and key
	* [certs] Generating "etcd/ca" certificate and key
	* [certs] Generating "etcd/server" certificate and key
	* [certs] etcd/server serving cert is signed for DNS names [cert-options-20210310203249-6496 localhost] and IPs [172.17.0.5 127.0.0.1 ::1]
	* [certs] Generating "etcd/peer" certificate and key
	* [certs] etcd/peer serving cert is signed for DNS names [cert-options-20210310203249-6496 localhost] and IPs [172.17.0.5 127.0.0.1 ::1]
	* [certs] Generating "etcd/healthcheck-client" certificate and key
	* [certs] Generating "apiserver-etcd-client" certificate and key
	* [certs] Generating "sa" key and public key
	* [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	* [kubeconfig] Writing "admin.conf" kubeconfig file
	* [kubeconfig] Writing "kubelet.conf" kubeconfig file
	* [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	* [kubeconfig] Writing "scheduler.conf" kubeconfig file
	* [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	* [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	* [kubelet-start] Starting the kubelet
	* [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	* [control-plane] Creating static Pod manifest for "kube-apiserver"
	* [control-plane] Creating static Pod manifest for "kube-controller-manager"
	* [control-plane] Creating static Pod manifest for "kube-scheduler"
	* [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	* [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	* [kubelet-check] Initial timeout of 40s passed.
	* 
	* 	Unfortunately, an error has occurred:
	* 		timed out waiting for the condition
	* 
	* 	This error is likely caused by:
	* 		- The kubelet is not running
	* 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	* 
	* 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	* 		- 'systemctl status kubelet'
	* 		- 'journalctl -xeu kubelet'
	* 
	* 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	* 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	* 
	* 	Here is one example how you may list all Kubernetes containers running in docker:
	* 		- 'docker ps -a | grep kube | grep -v pause'
	* 		Once you have found the failing container, you can inspect its logs with:
	* 		- 'docker logs CONTAINERID'
	* 
	* 
	* stderr:
	* 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	* 	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	* 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	* 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
	* 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	* error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	* To see the stack trace of this error execute with --v=5 or higher
	* 
	* I0310 20:41:36.971225   10404 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	* I0310 20:42:11.370633    9740 docker.go:388] Took 74.358730 seconds to copy over tarball
	* I0310 20:42:11.387450    9740 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	* I0310 20:42:29.112922    6776 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856: (1m25.8502131s)
	* I0310 20:42:29.112922    6776 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 from cache
	* I0310 20:42:29.113323    6776 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944
	* I0310 20:42:29.124191    6776 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944
	* I0310 20:42:29.912205    7164 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692: (1m5.6816904s)
	* I0310 20:42:29.916226    7164 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 from cache
	* I0310 20:42:29.916226    7164 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944
	* I0310 20:42:29.924045    7164 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944
	* I0310 20:42:55.962507    9740 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (44.575119s)
	* I0310 20:42:55.962820    9740 ssh_runner.go:100] rm: /preloaded.tar.lz4
	* I0310 20:42:55.564122    7164 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944: (25.6398471s)
	* I0310 20:42:55.564122    7164 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 from cache
	* I0310 20:42:55.564122    7164 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172
	* I0310 20:42:55.581284    7164 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172
	* I0310 20:42:57.950604   10404 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force": (1m20.9791017s)
	* I0310 20:42:57.965692   10404 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	* I0310 20:42:58.119188   10404 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format=
	* I0310 20:42:58.947687   10404 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	* I0310 20:42:58.957025   10404 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	* I0310 20:42:59.093327   10404 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	* stdout:
	* 
	* stderr:
	* ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
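The "Process exited with status 2" in the config check above is ordinary `ls` behavior when none of the listed files exist. A minimal local reproduction of that check, using temp-dir paths as stand-ins for the real /etc/kubernetes kubeconfig files:

```shell
# Sketch: GNU ls exits with status 2 ("serious trouble", e.g. a listed
# file cannot be accessed) when the kubeconfig files are absent, which
# is what makes minikube skip the stale-config cleanup.
# The paths below are hypothetical stand-ins, not the real files.
dir=$(mktemp -d)
ls -la "$dir/admin.conf" "$dir/kubelet.conf" 2>/dev/null
status=$?
echo "exit status: $status"
rm -rf "$dir"
```

On a Linux test host this prints `exit status: 2`, matching the log; BSD `ls` may report a different nonzero status.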
	* I0310 20:42:59.093327   10404 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	* I0310 20:42:57.905500    9740 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	* I0310 20:42:57.977200    9740 ssh_runner.go:316] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3145 bytes)
	* I0310 20:42:58.215816    9740 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	* I0310 20:42:59.995138    9740 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.7790493s)
	* I0310 20:43:00.008630    9740 ssh_runner.go:149] Run: sudo systemctl restart docker
	* I0310 20:43:02.018131    6776 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944: (32.8923025s)
	* I0310 20:43:02.018131    6776 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 from cache
	* I0310 20:43:02.018131    6776 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440
	* I0310 20:43:02.031430    6776 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440
	* I0310 20:43:06.638441    9740 ssh_runner.go:189] Completed: sudo systemctl restart docker: (6.6298198s)
	* I0310 20:43:06.652024    9740 ssh_runner.go:149] Run: docker images --format :
	* I0310 20:43:07.606204    9740 docker.go:423] Got preloaded images: -- stdout --
	* k8s.gcr.io/kube-apiserver:v1.20.5-rc.0
	* k8s.gcr.io/kube-controller-manager:v1.20.5-rc.0
	* k8s.gcr.io/kube-scheduler:v1.20.5-rc.0
	* k8s.gcr.io/kube-proxy:v1.20.5-rc.0
	* kubernetesui/dashboard:v2.1.0
	* gcr.io/k8s-minikube/storage-provisioner:v4
	* k8s.gcr.io/etcd:3.4.13-0
	* k8s.gcr.io/coredns:1.7.0
	* kubernetesui/metrics-scraper:v1.0.4
	* k8s.gcr.io/pause:3.2
	* <none>:<none>
	* <none>:<none>
	* <none>:<none>
	* <none>:<none>
	* <none>:<none>
	* <none>:<none>
	* <none>:<none>
	* 
	* -- /stdout --
	* I0310 20:43:07.606204    9740 cache_images.go:73] Images are preloaded, skipping loading
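The `<none>:<none>` rows in the image listing above are untagged intermediate layers. As an illustration of the general idiom (not minikube's actual implementation), such rows are typically filtered out before comparing a `docker images --format '{{.Repository}}:{{.Tag}}'`-style listing against an expected image set:

```shell
# Sample listing standing in for real `docker images` output; the
# grep drops the untagged <none>:<none> entries and counts the rest.
images='k8s.gcr.io/pause:3.2
<none>:<none>
k8s.gcr.io/etcd:3.4.13-0
<none>:<none>'
printf '%s\n' "$images" | grep -v '^<none>:<none>$'
tagged=$(printf '%s\n' "$images" | grep -cv '^<none>:<none>$')
echo "tagged=$tagged"
```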
	* I0310 20:43:07.617447    9740 ssh_runner.go:149] Run: docker info --format 
	* I0310 20:43:09.125267    9740 ssh_runner.go:189] Completed: docker info --format : (1.5069087s)
	* I0310 20:43:09.125267    9740 cni.go:74] Creating CNI manager for ""
	* I0310 20:43:09.125267    9740 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	* I0310 20:43:09.125267    9740 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	* I0310 20:43:09.125267    9740 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.6 APIServerPort:8443 KubernetesVersion:v1.20.5-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20210310201637-6496 NodeName:kubernetes-upgrade-20210310201637-6496 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	* I0310 20:43:09.134585    9740 kubeadm.go:154] kubeadm config:
	* apiVersion: kubeadm.k8s.io/v1beta2
	* kind: InitConfiguration
	* localAPIEndpoint:
	*   advertiseAddress: 172.17.0.6
	*   bindPort: 8443
	* bootstrapTokens:
	*   - groups:
	*       - system:bootstrappers:kubeadm:default-node-token
	*     ttl: 24h0m0s
	*     usages:
	*       - signing
	*       - authentication
	* nodeRegistration:
	*   criSocket: /var/run/dockershim.sock
	*   name: "kubernetes-upgrade-20210310201637-6496"
	*   kubeletExtraArgs:
	*     node-ip: 172.17.0.6
	*   taints: []
	* ---
	* apiVersion: kubeadm.k8s.io/v1beta2
	* kind: ClusterConfiguration
	* apiServer:
	*   certSANs: ["127.0.0.1", "localhost", "172.17.0.6"]
	*   extraArgs:
	*     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	* controllerManager:
	*   extraArgs:
	*     allocate-node-cidrs: "true"
	*     leader-elect: "false"
	* scheduler:
	*   extraArgs:
	*     leader-elect: "false"
	* certificatesDir: /var/lib/minikube/certs
	* clusterName: mk
	* controlPlaneEndpoint: control-plane.minikube.internal:8443
	* dns:
	*   type: CoreDNS
	* etcd:
	*   local:
	*     dataDir: /var/lib/minikube/etcd
	*     extraArgs:
	*       proxy-refresh-interval: "70000"
	* kubernetesVersion: v1.20.5-rc.0
	* networking:
	*   dnsDomain: cluster.local
	*   podSubnet: "10.244.0.0/16"
	*   serviceSubnet: 10.96.0.0/12
	* ---
	* apiVersion: kubelet.config.k8s.io/v1beta1
	* kind: KubeletConfiguration
	* authentication:
	*   x509:
	*     clientCAFile: /var/lib/minikube/certs/ca.crt
	* cgroupDriver: cgroupfs
	* clusterDomain: "cluster.local"
	* # disable disk resource management by default
	* imageGCHighThresholdPercent: 100
	* evictionHard:
	*   nodefs.available: "0%"
	*   nodefs.inodesFree: "0%"
	*   imagefs.available: "0%"
	* failSwapOn: false
	* staticPodPath: /etc/kubernetes/manifests
	* ---
	* apiVersion: kubeproxy.config.k8s.io/v1alpha1
	* kind: KubeProxyConfiguration
	* clusterCIDR: "10.244.0.0/16"
	* metricsBindAddress: 0.0.0.0:10249
	* 
	* I0310 20:43:09.134948    9740 kubeadm.go:919] kubelet [Unit]
	* Wants=docker.socket
	* 
	* [Service]
	* ExecStart=
	* ExecStart=/var/lib/minikube/binaries/v1.20.5-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-20210310201637-6496 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.6
	* 
	* [Install]
	*  config:
	* {KubernetesVersion:v1.20.5-rc.0 ClusterName:kubernetes-upgrade-20210310201637-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	* I0310 20:43:09.144021    9740 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.5-rc.0
	* I0310 20:43:09.216706    9740 binaries.go:44] Found k8s binaries, skipping transfer
	* I0310 20:43:09.234634    9740 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	* I0310 20:43:09.349630    9740 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	* I0310 20:43:09.484300    9740 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	* I0310 20:43:09.702141    9740 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1869 bytes)
	* I0310 20:43:10.020565    9740 ssh_runner.go:149] Run: grep 172.17.0.6	control-plane.minikube.internal$ /etc/hosts
	* I0310 20:43:10.089488    9740 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "172.17.0.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
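The grep-and-append pipeline in the line above is how minikube pins `control-plane.minikube.internal` to the node IP in /etc/hosts. The same idiom can be sketched against a throwaway copy (the temp file and sample entries below are stand-ins, not the real /etc/hosts):

```shell
# Rewrite a hosts-style file: filter out any stale control-plane
# entry, then append the current IP, mirroring the logged pipeline.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.17.0.9\tcontrol-plane.minikube.internal\n' > "$hosts"
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"
  printf '172.17.0.6\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
entries=$(grep -c 'control-plane.minikube.internal' "$hosts.new")
new_ip=$(awk '/control-plane.minikube.internal/ {print $1}' "$hosts.new")
echo "entries=$entries new_ip=$new_ip"
rm -f "$hosts" "$hosts.new"
```

The stale `172.17.0.9` line is dropped and exactly one entry with the new IP remains, which is why the log runs a plain `grep` first to see whether the rewrite is needed at all.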
	* I0310 20:43:10.265577    9740 certs.go:52] Setting up C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496 for IP: 172.17.0.6
	* I0310 20:43:10.266685    9740 certs.go:171] skipping minikubeCA CA generation: C:\Users\jenkins\.minikube\ca.key
	* I0310 20:43:10.266685    9740 certs.go:171] skipping proxyClientCA CA generation: C:\Users\jenkins\.minikube\proxy-client-ca.key
	* I0310 20:43:10.267449    9740 certs.go:275] skipping minikube-user signed cert generation: C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\client.key
	* I0310 20:43:10.267728    9740 certs.go:275] skipping minikube signed cert generation: C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\apiserver.key.76cb2290
	* I0310 20:43:10.268060    9740 certs.go:275] skipping aggregator signed cert generation: C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\proxy-client.key
	* I0310 20:43:10.269677    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992.pem (1338 bytes)
	* W0310 20:43:10.270201    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.270201    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140.pem (1338 bytes)
	* W0310 20:43:10.270652    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.270652    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156.pem (1338 bytes)
	* W0310 20:43:10.271021    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.271021    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056.pem (1338 bytes)
	* W0310 20:43:10.271539    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.271539    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476.pem (1338 bytes)
	* W0310 20:43:10.271770    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.271770    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728.pem (1338 bytes)
	* W0310 20:43:10.272221    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.272221    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984.pem (1338 bytes)
	* W0310 20:43:10.272461    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.272645    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232.pem (1338 bytes)
	* W0310 20:43:10.272789    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.272996    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512.pem (1338 bytes)
	* W0310 20:43:10.273341    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.273341    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056.pem (1338 bytes)
	* W0310 20:43:10.273607    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.273769    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516.pem (1338 bytes)
	* W0310 20:43:10.273960    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.273960    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352.pem (1338 bytes)
	* W0310 20:43:10.274450    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.274637    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920.pem (1338 bytes)
	* W0310 20:43:10.275071    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.275240    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052.pem (1338 bytes)
	* W0310 20:43:10.275545    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.276229    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452.pem (1338 bytes)
	* W0310 20:43:10.276444    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.276720    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588.pem (1338 bytes)
	* W0310 20:43:10.277006    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.277169    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944.pem (1338 bytes)
	* W0310 20:43:10.277568    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.277734    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040.pem (1338 bytes)
	* W0310 20:43:10.278059    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.278347    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172.pem (1338 bytes)
	* W0310 20:43:10.278641    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.278786    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372.pem (1338 bytes)
	* W0310 20:43:10.279075    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.279345    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396.pem (1338 bytes)
	* W0310 20:43:10.279572    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.279572    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700.pem (1338 bytes)
	* W0310 20:43:10.279572    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.280212    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736.pem (1338 bytes)
	* W0310 20:43:10.280212    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.280620    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368.pem (1338 bytes)
	* W0310 20:43:10.280620    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.280620    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492.pem (1338 bytes)
	* W0310 20:43:10.281246    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.281246    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496.pem (1338 bytes)
	* W0310 20:43:10.281699    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.281699    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552.pem (1338 bytes)
	* W0310 20:43:10.281699    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.282301    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692.pem (1338 bytes)
	* W0310 20:43:10.282301    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.282699    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856.pem (1338 bytes)
	* W0310 20:43:10.282699    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.282699    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024.pem (1338 bytes)
	* W0310 20:43:10.283317    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.283317    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160.pem (1338 bytes)
	* W0310 20:43:10.283317    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.283713    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432.pem (1338 bytes)
	* W0310 20:43:10.283713    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.283713    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440.pem (1338 bytes)
	* W0310 20:43:10.283713    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.283713    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452.pem (1338 bytes)
	* W0310 20:43:10.284555    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.284555    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800.pem (1338 bytes)
	* W0310 20:43:10.284555    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.284555    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464.pem (1338 bytes)
	* W0310 20:43:10.284555    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.285524    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748.pem (1338 bytes)
	* W0310 20:43:10.285524    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.285524    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088.pem (1338 bytes)
	* W0310 20:43:10.285524    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.285524    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520.pem (1338 bytes)
	* W0310 20:43:10.286529    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520_empty.pem, impossibly tiny 0 bytes
	* I0310 20:43:10.286529    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca-key.pem (1679 bytes)
	* I0310 20:43:10.286529    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca.pem (1078 bytes)
	* I0310 20:43:10.286529    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\cert.pem (1123 bytes)
	* I0310 20:43:10.287526    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\key.pem (1679 bytes)
	* I0310 20:43:10.294033    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	* I0310 20:43:10.690141    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	* I0310 20:43:11.022056    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	* I0310 20:43:11.366780    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	* I0310 20:43:11.738170    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	* I0310 20:43:12.041730    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	* I0310 20:43:12.498359    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	* I0310 20:43:12.700195    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	* I0310 20:43:13.049208    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1476.pem --> /usr/share/ca-certificates/1476.pem (1338 bytes)
	* I0310 20:43:13.310733    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4052.pem --> /usr/share/ca-certificates/4052.pem (1338 bytes)
	* I0310 20:43:13.534309    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4588.pem --> /usr/share/ca-certificates/4588.pem (1338 bytes)
	* I0310 20:43:13.882850    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\2512.pem --> /usr/share/ca-certificates/2512.pem (1338 bytes)
	* I0310 20:43:14.233126    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1140.pem --> /usr/share/ca-certificates/1140.pem (1338 bytes)
	* I0310 20:43:14.484549    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6492.pem --> /usr/share/ca-certificates/6492.pem (1338 bytes)
	* I0310 20:43:14.709281    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\232.pem --> /usr/share/ca-certificates/232.pem (1338 bytes)
	* I0310 20:43:15.032054    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6856.pem --> /usr/share/ca-certificates/6856.pem (1338 bytes)
	* I0310 20:43:15.349068    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4944.pem --> /usr/share/ca-certificates/4944.pem (1338 bytes)
	* I0310 20:43:15.682792    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8464.pem --> /usr/share/ca-certificates/8464.pem (1338 bytes)
	* I0310 20:43:16.017709    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\12056.pem --> /usr/share/ca-certificates/12056.pem (1338 bytes)
	* I0310 20:43:16.303966    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6368.pem --> /usr/share/ca-certificates/6368.pem (1338 bytes)
	* I0310 20:43:16.488745    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5172.pem --> /usr/share/ca-certificates/5172.pem (1338 bytes)
	* I0310 20:43:16.814106    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6496.pem --> /usr/share/ca-certificates/6496.pem (1338 bytes)
	* I0310 20:43:17.177281    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1728.pem --> /usr/share/ca-certificates/1728.pem (1338 bytes)
	* I0310 20:43:17.575665    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5700.pem --> /usr/share/ca-certificates/5700.pem (1338 bytes)
	* I0310 20:43:17.931051    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8748.pem --> /usr/share/ca-certificates/8748.pem (1338 bytes)
	* I0310 20:43:18.253811    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6692.pem --> /usr/share/ca-certificates/6692.pem (1338 bytes)
	* I0310 20:43:18.573008    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5736.pem --> /usr/share/ca-certificates/5736.pem (1338 bytes)
	* I0310 20:43:18.822304    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5372.pem --> /usr/share/ca-certificates/5372.pem (1338 bytes)
	* I0310 20:43:19.138757    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7440.pem --> /usr/share/ca-certificates/7440.pem (1338 bytes)
	* I0310 20:43:19.376578    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	* I0310 20:43:19.712680    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3056.pem --> /usr/share/ca-certificates/3056.pem (1338 bytes)
	* I0310 20:43:20.001669    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1984.pem --> /usr/share/ca-certificates/1984.pem (1338 bytes)
	* I0310 20:43:20.271385    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7432.pem --> /usr/share/ca-certificates/7432.pem (1338 bytes)
	* I0310 20:43:20.502633    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9520.pem --> /usr/share/ca-certificates/9520.pem (1338 bytes)
	* I0310 20:43:20.800521    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3516.pem --> /usr/share/ca-certificates/3516.pem (1338 bytes)
	* I0310 20:43:21.093606    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1156.pem --> /usr/share/ca-certificates/1156.pem (1338 bytes)
	* I0310 20:43:21.465420    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7024.pem --> /usr/share/ca-certificates/7024.pem (1338 bytes)
	* I0310 20:43:21.784777    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5396.pem --> /usr/share/ca-certificates/5396.pem (1338 bytes)
	* I0310 20:43:22.032356    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9088.pem --> /usr/share/ca-certificates/9088.pem (1338 bytes)
	* I0310 20:43:22.363274    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\352.pem --> /usr/share/ca-certificates/352.pem (1338 bytes)
	* I0310 20:43:22.712989    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4452.pem --> /usr/share/ca-certificates/4452.pem (1338 bytes)
	* I0310 20:43:23.099056    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7160.pem --> /usr/share/ca-certificates/7160.pem (1338 bytes)
	* I0310 20:43:23.408408    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\10992.pem --> /usr/share/ca-certificates/10992.pem (1338 bytes)
	* I0310 20:43:23.672678    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\800.pem --> /usr/share/ca-certificates/800.pem (1338 bytes)
	* I0310 20:43:23.921038    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6552.pem --> /usr/share/ca-certificates/6552.pem (1338 bytes)
	* I0310 20:43:24.342930    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5040.pem --> /usr/share/ca-certificates/5040.pem (1338 bytes)
	* I0310 20:43:24.663179    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3920.pem --> /usr/share/ca-certificates/3920.pem (1338 bytes)
	* I0310 20:43:24.910639    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	* I0310 20:43:25.200746    9740 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	* I0310 20:43:25.439872    9740 ssh_runner.go:149] Run: openssl version
	* I0310 20:43:25.507866    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5700.pem && ln -fs /usr/share/ca-certificates/5700.pem /etc/ssl/certs/5700.pem"
	* I0310 20:43:25.589879    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5700.pem
	* I0310 20:43:25.620711    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  1 19:58 /usr/share/ca-certificates/5700.pem
	* I0310 20:43:25.633770    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5700.pem
	* I0310 20:43:25.769482    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5700.pem /etc/ssl/certs/51391683.0"
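[editor's note] The repeated `test -s … && ln -fs …` / `openssl x509 -hash` cycle above is minikube installing each CA certificate under OpenSSL's subject-hash naming scheme, where the lookup symlink is named `/etc/ssl/certs/<hash>.0`. A minimal sketch of that convention, using a throwaway self-signed certificate (the `/tmp` paths and `example-ca` subject are illustrative, not from this log):

```shell
# Generate a disposable self-signed cert to stand in for one of the
# minikube CA certs being copied above (hypothetical paths/subject).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.pem -days 1 -subj "/CN=example-ca" 2>/dev/null

# `openssl x509 -hash -noout` prints the subject hash OpenSSL uses when
# searching /etc/ssl/certs; the symlink is named "<hash>.0".
hash=$(openssl x509 -hash -noout -in /tmp/demo-ca.pem)
echo "would link: /etc/ssl/certs/${hash}.0 -> /tmp/demo-ca.pem"
```

Note that every numbered `.pem` in this log maps to the same link name, `51391683.0`, which suggests the generated certs share a subject and therefore collide on the same hash; that is why each cycle re-tests and re-points the same symlink.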
	* I0310 20:43:25.883930    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8748.pem && ln -fs /usr/share/ca-certificates/8748.pem /etc/ssl/certs/8748.pem"
	* I0310 20:43:25.965927    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8748.pem
	* I0310 20:43:26.002150    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 19:09 /usr/share/ca-certificates/8748.pem
	* I0310 20:43:26.015770    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8748.pem
	* I0310 20:43:26.157026    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8748.pem /etc/ssl/certs/51391683.0"
	* I0310 20:43:26.007480    6776 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440: (23.976084s)
	* I0310 20:43:26.008152    6776 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 from cache
	* I0310 20:43:26.008554    6776 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552
	* I0310 20:43:26.022245    6776 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552
	* I0310 20:43:26.314947    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8464.pem && ln -fs /usr/share/ca-certificates/8464.pem /etc/ssl/certs/8464.pem"
	* I0310 20:43:26.462308    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8464.pem
	* I0310 20:43:26.516418    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:32 /usr/share/ca-certificates/8464.pem
	* I0310 20:43:26.528884    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8464.pem
	* I0310 20:43:26.664469    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8464.pem /etc/ssl/certs/51391683.0"
	* I0310 20:43:26.782235    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12056.pem && ln -fs /usr/share/ca-certificates/12056.pem /etc/ssl/certs/12056.pem"
	* I0310 20:43:26.867982    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/12056.pem
	* I0310 20:43:26.905162    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  6 07:21 /usr/share/ca-certificates/12056.pem
	* I0310 20:43:26.917984    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12056.pem
	* I0310 20:43:26.976936    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12056.pem /etc/ssl/certs/51391683.0"
	* I0310 20:43:27.044425    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6368.pem && ln -fs /usr/share/ca-certificates/6368.pem /etc/ssl/certs/6368.pem"
	* I0310 20:43:27.127900    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6368.pem
	* I0310 20:43:27.154767    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 19:11 /usr/share/ca-certificates/6368.pem
	* I0310 20:43:27.167534    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6368.pem
	* I0310 20:43:27.245667    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6368.pem /etc/ssl/certs/51391683.0"
	* I0310 20:43:27.301969    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5172.pem && ln -fs /usr/share/ca-certificates/5172.pem /etc/ssl/certs/5172.pem"
	* I0310 20:43:27.458937    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5172.pem
	* I0310 20:43:27.518220    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 26 21:25 /usr/share/ca-certificates/5172.pem
	* I0310 20:43:27.537810    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5172.pem
	* I0310 20:43:27.681710    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5172.pem /etc/ssl/certs/51391683.0"
	* I0310 20:43:27.782667    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6496.pem && ln -fs /usr/share/ca-certificates/6496.pem /etc/ssl/certs/6496.pem"
	* I0310 20:43:27.934572    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6496.pem
	* I0310 20:43:27.981053    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 19:16 /usr/share/ca-certificates/6496.pem
	* I0310 20:43:28.006920    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6496.pem
	* I0310 20:43:28.097343    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6496.pem /etc/ssl/certs/51391683.0"
	* I0310 20:43:28.233259    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1728.pem && ln -fs /usr/share/ca-certificates/1728.pem /etc/ssl/certs/1728.pem"
	* I0310 20:43:28.350581    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1728.pem
	* I0310 20:43:28.410979    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 20:57 /usr/share/ca-certificates/1728.pem
	* I0310 20:43:28.433063    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1728.pem
	* I0310 20:43:28.559700    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0"
	* I0310 20:43:28.823306    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6692.pem && ln -fs /usr/share/ca-certificates/6692.pem /etc/ssl/certs/6692.pem"
	* I0310 20:43:29.049275    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6692.pem
	* I0310 20:43:29.100344    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 20:42 /usr/share/ca-certificates/6692.pem
	* I0310 20:43:29.124107    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6692.pem
	* I0310 20:43:29.270728    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6692.pem /etc/ssl/certs/51391683.0"
	* I0310 20:43:29.488322    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3056.pem && ln -fs /usr/share/ca-certificates/3056.pem /etc/ssl/certs/3056.pem"
	* I0310 20:43:29.633006    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3056.pem
	* I0310 20:43:29.699773    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:11 /usr/share/ca-certificates/3056.pem
	* I0310 20:43:29.710208    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3056.pem
	* I0310 20:43:29.781308    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3056.pem /etc/ssl/certs/51391683.0"
	* I0310 20:43:29.849114    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1984.pem && ln -fs /usr/share/ca-certificates/1984.pem /etc/ssl/certs/1984.pem"
	* I0310 20:43:29.990898    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1984.pem
	* I0310 20:43:30.019724    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 21:55 /usr/share/ca-certificates/1984.pem
	* I0310 20:43:30.031881    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1984.pem
	* I0310 20:43:30.110683    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1984.pem /etc/ssl/certs/51391683.0"
	* I0310 20:43:30.177329    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5736.pem && ln -fs /usr/share/ca-certificates/5736.pem /etc/ssl/certs/5736.pem"
	* I0310 20:43:30.281916    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5736.pem
	* I0310 20:43:30.328408    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 25 23:18 /usr/share/ca-certificates/5736.pem
	* I0310 20:43:30.353932    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5736.pem
	* I0310 20:43:30.413554    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5736.pem /etc/ssl/certs/51391683.0"
	* I0310 20:43:30.505117    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5372.pem && ln -fs /usr/share/ca-certificates/5372.pem /etc/ssl/certs/5372.pem"
	* I0310 20:43:30.625696    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5372.pem
	* I0310 20:43:30.657868    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 23 00:40 /usr/share/ca-certificates/5372.pem
	* I0310 20:43:30.671673    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5372.pem
	* I0310 20:43:30.856252    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5372.pem /etc/ssl/certs/51391683.0"
	* I0310 20:43:30.941275    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7440.pem && ln -fs /usr/share/ca-certificates/7440.pem /etc/ssl/certs/7440.pem"
	* I0310 20:43:31.034827    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7440.pem
	* I0310 20:43:31.075529    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 13 14:39 /usr/share/ca-certificates/7440.pem
	* I0310 20:43:31.088302    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7440.pem
	* I0310 20:43:31.152406    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7440.pem /etc/ssl/certs/51391683.0"
	* I0310 20:43:31.232263    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	* W0310 20:43:28.283488   21276 out.go:191] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	* stdout:
	* [init] Using Kubernetes version: v1.20.2
	* [preflight] Running pre-flight checks
	* [preflight] Pulling images required for setting up a Kubernetes cluster
	* [preflight] This might take a minute or two, depending on the speed of your internet connection
	* [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	* [certs] Using certificateDir folder "/var/lib/minikube/certs"
	* [certs] Using existing ca certificate authority
	* [certs] Using existing apiserver certificate and key on disk
	* [certs] Generating "apiserver-kubelet-client" certificate and key
	* [certs] Generating "front-proxy-ca" certificate and key
	* [certs] Generating "front-proxy-client" certificate and key
	* [certs] Generating "etcd/ca" certificate and key
	* [certs] Generating "etcd/server" certificate and key
	* [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-20210310203447-6496 localhost] and IPs [172.17.0.4 127.0.0.1 ::1]
	* [certs] Generating "etcd/peer" certificate and key
	* [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-20210310203447-6496 localhost] and IPs [172.17.0.4 127.0.0.1 ::1]
	* [certs] Generating "etcd/healthcheck-client" certificate and key
	* [certs] Generating "apiserver-etcd-client" certificate and key
	* [certs] Generating "sa" key and public key
	* [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	* [kubeconfig] Writing "admin.conf" kubeconfig file
	* [kubeconfig] Writing "kubelet.conf" kubeconfig file
	* [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	* [kubeconfig] Writing "scheduler.conf" kubeconfig file
	* [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	* [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	* [kubelet-start] Starting the kubelet
	* [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	* [control-plane] Creating static Pod manifest for "kube-apiserver"
	* [control-plane] Creating static Pod manifest for "kube-controller-manager"
	* [control-plane] Creating static Pod manifest for "kube-scheduler"
	* [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	* [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	* [kubelet-check] Initial timeout of 40s passed.
	* 
	* 	Unfortunately, an error has occurred:
	* 		timed out waiting for the condition
	* 
	* 	This error is likely caused by:
	* 		- The kubelet is not running
	* 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	* 
	* 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	* 		- 'systemctl status kubelet'
	* 		- 'journalctl -xeu kubelet'
	* 
	* 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	* 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	* 
	* 	Here is one example how you may list all Kubernetes containers running in docker:
	* 		- 'docker ps -a | grep kube | grep -v pause'
	* 		Once you have found the failing container, you can inspect its logs with:
	* 		- 'docker logs CONTAINERID'
	* 
	* 
	* stderr:
	* 	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	* 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	* 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
	* 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	* error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	* To see the stack trace of this error execute with --v=5 or higher
	* 
	* I0310 20:43:28.286013   21276 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
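[editor's note] The kubeadm failure above points first at the kubelet and then at the control-plane containers. Its suggested `docker ps -a | grep kube | grep -v pause` filter can be sketched against hypothetical `docker ps -a` output (the container names and IDs below are made up for illustration):

```shell
# Hypothetical `docker ps -a` lines: two Kubernetes containers plus a
# pause-sandbox container that should be filtered out.
ps_output='abc123 k8s_kube-apiserver_kube-system Exited (1)
def456 k8s_POD_kube-apiserver_kube-system (pause image) Up
ghi789 k8s_etcd_kube-system Up'

# Keep Kubernetes containers, drop pause sandboxes, per the kubeadm hint.
printf '%s\n' "$ps_output" | grep kube | grep -v pause
```

A surviving `Exited` entry is the candidate for the next step kubeadm suggests, `docker logs CONTAINERID` (with the real container ID substituted).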
	* I0310 20:43:31.323994    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	* I0310 20:43:31.356295    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Jan  5 23:33 /usr/share/ca-certificates/minikubeCA.pem
	* I0310 20:43:31.363493    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	* I0310 20:43:31.441399    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	* I0310 20:43:31.542997    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3516.pem && ln -fs /usr/share/ca-certificates/3516.pem /etc/ssl/certs/3516.pem"
	* I0310 20:43:31.735566    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3516.pem
	* I0310 20:43:31.781889    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 19:10 /usr/share/ca-certificates/3516.pem
	* I0310 20:43:31.802185    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3516.pem
	* I0310 20:43:31.898673    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3516.pem /etc/ssl/certs/51391683.0"
	* I0310 20:43:32.025326    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1156.pem && ln -fs /usr/share/ca-certificates/1156.pem /etc/ssl/certs/1156.pem"
	* I0310 20:43:32.124993    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1156.pem
	* I0310 20:43:32.159171    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 00:26 /usr/share/ca-certificates/1156.pem
	* I0310 20:43:32.164045    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1156.pem
	* I0310 20:43:32.242050    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1156.pem /etc/ssl/certs/51391683.0"
	* I0310 20:43:32.353777    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7432.pem && ln -fs /usr/share/ca-certificates/7432.pem /etc/ssl/certs/7432.pem"
	* I0310 20:43:32.442208    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7432.pem
	* I0310 20:43:32.473317    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 17:58 /usr/share/ca-certificates/7432.pem
	* I0310 20:43:32.488648    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7432.pem
	* I0310 20:43:32.576064    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7432.pem /etc/ssl/certs/51391683.0"
	* I0310 20:43:32.699685    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9520.pem && ln -fs /usr/share/ca-certificates/9520.pem /etc/ssl/certs/9520.pem"
	* I0310 20:43:32.854066    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9520.pem
	* I0310 20:43:32.923900    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 14:54 /usr/share/ca-certificates/9520.pem
	* I0310 20:43:32.932913    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9520.pem
	* I0310 20:43:33.013454    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9520.pem /etc/ssl/certs/51391683.0"
	* I0310 20:43:33.086597    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9088.pem && ln -fs /usr/share/ca-certificates/9088.pem /etc/ssl/certs/9088.pem"
	* I0310 20:43:33.207468    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9088.pem
	* I0310 20:43:33.232721    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 00:22 /usr/share/ca-certificates/9088.pem
	* I0310 20:43:33.247885    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9088.pem
	* I0310 20:43:33.299737    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9088.pem /etc/ssl/certs/51391683.0"
	* I0310 20:43:33.378904    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7024.pem && ln -fs /usr/share/ca-certificates/7024.pem /etc/ssl/certs/7024.pem"
	* I0310 20:43:33.479043    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7024.pem
	* I0310 20:43:33.511016    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 23:11 /usr/share/ca-certificates/7024.pem
	* I0310 20:43:33.522431    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7024.pem
	* I0310 20:43:33.590494    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7024.pem /etc/ssl/certs/51391683.0"
	* I0310 20:43:33.675096    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5396.pem && ln -fs /usr/share/ca-certificates/5396.pem /etc/ssl/certs/5396.pem"
	* I0310 20:43:33.834087    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5396.pem
	* I0310 20:43:33.910774    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  8 23:38 /usr/share/ca-certificates/5396.pem
	* I0310 20:43:33.921456    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5396.pem
	* I0310 20:43:34.035403    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5396.pem /etc/ssl/certs/51391683.0"
	* I0310 20:43:34.190432    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/800.pem && ln -fs /usr/share/ca-certificates/800.pem /etc/ssl/certs/800.pem"
	* I0310 20:43:34.296231    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/800.pem
	* I0310 20:43:34.333164    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 24 01:48 /usr/share/ca-certificates/800.pem
	* I0310 20:43:34.344921    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/800.pem
	* I0310 20:43:34.454850    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/800.pem /etc/ssl/certs/51391683.0"
	* I0310 20:43:34.560630    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6552.pem && ln -fs /usr/share/ca-certificates/6552.pem /etc/ssl/certs/6552.pem"
	* I0310 20:43:34.674102    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6552.pem
	* I0310 20:43:34.714367    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 19 22:08 /usr/share/ca-certificates/6552.pem
	* I0310 20:43:34.724462    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6552.pem
	* I0310 20:43:34.770949    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6552.pem /etc/ssl/certs/51391683.0"
	* I0310 20:43:34.825171    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/352.pem && ln -fs /usr/share/ca-certificates/352.pem /etc/ssl/certs/352.pem"
	* I0310 20:43:34.932747    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/352.pem
	* I0310 20:43:34.979505    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 12 14:51 /usr/share/ca-certificates/352.pem
	* I0310 20:43:34.989780    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/352.pem
	* I0310 20:43:35.074345    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/352.pem /etc/ssl/certs/51391683.0"
	* I0310 20:43:35.178899    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4452.pem && ln -fs /usr/share/ca-certificates/4452.pem /etc/ssl/certs/4452.pem"
	* I0310 20:43:35.344157    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4452.pem
	* I0310 20:43:35.407088    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 22 23:48 /usr/share/ca-certificates/4452.pem
	* I0310 20:43:35.421935    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4452.pem
	* I0310 20:43:35.506568    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4452.pem /etc/ssl/certs/51391683.0"
	* I0310 20:43:35.636188    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7160.pem && ln -fs /usr/share/ca-certificates/7160.pem /etc/ssl/certs/7160.pem"
	* I0310 20:43:35.806549    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7160.pem
	* I0310 20:43:35.845252    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 12 04:51 /usr/share/ca-certificates/7160.pem
	* I0310 20:43:35.855067    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7160.pem
	* I0310 20:43:35.947625    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7160.pem /etc/ssl/certs/51391683.0"
	* I0310 20:43:36.092481    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10992.pem && ln -fs /usr/share/ca-certificates/10992.pem /etc/ssl/certs/10992.pem"
	* I0310 20:43:36.231422    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/10992.pem
	* I0310 20:43:36.267498    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 21:44 /usr/share/ca-certificates/10992.pem
	* I0310 20:43:32.745668    7164 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172: (37.164152s)
	* I0310 20:43:32.745668    7164 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 from cache
	* I0310 20:43:32.745668    7164 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800
	* I0310 20:43:32.756293    7164 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800
	* I0310 20:43:36.282581    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10992.pem
	* I0310 20:43:36.411927    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10992.pem /etc/ssl/certs/51391683.0"
	* I0310 20:43:36.510587    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7452.pem && ln -fs /usr/share/ca-certificates/7452.pem /etc/ssl/certs/7452.pem"
	* I0310 20:43:36.619525    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7452.pem
	* I0310 20:43:36.676251    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 20 00:41 /usr/share/ca-certificates/7452.pem
	* I0310 20:43:36.686090    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7452.pem
	* I0310 20:43:36.748928    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7452.pem /etc/ssl/certs/51391683.0"
	* I0310 20:43:36.839144    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5040.pem && ln -fs /usr/share/ca-certificates/5040.pem /etc/ssl/certs/5040.pem"
	* I0310 20:43:36.968040    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5040.pem
	* I0310 20:43:37.076803    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 08:36 /usr/share/ca-certificates/5040.pem
	* I0310 20:43:37.090806    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5040.pem
	* I0310 20:43:37.173879    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5040.pem /etc/ssl/certs/51391683.0"
	* I0310 20:43:37.283979    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3920.pem && ln -fs /usr/share/ca-certificates/3920.pem /etc/ssl/certs/3920.pem"
	* I0310 20:43:37.412917    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3920.pem
	* I0310 20:43:37.496109    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 22:06 /usr/share/ca-certificates/3920.pem
	* I0310 20:43:37.514167    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3920.pem
	* I0310 20:43:37.733832    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3920.pem /etc/ssl/certs/51391683.0"
	* I0310 20:43:37.862256    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/232.pem && ln -fs /usr/share/ca-certificates/232.pem /etc/ssl/certs/232.pem"
	* I0310 20:43:37.946409    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/232.pem
	* I0310 20:43:37.982328    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 28 02:13 /usr/share/ca-certificates/232.pem
	* I0310 20:43:37.991856    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/232.pem
	* I0310 20:43:38.049635    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/232.pem /etc/ssl/certs/51391683.0"
	* I0310 20:43:38.138931    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6856.pem && ln -fs /usr/share/ca-certificates/6856.pem /etc/ssl/certs/6856.pem"
	* I0310 20:43:38.257549    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6856.pem
	* I0310 20:43:38.296273    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 00:21 /usr/share/ca-certificates/6856.pem
	* I0310 20:43:38.306568    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6856.pem
	* I0310 20:43:38.353537    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6856.pem /etc/ssl/certs/51391683.0"
	* I0310 20:43:38.440358    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1476.pem && ln -fs /usr/share/ca-certificates/1476.pem /etc/ssl/certs/1476.pem"
	* I0310 20:43:38.687763    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1476.pem
	* I0310 20:43:38.760621    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:50 /usr/share/ca-certificates/1476.pem
	* I0310 20:43:38.775303    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1476.pem
	* I0310 20:43:38.887508    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1476.pem /etc/ssl/certs/51391683.0"
	* I0310 20:43:39.109869    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4052.pem && ln -fs /usr/share/ca-certificates/4052.pem /etc/ssl/certs/4052.pem"
	* I0310 20:43:39.334686    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4052.pem
	* I0310 20:43:39.381001    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 18:40 /usr/share/ca-certificates/4052.pem
	* I0310 20:43:39.398216    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4052.pem
	* I0310 20:43:39.517736    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4052.pem /etc/ssl/certs/51391683.0"
	* I0310 20:43:39.587752    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4588.pem && ln -fs /usr/share/ca-certificates/4588.pem /etc/ssl/certs/4588.pem"
	* I0310 20:43:39.654817    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4588.pem
	* I0310 20:43:39.680405    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  3 21:41 /usr/share/ca-certificates/4588.pem
	* I0310 20:43:39.687029    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4588.pem
	* I0310 20:43:39.774960    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4588.pem /etc/ssl/certs/51391683.0"
	* I0310 20:43:39.851392    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2512.pem && ln -fs /usr/share/ca-certificates/2512.pem /etc/ssl/certs/2512.pem"
	* I0310 20:43:39.937768    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2512.pem
	* I0310 20:43:39.972501    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:32 /usr/share/ca-certificates/2512.pem
	* I0310 20:43:39.991366    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2512.pem
	* I0310 20:43:40.063146    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2512.pem /etc/ssl/certs/51391683.0"
	* I0310 20:43:40.173682    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1140.pem && ln -fs /usr/share/ca-certificates/1140.pem /etc/ssl/certs/1140.pem"
	* I0310 20:43:40.266996    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1140.pem
	* I0310 20:43:40.314983    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 02:25 /usr/share/ca-certificates/1140.pem
	* I0310 20:43:40.326412    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1140.pem
	* I0310 20:43:40.390447    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1140.pem /etc/ssl/certs/51391683.0"
	* I0310 20:43:40.532987    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6492.pem && ln -fs /usr/share/ca-certificates/6492.pem /etc/ssl/certs/6492.pem"
	* I0310 20:43:41.227347    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6492.pem
	* I0310 20:43:41.296128    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 01:11 /usr/share/ca-certificates/6492.pem
	* I0310 20:43:41.316518    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6492.pem
	* I0310 20:43:41.367884    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6492.pem /etc/ssl/certs/51391683.0"
	* I0310 20:43:41.463648    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4944.pem && ln -fs /usr/share/ca-certificates/4944.pem /etc/ssl/certs/4944.pem"
	* I0310 20:43:41.626599    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4944.pem
	* I0310 20:43:41.654586    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  9 23:40 /usr/share/ca-certificates/4944.pem
	* I0310 20:43:41.662529    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4944.pem
	* I0310 20:43:41.740844    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4944.pem /etc/ssl/certs/51391683.0"
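The repeated `openssl x509 -hash` / `ln -fs` pairs above are minikube installing each CA certificate under OpenSSL's hash-based lookup directory. A minimal sketch of that pattern, using a throwaway self-signed cert (all paths here are illustrative, not the ones from the log):

```shell
set -e
tmp=$(mktemp -d)
# generate a self-signed cert just to have something to hash
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout "$tmp/demo.key" -out "$tmp/demo.pem" -days 1 2>/dev/null
# compute the subject-hash that names the symlink (e.g. 51391683)
hash=$(openssl x509 -hash -noout -in "$tmp/demo.pem")
# minikube's equivalent step:
#   test -L /etc/ssl/certs/$hash.0 || ln -fs /path/to/cert.pem /etc/ssl/certs/$hash.0
ln -fs "$tmp/demo.pem" "$tmp/$hash.0"
echo "linked as $hash.0"
```

The `<hash>.0` naming is what lets OpenSSL resolve a CA by hashing the subject name instead of scanning every file in the directory.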
	* I0310 20:43:41.796078    9740 kubeadm.go:385] StartCluster: {Name:kubernetes-upgrade-20210310201637-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.5-rc.0 ClusterName:kubernetes-upgrade-20210310201637-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.6 Port:8443 KubernetesVersion:v1.20.5-rc.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 20:43:41.813711    9740 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format=
	* I0310 20:43:42.765590    9740 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	* I0310 20:43:42.904770    9740 kubeadm.go:396] found existing configuration files, will attempt cluster restart
	* I0310 20:43:42.905260    9740 kubeadm.go:594] restartCluster start
	* I0310 20:43:42.915361    9740 ssh_runner.go:149] Run: sudo test -d /data/minikube
	* I0310 20:43:43.033241    9740 kubeadm.go:125] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* I0310 20:43:43.042892    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	* I0310 20:43:43.694855    9740 kubeconfig.go:117] verify returned: extract IP: "kubernetes-upgrade-20210310201637-6496" does not appear in C:\Users\jenkins/.kube/config
	* I0310 20:43:43.696778    9740 kubeconfig.go:128] "kubernetes-upgrade-20210310201637-6496" context is missing from C:\Users\jenkins/.kube/config - will repair!
	* I0310 20:43:43.698888    9740 lock.go:36] WriteFile acquiring C:\Users\jenkins/.kube/config: {Name:mk815881434fcb75cb1a8bc7f2c24cf067660bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 20:43:43.723114    9740 kapi.go:59] client config for kubernetes-upgrade-20210310201637-6496: &rest.Config{Host:"https://127.0.0.1:55130", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins\\.minikube\\profiles\\kubernetes-upgrade-20210310201637-6496/client.crt", KeyFile:"C:\\Users\\jenkins\\.minikube\\profiles\\kubernetes-upgrade-20210310201637-6496/client.key", CAFile:"C:\\Users\\jenkins\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2611020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)}
	* I0310 20:43:43.759360    9740 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	* I0310 20:43:43.851753    9740 kubeadm.go:562] needs reconfigure: configs differ:
	* -- stdout --
	* --- /var/tmp/minikube/kubeadm.yaml	2021-03-10 20:25:35.536491000 +0000
	* +++ /var/tmp/minikube/kubeadm.yaml.new	2021-03-10 20:43:09.982217000 +0000
	* @@ -1,4 +1,4 @@
	* -apiVersion: kubeadm.k8s.io/v1beta1
	* +apiVersion: kubeadm.k8s.io/v1beta2
	*  kind: InitConfiguration
	*  localAPIEndpoint:
	*    advertiseAddress: 172.17.0.6
	* @@ -17,7 +17,7 @@
	*      node-ip: 172.17.0.6
	*    taints: []
	*  ---
	* -apiVersion: kubeadm.k8s.io/v1beta1
	* +apiVersion: kubeadm.k8s.io/v1beta2
	*  kind: ClusterConfiguration
	*  apiServer:
	*    certSANs: ["127.0.0.1", "localhost", "172.17.0.6"]
	* @@ -31,7 +31,7 @@
	*    extraArgs:
	*      leader-elect: "false"
	*  certificatesDir: /var/lib/minikube/certs
	* -clusterName: kubernetes-upgrade-20210310201637-6496
	* +clusterName: mk
	*  controlPlaneEndpoint: control-plane.minikube.internal:8443
	*  dns:
	*    type: CoreDNS
	* @@ -39,8 +39,8 @@
	*    local:
	*      dataDir: /var/lib/minikube/etcd
	*      extraArgs:
	* -      listen-metrics-urls: http://127.0.0.1:2381,http://172.17.0.6:2381
	* -kubernetesVersion: v1.14.0
	* +      proxy-refresh-interval: "70000"
	* +kubernetesVersion: v1.20.5-rc.0
	*  networking:
	*    dnsDomain: cluster.local
	*    podSubnet: "10.244.0.0/16"
	* 
	* -- /stdout --
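The "needs reconfigure: configs differ" decision above is driven by the exit status of `diff` comparing the on-disk `kubeadm.yaml` with the freshly rendered `.new` file. A minimal sketch of that check (file contents here are a reduced stand-in for the real diff shown in the log):

```shell
tmp=$(mktemp -d)
# on-disk config vs. freshly rendered config
printf 'apiVersion: kubeadm.k8s.io/v1beta1\n' > "$tmp/kubeadm.yaml"
printf 'apiVersion: kubeadm.k8s.io/v1beta2\n' > "$tmp/kubeadm.yaml.new"
# diff exits non-zero when the files differ, which triggers the restart path
if ! diff -u "$tmp/kubeadm.yaml" "$tmp/kubeadm.yaml.new" > /dev/null; then
  echo "needs reconfigure: configs differ"
fi
```

When the files differ, the log then shows the `.new` file being copied over the old one (`cp .../kubeadm.yaml.new .../kubeadm.yaml`) before the `kubeadm init phase` steps re-run.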
	* I0310 20:43:43.852771    9740 kubeadm.go:1042] stopping kube-system containers ...
	* I0310 20:43:43.867059    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format=
	* I0310 20:43:44.708564    9740 docker.go:261] Stopping containers: [66d44e1d7560 cc5bf7d7971c adb946d74113 a58acfb13228 db5e367b5040 d04b7875ec72 9c19e8c632c1 9181b2a8d2e7 e5fa65579d8b]
	* I0310 20:43:44.727055    9740 ssh_runner.go:149] Run: docker stop 66d44e1d7560 cc5bf7d7971c adb946d74113 a58acfb13228 db5e367b5040 d04b7875ec72 9c19e8c632c1 9181b2a8d2e7 e5fa65579d8b
	* I0310 20:43:45.531827    9740 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	* I0310 20:43:45.663853    9740 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	* I0310 20:43:45.745733    9740 kubeadm.go:153] found existing configuration files:
	* -rw------- 1 root root 5759 Mar 10 20:33 /etc/kubernetes/admin.conf
	* -rw------- 1 root root 5791 Mar 10 20:33 /etc/kubernetes/controller-manager.conf
	* -rw------- 1 root root 5955 Mar 10 20:33 /etc/kubernetes/kubelet.conf
	* -rw------- 1 root root 5739 Mar 10 20:33 /etc/kubernetes/scheduler.conf
	* 
	* I0310 20:43:45.755747    9740 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	* I0310 20:43:45.842396    9740 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	* I0310 20:43:45.935813    9740 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	* I0310 20:43:46.074999    9740 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	* I0310 20:43:46.174689    9740 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	* I0310 20:43:46.255242    9740 kubeadm.go:670] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	* I0310 20:43:46.255242    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	* I0310 20:43:49.485587    9740 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml": (3.230349s)
	* I0310 20:43:49.485587    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	* I0310 20:44:01.034678    9740 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (11.5491058s)
	* I0310 20:44:01.034678    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	* I0310 20:43:57.799249    6776 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552: (31.7765479s)
	* I0310 20:43:57.799249    6776 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 from cache
	* I0310 20:43:57.800007    6776 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748
	* I0310 20:43:57.823849    6776 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748
	* I0310 20:44:05.800564    9740 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml": (4.7658926s)
	* I0310 20:44:05.800564    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	* I0310 20:44:04.083338    7164 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800: (31.326944s)
	* I0310 20:44:04.083338    7164 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 from cache
	* I0310 20:44:04.083338    7164 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440
	* I0310 20:44:04.093859    7164 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440
	* I0310 20:44:11.761956    9740 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml": (5.9608953s)
	* I0310 20:44:11.761956    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	* I0310 20:44:19.173690    9740 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml": (7.4117438s)
	* I0310 20:44:19.174054    9740 api_server.go:48] waiting for apiserver process to appear ...
	* I0310 20:44:19.183695    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:44:20.194468    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:44:21.198362    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
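The tail of the log is minikube polling `pgrep -xnf kube-apiserver.*minikube.*` roughly once per second until the apiserver process appears. A generic version of that wait loop (the function name, pattern, and retry count are illustrative):

```shell
# poll pgrep until a process matching the pattern shows up, or give up
wait_for_proc() {
  pattern=$1
  tries=$2
  i=0
  while [ "$i" -lt "$tries" ]; do
    pgrep -f "$pattern" >/dev/null 2>&1 && return 0
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# "." matches any command line, so this succeeds on the first poll
wait_for_proc "." 3 && echo "process found"
```

The real loop differs in that minikube runs pgrep over SSH inside the node container and pairs the wait with an overall timeout rather than a fixed retry count.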

                                                
                                                
-- /stdout --
** stderr ** 
	E0310 20:43:48.467946    9588 out.go:340] unable to execute * 2021-03-10 20:42:26.838852 W | etcdserver: request "header:<ID:12691275819406794057 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:520 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:3866 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >>" with result "size:16" took too long (520.0428ms) to execute
	: html/template:* 2021-03-10 20:42:26.838852 W | etcdserver: request "header:<ID:12691275819406794057 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:520 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:3866 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >>" with result "size:16" took too long (520.0428ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 20:43:48.516192    9588 out.go:340] unable to execute * 2021-03-10 20:42:51.140045 W | etcdserver: request "header:<ID:12691275819406794145 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-74ff55c5b-mrl7v.166b1581c9e59840\" mod_revision:637 > success:<request_put:<key:\"/registry/events/kube-system/coredns-74ff55c5b-mrl7v.166b1581c9e59840\" value_size:806 lease:3467903782552018226 >> failure:<request_range:<key:\"/registry/events/kube-system/coredns-74ff55c5b-mrl7v.166b1581c9e59840\" > >>" with result "size:16" took too long (126.0661ms) to execute
	: html/template:* 2021-03-10 20:42:51.140045 W | etcdserver: request "header:<ID:12691275819406794145 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-74ff55c5b-mrl7v.166b1581c9e59840\" mod_revision:637 > success:<request_put:<key:\"/registry/events/kube-system/coredns-74ff55c5b-mrl7v.166b1581c9e59840\" value_size:806 lease:3467903782552018226 >> failure:<request_range:<key:\"/registry/events/kube-system/coredns-74ff55c5b-mrl7v.166b1581c9e59840\" > >>" with result "size:16" took too long (126.0661ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 20:43:48.530933    9588 out.go:340] unable to execute * 2021-03-10 20:43:05.089312 W | etcdserver: request "header:<ID:12691275819406794212 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-scheduler-nospam-20210310201637-6496.166b15dc332a1240\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-scheduler-nospam-20210310201637-6496.166b15dc332a1240\" value_size:778 lease:3467903782552018226 >> failure:<>>" with result "size:16" took too long (165.937ms) to execute
	: html/template:* 2021-03-10 20:43:05.089312 W | etcdserver: request "header:<ID:12691275819406794212 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-scheduler-nospam-20210310201637-6496.166b15dc332a1240\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-scheduler-nospam-20210310201637-6496.166b15dc332a1240\" value_size:778 lease:3467903782552018226 >> failure:<>>" with result "size:16" took too long (165.937ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 20:43:48.538953    9588 out.go:340] unable to execute * 2021-03-10 20:43:05.415047 W | etcdserver: request "header:<ID:12691275819406794213 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/nospam-20210310201637-6496\" mod_revision:645 > success:<request_put:<key:\"/registry/leases/kube-node-lease/nospam-20210310201637-6496\" value_size:590 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/nospam-20210310201637-6496\" > >>" with result "size:16" took too long (110.3705ms) to execute
	: html/template:* 2021-03-10 20:43:05.415047 W | etcdserver: request "header:<ID:12691275819406794213 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/nospam-20210310201637-6496\" mod_revision:645 > success:<request_put:<key:\"/registry/leases/kube-node-lease/nospam-20210310201637-6496\" value_size:590 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/nospam-20210310201637-6496\" > >>" with result "size:16" took too long (110.3705ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 20:43:48.546638    9588 out.go:340] unable to execute * 2021-03-10 20:43:05.657557 W | etcdserver: request "header:<ID:12691275819406794214 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/nospam-20210310201637-6496\" mod_revision:657 > success:<request_put:<key:\"/registry/minions/nospam-20210310201637-6496\" value_size:5688 >> failure:<request_range:<key:\"/registry/minions/nospam-20210310201637-6496\" > >>" with result "size:16" took too long (219.5276ms) to execute
	: html/template:* 2021-03-10 20:43:05.657557 W | etcdserver: request "header:<ID:12691275819406794214 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/nospam-20210310201637-6496\" mod_revision:657 > success:<request_put:<key:\"/registry/minions/nospam-20210310201637-6496\" value_size:5688 >> failure:<request_range:<key:\"/registry/minions/nospam-20210310201637-6496\" > >>" with result "size:16" took too long (219.5276ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 20:43:48.571947    9588 out.go:340] unable to execute * 2021-03-10 20:43:06.079449 W | etcdserver: request "header:<ID:12691275819406794218 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-controller-manager-nospam-20210310201637-6496.166b15d83f67f210\" mod_revision:652 > success:<request_put:<key:\"/registry/events/kube-system/kube-controller-manager-nospam-20210310201637-6496.166b15d83f67f210\" value_size:799 lease:3467903782552018226 >> failure:<request_range:<key:\"/registry/events/kube-system/kube-controller-manager-nospam-20210310201637-6496.166b15d83f67f210\" > >>" with result "size:16" took too long (108.6118ms) to execute
	: html/template:* 2021-03-10 20:43:06.079449 W | etcdserver: request "header:<ID:12691275819406794218 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-controller-manager-nospam-20210310201637-6496.166b15d83f67f210\" mod_revision:652 > success:<request_put:<key:\"/registry/events/kube-system/kube-controller-manager-nospam-20210310201637-6496.166b15d83f67f210\" value_size:799 lease:3467903782552018226 >> failure:<request_range:<key:\"/registry/events/kube-system/kube-controller-manager-nospam-20210310201637-6496.166b15d83f67f210\" > >>" with result "size:16" took too long (108.6118ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 20:44:21.172984    9588 out.go:335] unable to parse "* I0310 20:40:12.250130    9740 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 20:40:12.250130    9740 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 20:44:21.180566    9588 out.go:335] unable to parse "* I0310 20:40:13.269960    9740 cli_runner.go:168] Completed: docker system info --format \"{{json .}}\": (1.0193702s)\n": template: * I0310 20:40:13.269960    9740 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0193702s)
	:1: function "json" not defined - returning raw string.
	E0310 20:44:21.216883    9588 out.go:335] unable to parse "* I0310 20:40:15.341033    9740 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 20:40:15.341033    9740 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 20:44:21.230643    9588 out.go:335] unable to parse "* I0310 20:40:16.550107    9740 cli_runner.go:168] Completed: docker system info --format \"{{json .}}\": (1.209076s)\n": template: * I0310 20:40:16.550107    9740 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.209076s)
	:1: function "json" not defined - returning raw string.
	E0310 20:44:21.360094    9588 out.go:340] unable to execute * I0310 20:40:23.291722    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	: template: * I0310 20:40:23.291722    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	:1:96: executing "* I0310 20:40:23.291722    9740 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" kubernetes-upgrade-20210310201637-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:44:21.371455    9588 out.go:335] unable to parse "* I0310 20:40:23.889382    9740 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55133 <nil> <nil>}\n": template: * I0310 20:40:23.889382    9740 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55133 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 20:44:21.434479    9588 out.go:340] unable to execute * I0310 20:40:31.285524    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	: template: * I0310 20:40:31.285524    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	:1:96: executing "* I0310 20:40:31.285524    9740 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" kubernetes-upgrade-20210310201637-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:44:21.445870    9588 out.go:335] unable to parse "* I0310 20:40:31.950149    9740 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55133 <nil> <nil>}\n": template: * I0310 20:40:31.950149    9740 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55133 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 20:44:21.570963    9588 out.go:340] unable to execute * I0310 20:40:34.246644    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	: template: * I0310 20:40:34.246644    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	:1:96: executing "* I0310 20:40:34.246644    9740 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" kubernetes-upgrade-20210310201637-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:44:21.640607    9588 out.go:340] unable to execute * I0310 20:40:37.619965    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	: template: * I0310 20:40:37.619965    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	:1:96: executing "* I0310 20:40:37.619965    9740 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" kubernetes-upgrade-20210310201637-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:44:21.653847    9588 out.go:335] unable to parse "* I0310 20:40:38.265920    9740 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55133 <nil> <nil>}\n": template: * I0310 20:40:38.265920    9740 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55133 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 20:44:21.685893    9588 out.go:340] unable to execute * I0310 20:40:39.623732    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	: template: * I0310 20:40:39.623732    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	:1:96: executing "* I0310 20:40:39.623732    9740 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" kubernetes-upgrade-20210310201637-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:44:21.700619    9588 out.go:335] unable to parse "* I0310 20:40:40.304774    9740 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55133 <nil> <nil>}\n": template: * I0310 20:40:40.304774    9740 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55133 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 20:44:22.119646    9588 out.go:340] unable to execute * I0310 20:40:41.945466    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	: template: * I0310 20:40:41.945466    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	:1:96: executing "* I0310 20:40:41.945466    9740 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" kubernetes-upgrade-20210310201637-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:44:22.136244    9588 out.go:335] unable to parse "* I0310 20:40:42.589871    9740 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55133 <nil> <nil>}\n": template: * I0310 20:40:42.589871    9740 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55133 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 20:44:22.172205    9588 out.go:340] unable to execute * I0310 20:40:43.948150    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	: template: * I0310 20:40:43.948150    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	:1:96: executing "* I0310 20:40:43.948150    9740 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" kubernetes-upgrade-20210310201637-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:44:22.264723    9588 out.go:340] unable to execute * I0310 20:40:46.209965    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	: template: * I0310 20:40:46.209965    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	:1:96: executing "* I0310 20:40:46.209965    9740 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" kubernetes-upgrade-20210310201637-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:44:22.291828    9588 out.go:340] unable to execute * I0310 20:40:48.009785    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	: template: * I0310 20:40:48.009785    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	:1:96: executing "* I0310 20:40:48.009785    9740 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" kubernetes-upgrade-20210310201637-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:44:22.309099    9588 out.go:340] unable to execute * I0310 20:40:48.045435    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	: template: * I0310 20:40:48.045435    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	:1:96: executing "* I0310 20:40:48.045435    9740 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" kubernetes-upgrade-20210310201637-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:44:22.409085    9588 out.go:340] unable to execute * I0310 20:40:55.355581    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	: template: * I0310 20:40:55.355581    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	:1:96: executing "* I0310 20:40:55.355581    9740 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"8443/tcp\") 0).HostPort}}'\" kubernetes-upgrade-20210310201637-6496\n" at <index .NetworkSettings.Ports "8443/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:44:25.281314    9588 out.go:340] unable to execute * I0310 20:43:43.042892    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	: template: * I0310 20:43:43.042892    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	:1:96: executing "* I0310 20:43:43.042892    9740 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"8443/tcp\") 0).HostPort}}'\" kubernetes-upgrade-20210310201637-6496\n" at <index .NetworkSettings.Ports "8443/tcp">: error calling index: index of untyped nil - returning raw string.

                                                
                                                
** /stderr **
helpers_test.go:250: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p nospam-20210310201637-6496 -n nospam-20210310201637-6496
helpers_test.go:250: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p nospam-20210310201637-6496 -n nospam-20210310201637-6496: (8.2133907s)
helpers_test.go:257: (dbg) Run:  kubectl --context nospam-20210310201637-6496 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:257: (dbg) Done: kubectl --context nospam-20210310201637-6496 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: (2.369165s)
helpers_test.go:263: non-running pods: 
helpers_test.go:265: ======> post-mortem[TestErrorSpam]: describe non-running pods <======
helpers_test.go:268: (dbg) Run:  kubectl --context nospam-20210310201637-6496 describe pod 
helpers_test.go:268: (dbg) Non-zero exit: kubectl --context nospam-20210310201637-6496 describe pod : exit status 1 (239.0429ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:270: kubectl --context nospam-20210310201637-6496 describe pod : exit status 1
helpers_test.go:171: Cleaning up "nospam-20210310201637-6496" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe delete -p nospam-20210310201637-6496
helpers_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe delete -p nospam-20210310201637-6496: (22.1653228s)
--- FAIL: TestErrorSpam (1702.16s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (19.7s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:397: (dbg) Run:  out\kubectl --context functional-20210310191609-6496 get pods
functional_test.go:397: (dbg) Non-zero exit: out\kubectl --context functional-20210310191609-6496 get pods: exec: "out\\kubectl": file does not exist (0s)
functional_test.go:399: failed to run kubectl directly. args "out\\kubectl --context functional-20210310191609-6496 get pods": exec: "out\\kubectl": file does not exist
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:226: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:227: (dbg) Run:  docker inspect functional-20210310191609-6496
helpers_test.go:231: (dbg) docker inspect functional-20210310191609-6496:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f0e67f0e0197d1e3ab1db5142ae7f8b4a9b85bcae654c8d5257d095025940939",
	        "Created": "2021-03-10T19:16:21.3827053Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 19629,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-03-10T19:16:22.633721Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a776c544501ab7f8d55c0f9d8df39bc284df5e744ef1ab4fa59bbd753c98d5f6",
	        "ResolvConfPath": "/var/lib/docker/containers/f0e67f0e0197d1e3ab1db5142ae7f8b4a9b85bcae654c8d5257d095025940939/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f0e67f0e0197d1e3ab1db5142ae7f8b4a9b85bcae654c8d5257d095025940939/hostname",
	        "HostsPath": "/var/lib/docker/containers/f0e67f0e0197d1e3ab1db5142ae7f8b4a9b85bcae654c8d5257d095025940939/hosts",
	        "LogPath": "/var/lib/docker/containers/f0e67f0e0197d1e3ab1db5142ae7f8b4a9b85bcae654c8d5257d095025940939/f0e67f0e0197d1e3ab1db5142ae7f8b4a9b85bcae654c8d5257d095025940939-json.log",
	        "Name": "/functional-20210310191609-6496",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-20210310191609-6496:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-20210310191609-6496",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a64bc965504dadb26a6b09e565a5346138fc3887af6c9d3d4f52f649a4e3dbbd-init/diff:/var/lib/docker/overlay2/e5312d15a8a915e93a04215cc746ab0f3ee777399eec34d39c62e1159ded6475/diff:/var/lib/docker/overlay2/7a0a882052451678339ecb9d7440ec6d5af2189761df37cbde268f01c77df460/diff:/var/lib/docker/overlay2/746bde091f748e661ba6c95c88447941058b3ed556ae8093cbbb4e946b8cc8fc/diff:/var/lib/docker/overlay2/fc816087ea38e2594891ac87d2dada91401ed5005ffde006568049c9be7a9cb3/diff:/var/lib/docker/overlay2/c938fe5d14accfe7fb1cfd038f5f0c36e7c4e36df3f270be6928a759dfc0318c/diff:/var/lib/docker/overlay2/79f1d61f3af3e525b3f9a25c7bdaddec8275b7d0abab23c045e084119ae755dc/diff:/var/lib/docker/overlay2/90cf1530750ebebdcb04a136dc1eff0ae9c5f7d0f826d2942e3bc67cc0d42206/diff:/var/lib/docker/overlay2/c4edb0a62bb52174a4a17530aa58139aaca1b2f67bef36b9ac59af93c93e2c94/diff:/var/lib/docker/overlay2/684c18bb05910b09352e8708238b2fa6512de91e323e59bd78eacb12954baeb1/diff:/var/lib/docker/overlay2/2e3b94
4d36fea0e106f5f8885b981d040914c224a656b6480e4ccf00c624527c/diff:/var/lib/docker/overlay2/87489103e2d5dffa2fc32cae802418fe57adcbe87d86e2828e51be8620ef72d6/diff:/var/lib/docker/overlay2/1ceb557df677600f55a42e739f85c2aaa6ce2a1311fb93d5cc655d3e5c25ad62/diff:/var/lib/docker/overlay2/a49e38adf04466e822fa1b2fef7a5de41bb43b706f64d6d440f7e9f95f86634d/diff:/var/lib/docker/overlay2/9dc51530b3628075c33d53a96d8db831d206f65190ca62a4730d525c9d2433bc/diff:/var/lib/docker/overlay2/53f43d443cc953bcbb3b9fe305be45d40de2233dd6a1e94a6e0120c143df01b9/diff:/var/lib/docker/overlay2/5449e801ebacbe613538db4de658f2419f742a0327f7c0f5079ee525f64c2f45/diff:/var/lib/docker/overlay2/ea06a62429770eade2e9df8b04495201586fe6a867b2d3f827db06031f237171/diff:/var/lib/docker/overlay2/d6c5e6e0bb04112b08a45e5a42f364eaa6a1d4d0384b8833c6f268a32b51f9fd/diff:/var/lib/docker/overlay2/6ccb699e5aa7954f17536e35273361a64a06f22e3951b5adad1acd3645302d27/diff:/var/lib/docker/overlay2/004fc7885cd3ddf4f532a6047a393b87947996f738f36ee3e048130b82676264/diff:/var/lib/d
ocker/overlay2/28ff0102771d8fb3c2957c482cc5dde1a6d041144f1be78d385fd4a305b89048/diff:/var/lib/docker/overlay2/7cc8b00a8717390d3cf217bc23ebf0a54386c4dda751f8efe67a113c5f724000/diff:/var/lib/docker/overlay2/4f20f5888dc26fc4b177fad4ffbb2c9e5094bbe36dffc15530b13ee98b5ee0de/diff:/var/lib/docker/overlay2/5070ba0e8a3df0c0cb4c34bb528c4e1aa4cf13d689f2bbbb0a4033d021c3be55/diff:/var/lib/docker/overlay2/85fb29f9d3603bf7f2eb144a096a9fa432e57ecf575b0c26cc8e06a79e791202/diff:/var/lib/docker/overlay2/27786674974d44dc21a18719c8c334da44a78f825923f5e86233f5acb0fb9a6b/diff:/var/lib/docker/overlay2/6161fd89f27732c46f547c0e38ff9f16ab0dded81cef66938df3935f21afad40/diff:/var/lib/docker/overlay2/222f85b8439a41d62b9939a4060303543930a89e81bc2fbb3f5211b3bcc73501/diff:/var/lib/docker/overlay2/0ab769dc143c949b9367c48721351c8571d87199847b23acedce0134a8cf4d0d/diff:/var/lib/docker/overlay2/a3d1b5f3c1ccf55415e08e306d69739f2d6a4346a4b20ff9273ac1417c146a72/diff:/var/lib/docker/overlay2/a6111408a4deb25c6259bdabf49b0b77a4ae38a1c9c76c9dd238ab00e43
77edd/diff:/var/lib/docker/overlay2/beb22abaee6a1cc2ce6935aa31a8ea6aa14ba428d35b5c2423d1d9e4b5b1e88e/diff:/var/lib/docker/overlay2/0c228b1d0cd4cc1d3df10a7397af2a91846bfbad745d623822dc200ff7ddd4f7/diff:/var/lib/docker/overlay2/3c7f265116e8b14ac4c171583da8e68ad42c63e9faa735c10d5b01c2889c03d8/diff:/var/lib/docker/overlay2/85c9b77072d5575bcf4bc334d0e85cccec247e2ff21bc8a181ee7c1411481a92/diff:/var/lib/docker/overlay2/1b4797f63f1bf0acad8cd18715c737239cbce844810baf1378b65224c01957ff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a64bc965504dadb26a6b09e565a5346138fc3887af6c9d3d4f52f649a4e3dbbd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a64bc965504dadb26a6b09e565a5346138fc3887af6c9d3d4f52f649a4e3dbbd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a64bc965504dadb26a6b09e565a5346138fc3887af6c9d3d4f52f649a4e3dbbd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-20210310191609-6496",
	                "Source": "/var/lib/docker/volumes/functional-20210310191609-6496/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-20210310191609-6496",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-20210310191609-6496",
	                "name.minikube.sigs.k8s.io": "functional-20210310191609-6496",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "654c08b325fa53207f6e230c568d3463f359afca0d7983f08a2cc1a320ecda5f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55009"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55008"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55005"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55007"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55006"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/654c08b325fa",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-20210310191609-6496": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.97"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f0e67f0e0197",
	                        "functional-20210310191609-6496"
	                    ],
	                    "NetworkID": "2f4279ec0a83c0de1765b109cd172864e996066e0bc6a9bf6eb83db56ffdda48",
	                    "EndpointID": "3933dabd8ffc19bc09f3b793d526cbcd953070264dde135241cd56427f7ca44b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.97",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:61",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20210310191609-6496 -n functional-20210310191609-6496
helpers_test.go:235: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20210310191609-6496 -n functional-20210310191609-6496: (2.9597861s)
helpers_test.go:240: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:241: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210310191609-6496 logs -n 25
helpers_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210310191609-6496 logs -n 25: (11.6279334s)
helpers_test.go:248: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	* ==> Docker <==
	* -- Logs begin at Wed 2021-03-10 19:16:23 UTC, end at Wed 2021-03-10 19:21:27 UTC. --
	* Mar 10 19:17:33 functional-20210310191609-6496 systemd[1]: docker.service: Succeeded.
	* Mar 10 19:17:33 functional-20210310191609-6496 systemd[1]: Stopped Docker Application Container Engine.
	* Mar 10 19:17:33 functional-20210310191609-6496 systemd[1]: Starting Docker Application Container Engine...
	* Mar 10 19:17:33 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:17:33.689955900Z" level=info msg="Starting up"
	* Mar 10 19:17:33 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:17:33.695459700Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Mar 10 19:17:33 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:17:33.695634500Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Mar 10 19:17:33 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:17:33.695817400Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	* Mar 10 19:17:33 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:17:33.695973100Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Mar 10 19:17:33 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:17:33.700225100Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Mar 10 19:17:33 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:17:33.700499400Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Mar 10 19:17:33 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:17:33.700544800Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	* Mar 10 19:17:33 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:17:33.700564600Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Mar 10 19:17:38 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:17:38.473299800Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	* Mar 10 19:17:38 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:17:38.498692700Z" level=info msg="Loading containers: start."
	* Mar 10 19:17:38 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:17:38.922719400Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	* Mar 10 19:17:39 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:17:39.106894300Z" level=info msg="Loading containers: done."
	* Mar 10 19:17:39 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:17:39.179094000Z" level=info msg="Docker daemon" commit=46229ca graphdriver(s)=overlay2 version=20.10.3
	* Mar 10 19:17:39 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:17:39.179271600Z" level=info msg="Daemon has completed initialization"
	* Mar 10 19:17:39 functional-20210310191609-6496 systemd[1]: Started Docker Application Container Engine.
	* Mar 10 19:17:39 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:17:39.263627000Z" level=info msg="API listen on [::]:2376"
	* Mar 10 19:17:39 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:17:39.274040100Z" level=info msg="API listen on /var/run/docker.sock"
	* Mar 10 19:18:37 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:18:37.965135000Z" level=error msg="stream copy error: reading from a closed fifo"
	* Mar 10 19:18:37 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:18:37.970784500Z" level=error msg="stream copy error: reading from a closed fifo"
	* Mar 10 19:18:38 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:18:38.655953800Z" level=error msg="86f088818d0b95f45335296d9c94a3142684a2943a9c5bebfb52aa375d964c18 cleanup: failed to delete container from containerd: no such container"
	* Mar 10 19:18:38 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:18:38.656494200Z" level=error msg="Handler for POST /v1.40/containers/86f088818d0b95f45335296d9c94a3142684a2943a9c5bebfb52aa375d964c18/start returned error: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: Running hook #0:: error running hook: exit status 1, stdout: , stderr: time=\"2021-03-10T19:18:37Z\" level=fatal msg=\"no such file or directory\": unknown"
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	* 42fe31721a7a0       85069258b98ac       2 minutes ago       Running             storage-provisioner       0                   612593eab637b
	* 05d191ffdb7d2       bfe3a36ebd252       2 minutes ago       Running             coredns                   0                   a135c3a8b1c4b
	* b384eab6bf968       43154ddb57a83       2 minutes ago       Running             kube-proxy                0                   264af7edaaccf
	* 53e6441ccde57       0369cf4303ffd       3 minutes ago       Running             etcd                      0                   0608fef2accd6
	* 3ef151b16cbf4       a8c2fdb8bf76e       3 minutes ago       Running             kube-apiserver            0                   3746c3619218f
	* 518bc4942e786       ed2c44fbdd78b       3 minutes ago       Running             kube-scheduler            0                   b49f3dbd8888f
	* 929909ec81114       a27166429d98e       3 minutes ago       Running             kube-controller-manager   0                   86bce7009e19b
	* 
	* ==> coredns [05d191ffdb7d] <==
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	* CoreDNS-1.7.0
	* linux/amd64, go1.14.4, f59c03d
	* 
	* ==> describe nodes <==
	* Name:               functional-20210310191609-6496
	* Roles:              control-plane,master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=functional-20210310191609-6496
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=4d52d607da107e3e4541e40b81376520ee87d4c2
	*                     minikube.k8s.io/name=functional-20210310191609-6496
	*                     minikube.k8s.io/updated_at=2021_03_10T19_18_18_0700
	*                     minikube.k8s.io/version=v1.18.1
	*                     node-role.kubernetes.io/control-plane=
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Wed, 10 Mar 2021 19:18:12 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  functional-20210310191609-6496
	*   AcquireTime:     <unset>
	*   RenewTime:       Wed, 10 Mar 2021 19:21:21 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Wed, 10 Mar 2021 19:20:56 +0000   Wed, 10 Mar 2021 19:18:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Wed, 10 Mar 2021 19:20:56 +0000   Wed, 10 Mar 2021 19:18:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Wed, 10 Mar 2021 19:20:56 +0000   Wed, 10 Mar 2021 19:18:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Wed, 10 Mar 2021 19:20:56 +0000   Wed, 10 Mar 2021 19:18:31 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  192.168.49.97
	*   Hostname:    functional-20210310191609-6496
	* Capacity:
	*   cpu:                4
	*   ephemeral-storage:  65792556Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             20481980Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                4
	*   ephemeral-storage:  65792556Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             20481980Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 84fb46bd39d2483a97ab4430ee4a5e3a
	*   System UUID:                998d3515-9968-4eb1-814a-bc80eeeac66f
	*   Boot ID:                    1e43cb90-c73a-415b-9855-33dabbdc5a83
	*   Kernel Version:             4.19.121-linuxkit
	*   OS Image:                   Ubuntu 20.04.1 LTS
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://20.10.3
	*   Kubelet Version:            v1.20.2
	*   Kube-Proxy Version:         v1.20.2
	* PodCIDR:                      10.244.0.0/24
	* PodCIDRs:                     10.244.0.0/24
	* Non-terminated Pods:          (7 in total)
	*   Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	*   kube-system                 coredns-74ff55c5b-62r9g                                   100m (2%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m55s
	*   kube-system                 etcd-functional-20210310191609-6496                       100m (2%)     0 (0%)      100Mi (0%)       0 (0%)         3m5s
	*   kube-system                 kube-apiserver-functional-20210310191609-6496             250m (6%)     0 (0%)      0 (0%)           0 (0%)         3m5s
	*   kube-system                 kube-controller-manager-functional-20210310191609-6496    200m (5%)     0 (0%)      0 (0%)           0 (0%)         3m5s
	*   kube-system                 kube-proxy-l9bb9                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	*   kube-system                 kube-scheduler-functional-20210310191609-6496             100m (2%)     0 (0%)      0 (0%)           0 (0%)         3m5s
	*   kube-system                 storage-provisioner                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests    Limits
	*   --------           --------    ------
	*   cpu                750m (18%)  0 (0%)
	*   memory             170Mi (0%)  170Mi (0%)
	*   ephemeral-storage  100Mi (0%)  0 (0%)
	*   hugepages-1Gi      0 (0%)      0 (0%)
	*   hugepages-2Mi      0 (0%)      0 (0%)
	* Events:
	*   Type    Reason                   Age                    From        Message
	*   ----    ------                   ----                   ----        -------
	*   Normal  NodeHasSufficientMemory  3m27s (x7 over 3m28s)  kubelet     Node functional-20210310191609-6496 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    3m27s (x7 over 3m28s)  kubelet     Node functional-20210310191609-6496 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     3m27s (x6 over 3m28s)  kubelet     Node functional-20210310191609-6496 status is now: NodeHasSufficientPID
	*   Normal  Starting                 3m7s                   kubelet     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  3m7s                   kubelet     Node functional-20210310191609-6496 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    3m7s                   kubelet     Node functional-20210310191609-6496 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     3m7s                   kubelet     Node functional-20210310191609-6496 status is now: NodeHasSufficientPID
	*   Normal  NodeNotReady             3m6s                   kubelet     Node functional-20210310191609-6496 status is now: NodeNotReady
	*   Normal  NodeAllocatableEnforced  3m6s                   kubelet     Updated Node Allocatable limit across pods
	*   Normal  NodeReady                2m56s                  kubelet     Node functional-20210310191609-6496 status is now: NodeReady
	*   Normal  Starting                 2m48s                  kube-proxy  Starting kube-proxy.
	* 
	* ==> dmesg <==
	* [  +0.000001]  hrtimer_wakeup+0x1e/0x21
	* [  +0.000006]  __hrtimer_run_queues+0x117/0x1c4
	* [  +0.000004]  ? ktime_get_update_offsets_now+0x36/0x95
	* [  +0.000002]  hrtimer_interrupt+0x92/0x165
	* [  +0.000004]  hv_stimer0_isr+0x20/0x2d
	* [  +0.000008]  hv_stimer0_vector_handler+0x3b/0x57
	* [  +0.000010]  hv_stimer0_callback_vector+0xf/0x20
	* [  +0.000001]  </IRQ>
	* [  +0.000002] RIP: 0010:native_safe_halt+0x7/0x8
	* [  +0.000002] Code: 60 02 df f0 83 44 24 fc 00 48 8b 00 a8 08 74 0b 65 81 25 dd ce 6f 71 ff ff ff 7f c3 e8 ce e6 72 ff f4 c3 e8 c7 e6 72 ff fb f4 <c3> 0f 1f 44 00 00 53 e8 69 0e 82 ff 65 8b 35 83 64 6f 71 31 ff e8
	* [  +0.000001] RSP: 0018:ffffffff8f203eb0 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff12
	* [  +0.000002] RAX: ffffffff8e918b30 RBX: 0000000000000000 RCX: ffffffff8f253150
	* [  +0.000001] RDX: 000000000012167e RSI: 0000000000000000 RDI: 0000000000000001
	* [  +0.000001] RBP: 0000000000000000 R08: 00000066a1710248 R09: 0000006be2541d3e
	* [  +0.000001] R10: ffff9130ad802288 R11: 0000000000000000 R12: 0000000000000000
	* [  +0.000001] R13: ffffffff8f215780 R14: 00000000f6d76244 R15: 0000000000000000
	* [  +0.000002]  ? __sched_text_end+0x1/0x1
	* [  +0.000011]  default_idle+0x1b/0x2c
	* [  +0.000001]  do_idle+0xe5/0x216
	* [  +0.000003]  cpu_startup_entry+0x6f/0x71
	* [  +0.000003]  start_kernel+0x4f6/0x514
	* [  +0.000006]  secondary_startup_64+0xa4/0xb0
	* [  +0.000006] ---[ end trace 8aa9ce4b885e8e86 ]---
	* [ +25.977799] hrtimer: interrupt took 3356400 ns
	* [Mar10 19:08] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	* 
	* ==> etcd [53e6441ccde5] <==
	* 2021-03-10 19:18:13.547255 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:18:31.792589 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:18:33.472619 W | etcdserver: request "header:<ID:10490704450423578322 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy-l9bb9.166b114ce3ebff9c\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy-l9bb9.166b114ce3ebff9c\" value_size:678 lease:1267332413568802112 >> failure:<>>" with result "size:16" took too long (104.07ms) to execute
	* 2021-03-10 19:18:33.475253 W | etcdserver: read-only range request "key:\"/registry/endpointslices/kube-system/kube-dns-6llt6\" " with result "range_response_count:1 size:1018" took too long (123.9394ms) to execute
	* 2021-03-10 19:18:33.475676 W | etcdserver: read-only range request "key:\"/registry/replicasets/kube-system/coredns-74ff55c5b\" " with result "range_response_count:1 size:3716" took too long (122.0595ms) to execute
	* 2021-03-10 19:18:34.357536 W | etcdserver: request "header:<ID:10490704450423578352 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-74ff55c5b\" mod_revision:425 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-74ff55c5b\" value_size:3642 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-74ff55c5b\" > >>" with result "size:16" took too long (109.3333ms) to execute
	* 2021-03-10 19:18:35.003167 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:18:40.506833 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:8 size:40987" took too long (104.5598ms) to execute
	* 2021-03-10 19:18:45.035754 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:18:55.038680 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:19:04.993917 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:19:14.993996 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:19:24.994220 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:19:34.995374 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:19:44.998685 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:19:54.996039 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:20:04.997415 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:20:14.997241 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:20:24.995916 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:20:35.003865 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:20:44.993181 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:20:54.990512 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:21:05.075092 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:21:14.997501 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:21:24.996927 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 
	* ==> kernel <==
	*  19:21:28 up 21 min,  0 users,  load average: 4.96, 4.75, 3.97
	* Linux functional-20210310191609-6496 4.19.121-linuxkit #1 SMP Tue Dec 1 17:50:32 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [3ef151b16cbf] <==
	* I0310 19:18:15.305126       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	* W0310 19:18:15.533202       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.97]
	* I0310 19:18:15.535719       1 controller.go:609] quota admission added evaluator for: endpoints
	* I0310 19:18:15.546621       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	* I0310 19:18:16.455968       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	* I0310 19:18:18.151781       1 controller.go:609] quota admission added evaluator for: deployments.apps
	* I0310 19:18:18.469122       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	* I0310 19:18:20.568395       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	* I0310 19:18:32.736236       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	* I0310 19:18:32.765499       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	* I0310 19:18:49.935798       1 client.go:360] parsed scheme: "passthrough"
	* I0310 19:18:49.935887       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 19:18:49.935910       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* I0310 19:19:20.721902       1 client.go:360] parsed scheme: "passthrough"
	* I0310 19:19:20.721968       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 19:19:20.722020       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* I0310 19:19:58.866927       1 client.go:360] parsed scheme: "passthrough"
	* I0310 19:19:58.867092       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 19:19:58.867117       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* I0310 19:20:31.319212       1 client.go:360] parsed scheme: "passthrough"
	* I0310 19:20:31.319362       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 19:20:31.319396       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* I0310 19:21:07.942122       1 client.go:360] parsed scheme: "passthrough"
	* I0310 19:21:07.942532       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 19:21:07.942555       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* 
	* ==> kube-controller-manager [929909ec8111] <==
	* I0310 19:18:32.766472       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	* I0310 19:18:32.766479       1 shared_informer.go:247] Caches are synced for cidrallocator 
	* I0310 19:18:32.767492       1 shared_informer.go:247] Caches are synced for persistent volume 
	* I0310 19:18:32.769112       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	* I0310 19:18:32.772493       1 shared_informer.go:247] Caches are synced for TTL 
	* I0310 19:18:32.774480       1 shared_informer.go:247] Caches are synced for resource quota 
	* I0310 19:18:32.835513       1 shared_informer.go:247] Caches are synced for resource quota 
	* I0310 19:18:32.847183       1 shared_informer.go:247] Caches are synced for taint 
	* I0310 19:18:32.847487       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	* W0310 19:18:32.847583       1 node_lifecycle_controller.go:1044] Missing timestamp for Node functional-20210310191609-6496. Assuming now as a timestamp.
	* I0310 19:18:32.847644       1 node_lifecycle_controller.go:1245] Controller detected that zone  is now in state Normal.
	* I0310 19:18:32.847830       1 taint_manager.go:187] Starting NoExecuteTaintManager
	* I0310 19:18:32.848082       1 event.go:291] "Event occurred" object="functional-20210310191609-6496" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-20210310191609-6496 event: Registered Node functional-20210310191609-6496 in Controller"
	* I0310 19:18:32.862696       1 range_allocator.go:373] Set node functional-20210310191609-6496 PodCIDR to [10.244.0.0/24]
	* I0310 19:18:32.870951       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-4cfgw"
	* E0310 19:18:32.946040       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	* E0310 19:18:32.959101       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	* I0310 19:18:32.960070       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-62r9g"
	* I0310 19:18:33.177068       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	* I0310 19:18:33.180090       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-l9bb9"
	* I0310 19:18:33.277546       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I0310 19:18:33.337478       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I0310 19:18:33.337523       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* I0310 19:18:33.879768       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	* I0310 19:18:34.089700       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-4cfgw"
	* 
	* ==> kube-proxy [b384eab6bf96] <==
	* I0310 19:18:39.379890       1 node.go:172] Successfully retrieved node IP: 192.168.49.97
	* I0310 19:18:39.380392       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.49.97), assume IPv4 operation
	* W0310 19:18:39.563908       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	* I0310 19:18:39.564100       1 server_others.go:185] Using iptables Proxier.
	* I0310 19:18:39.564841       1 server.go:650] Version: v1.20.2
	* I0310 19:18:39.565961       1 conntrack.go:52] Setting nf_conntrack_max to 131072
	* I0310 19:18:39.566749       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I0310 19:18:39.566821       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I0310 19:18:39.574907       1 config.go:315] Starting service config controller
	* I0310 19:18:39.574944       1 shared_informer.go:240] Waiting for caches to sync for service config
	* I0310 19:18:39.586846       1 config.go:224] Starting endpoint slice config controller
	* I0310 19:18:39.586873       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	* I0310 19:18:39.587820       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	* I0310 19:18:39.684897       1 shared_informer.go:247] Caches are synced for service config 
	* 
	* ==> kube-scheduler [518bc4942e78] <==
	* I0310 19:18:13.084417       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	* E0310 19:18:13.156374       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E0310 19:18:13.157261       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E0310 19:18:13.160805       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E0310 19:18:13.163404       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E0310 19:18:13.163705       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E0310 19:18:13.164147       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E0310 19:18:13.164321       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E0310 19:18:13.164647       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E0310 19:18:13.165218       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E0310 19:18:13.166538       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E0310 19:18:13.173926       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E0310 19:18:13.178619       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E0310 19:18:14.001834       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E0310 19:18:14.138633       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E0310 19:18:14.152246       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E0310 19:18:14.167336       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E0310 19:18:14.192822       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E0310 19:18:14.249864       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E0310 19:18:14.335365       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E0310 19:18:14.391849       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E0310 19:18:14.398416       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E0310 19:18:14.576190       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E0310 19:18:14.680176       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* I0310 19:18:16.685684       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-03-10 19:16:23 UTC, end at Wed 2021-03-10 19:21:29 UTC. --
	* Mar 10 19:18:33 functional-20210310191609-6496 kubelet[2970]: I0310 19:18:33.458644    2970 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-jgzhw" (UniqueName: "kubernetes.io/secret/8158730c-c5c3-4b01-93d9-ebc43ef2189c-kube-proxy-token-jgzhw") pod "kube-proxy-l9bb9" (UID: "8158730c-c5c3-4b01-93d9-ebc43ef2189c")
	* Mar 10 19:18:33 functional-20210310191609-6496 kubelet[2970]: I0310 19:18:33.458684    2970 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-hq2wh" (UniqueName: "kubernetes.io/secret/022268ac-67b5-4170-a85a-465abd0c06b3-coredns-token-hq2wh") pod "coredns-74ff55c5b-62r9g" (UID: "022268ac-67b5-4170-a85a-465abd0c06b3")
	* Mar 10 19:18:33 functional-20210310191609-6496 kubelet[2970]: I0310 19:18:33.458717    2970 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-hq2wh" (UniqueName: "kubernetes.io/secret/097a3bfb-4152-4c57-9dbf-86b403bf6b58-coredns-token-hq2wh") pod "coredns-74ff55c5b-4cfgw" (UID: "097a3bfb-4152-4c57-9dbf-86b403bf6b58")
	* Mar 10 19:18:33 functional-20210310191609-6496 kubelet[2970]: I0310 19:18:33.458753    2970 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/022268ac-67b5-4170-a85a-465abd0c06b3-config-volume") pod "coredns-74ff55c5b-62r9g" (UID: "022268ac-67b5-4170-a85a-465abd0c06b3")
	* Mar 10 19:18:33 functional-20210310191609-6496 kubelet[2970]: I0310 19:18:33.458783    2970 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/097a3bfb-4152-4c57-9dbf-86b403bf6b58-config-volume") pod "coredns-74ff55c5b-4cfgw" (UID: "097a3bfb-4152-4c57-9dbf-86b403bf6b58")
	* Mar 10 19:18:34 functional-20210310191609-6496 kubelet[2970]: I0310 19:18:34.337631    2970 reconciler.go:196] operationExecutor.UnmountVolume started for volume "coredns-token-hq2wh" (UniqueName: "kubernetes.io/secret/097a3bfb-4152-4c57-9dbf-86b403bf6b58-coredns-token-hq2wh") pod "097a3bfb-4152-4c57-9dbf-86b403bf6b58" (UID: "097a3bfb-4152-4c57-9dbf-86b403bf6b58")
	* Mar 10 19:18:34 functional-20210310191609-6496 kubelet[2970]: I0310 19:18:34.337716    2970 reconciler.go:196] operationExecutor.UnmountVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/097a3bfb-4152-4c57-9dbf-86b403bf6b58-config-volume") pod "097a3bfb-4152-4c57-9dbf-86b403bf6b58" (UID: "097a3bfb-4152-4c57-9dbf-86b403bf6b58")
	* Mar 10 19:18:34 functional-20210310191609-6496 kubelet[2970]: W0310 19:18:34.338213    2970 empty_dir.go:520] Warning: Failed to clear quota on /var/lib/kubelet/pods/097a3bfb-4152-4c57-9dbf-86b403bf6b58/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	* Mar 10 19:18:34 functional-20210310191609-6496 kubelet[2970]: I0310 19:18:34.348405    2970 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/097a3bfb-4152-4c57-9dbf-86b403bf6b58-config-volume" (OuterVolumeSpecName: "config-volume") pod "097a3bfb-4152-4c57-9dbf-86b403bf6b58" (UID: "097a3bfb-4152-4c57-9dbf-86b403bf6b58"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	* Mar 10 19:18:34 functional-20210310191609-6496 kubelet[2970]: I0310 19:18:34.438159    2970 reconciler.go:319] Volume detached for volume "config-volume" (UniqueName: "kubernetes.io/configmap/097a3bfb-4152-4c57-9dbf-86b403bf6b58-config-volume") on node "functional-20210310191609-6496" DevicePath ""
	* Mar 10 19:18:34 functional-20210310191609-6496 kubelet[2970]: I0310 19:18:34.484492    2970 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/097a3bfb-4152-4c57-9dbf-86b403bf6b58-coredns-token-hq2wh" (OuterVolumeSpecName: "coredns-token-hq2wh") pod "097a3bfb-4152-4c57-9dbf-86b403bf6b58" (UID: "097a3bfb-4152-4c57-9dbf-86b403bf6b58"). InnerVolumeSpecName "coredns-token-hq2wh". PluginName "kubernetes.io/secret", VolumeGidValue ""
	* Mar 10 19:18:34 functional-20210310191609-6496 kubelet[2970]: I0310 19:18:34.544511    2970 reconciler.go:319] Volume detached for volume "coredns-token-hq2wh" (UniqueName: "kubernetes.io/secret/097a3bfb-4152-4c57-9dbf-86b403bf6b58-coredns-token-hq2wh") on node "functional-20210310191609-6496" DevicePath ""
	* Mar 10 19:18:38 functional-20210310191609-6496 kubelet[2970]: E0310 19:18:38.665045    2970 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "coredns-74ff55c5b-4cfgw": Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: Running hook #0:: error running hook: exit status 1, stdout: , stderr: time="2021-03-10T19:18:37Z" level=fatal msg="no such file or directory": unknown
	* Mar 10 19:18:38 functional-20210310191609-6496 kubelet[2970]: E0310 19:18:38.665155    2970 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "coredns-74ff55c5b-4cfgw_kube-system(097a3bfb-4152-4c57-9dbf-86b403bf6b58)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "coredns-74ff55c5b-4cfgw": Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: Running hook #0:: error running hook: exit status 1, stdout: , stderr: time="2021-03-10T19:18:37Z" level=fatal msg="no such file or directory": unknown
	* Mar 10 19:18:38 functional-20210310191609-6496 kubelet[2970]: E0310 19:18:38.665196    2970 kuberuntime_manager.go:755] createPodSandbox for pod "coredns-74ff55c5b-4cfgw_kube-system(097a3bfb-4152-4c57-9dbf-86b403bf6b58)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "coredns-74ff55c5b-4cfgw": Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: Running hook #0:: error running hook: exit status 1, stdout: , stderr: time="2021-03-10T19:18:37Z" level=fatal msg="no such file or directory": unknown
	* Mar 10 19:18:38 functional-20210310191609-6496 kubelet[2970]: E0310 19:18:38.665467    2970 pod_workers.go:191] Error syncing pod 097a3bfb-4152-4c57-9dbf-86b403bf6b58 ("coredns-74ff55c5b-4cfgw_kube-system(097a3bfb-4152-4c57-9dbf-86b403bf6b58)"), skipping: failed to "CreatePodSandbox" for "coredns-74ff55c5b-4cfgw_kube-system(097a3bfb-4152-4c57-9dbf-86b403bf6b58)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-74ff55c5b-4cfgw_kube-system(097a3bfb-4152-4c57-9dbf-86b403bf6b58)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"coredns-74ff55c5b-4cfgw\": Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: Running hook #0:: error running hook: exit status 1, stdout: , stderr: time=\"2021-03-10T19:18:37Z\" level=fatal msg=\"no such file or directory\": unknown"
	* Mar 10 19:18:39 functional-20210310191609-6496 kubelet[2970]: W0310 19:18:39.236058    2970 pod_container_deletor.go:79] Container "a135c3a8b1c4b35aef1afd7c933589d4b693fbcd7d97013c5201d45e2800cebe" not found in pod's containers
	* Mar 10 19:18:39 functional-20210310191609-6496 kubelet[2970]: W0310 19:18:39.238195    2970 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-74ff55c5b-62r9g through plugin: invalid network status for
	* Mar 10 19:18:39 functional-20210310191609-6496 kubelet[2970]: W0310 19:18:39.253680    2970 pod_container_deletor.go:79] Container "264af7edaaccfb9bc7f0c8e1782e0e4c641a03a70cce2c67d6369ddfb18bbe03" not found in pod's containers
	* Mar 10 19:18:40 functional-20210310191609-6496 kubelet[2970]: I0310 19:18:40.258811    2970 topology_manager.go:187] [topologymanager] Topology Admit Handler
	* Mar 10 19:18:40 functional-20210310191609-6496 kubelet[2970]: W0310 19:18:40.285892    2970 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-74ff55c5b-62r9g through plugin: invalid network status for
	* Mar 10 19:18:40 functional-20210310191609-6496 kubelet[2970]: I0310 19:18:40.480453    2970 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/b3c89307-430b-4b9e-bf19-ea94207564fe-tmp") pod "storage-provisioner" (UID: "b3c89307-430b-4b9e-bf19-ea94207564fe")
	* Mar 10 19:18:40 functional-20210310191609-6496 kubelet[2970]: I0310 19:18:40.483227    2970 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-9pl55" (UniqueName: "kubernetes.io/secret/b3c89307-430b-4b9e-bf19-ea94207564fe-storage-provisioner-token-9pl55") pod "storage-provisioner" (UID: "b3c89307-430b-4b9e-bf19-ea94207564fe")
	* Mar 10 19:18:42 functional-20210310191609-6496 kubelet[2970]: W0310 19:18:42.769771    2970 pod_container_deletor.go:79] Container "612593eab637b14655f6f4f7f15415e99aa65c606a966f5e643568cb4ee3a975" not found in pod's containers
	* Mar 10 19:18:43 functional-20210310191609-6496 kubelet[2970]: W0310 19:18:43.053969    2970 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-74ff55c5b-62r9g through plugin: invalid network status for
	* 
	* ==> storage-provisioner [42fe31721a7a] <==
	* I0310 19:18:47.879009       1 storage_provisioner.go:115] Initializing the minikube storage provisioner...
	* I0310 19:18:48.043918       1 storage_provisioner.go:140] Storage provisioner initialized, now starting service!
	* I0310 19:18:48.044042       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
	* I0310 19:18:48.239503       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	* I0310 19:18:48.240574       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ebd69c14-d580-42f3-81ac-15faea000e2e", APIVersion:"v1", ResourceVersion:"467", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20210310191609-6496_e8d847d4-5ebb-4578-8ab5-d60a689b4929 became leader
	* I0310 19:18:48.240639       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_functional-20210310191609-6496_e8d847d4-5ebb-4578-8ab5-d60a689b4929!
	* I0310 19:18:48.344177       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_functional-20210310191609-6496_e8d847d4-5ebb-4578-8ab5-d60a689b4929!
	* 
	* ==> Audit <==
	* |---------|----------------------------------------------------------|--------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| Command |                           Args                           |            Profile             |          User           | Version |          Start Time           |           End Time            |
	|---------|----------------------------------------------------------|--------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| -p      | addons-20210310190531-6496                               | addons-20210310190531-6496     | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:13:40 GMT | Wed, 10 Mar 2021 19:13:47 GMT |
	|         | addons disable ingress                                   |                                |                         |         |                               |                               |
	|         | --alsologtostderr -v=1                                   |                                |                         |         |                               |                               |
	| -p      | addons-20210310190531-6496                               | addons-20210310190531-6496     | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:14:07 GMT | Wed, 10 Mar 2021 19:14:09 GMT |
	|         | addons disable helm-tiller                               |                                |                         |         |                               |                               |
	|         | --alsologtostderr -v=1                                   |                                |                         |         |                               |                               |
	| -p      | addons-20210310190531-6496                               | addons-20210310190531-6496     | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:13:35 GMT | Wed, 10 Mar 2021 19:14:11 GMT |
	|         | addons disable gcp-auth                                  |                                |                         |         |                               |                               |
	|         | --alsologtostderr -v=1                                   |                                |                         |         |                               |                               |
	| -p      | addons-20210310190531-6496                               | addons-20210310190531-6496     | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:14:15 GMT | Wed, 10 Mar 2021 19:14:18 GMT |
	|         | addons disable metrics-server                            |                                |                         |         |                               |                               |
	|         | --alsologtostderr -v=1                                   |                                |                         |         |                               |                               |
	| -p      | addons-20210310190531-6496                               | addons-20210310190531-6496     | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:15:27 GMT | Wed, 10 Mar 2021 19:15:35 GMT |
	|         | addons disable                                           |                                |                         |         |                               |                               |
	|         | csi-hostpath-driver                                      |                                |                         |         |                               |                               |
	|         | --alsologtostderr -v=1                                   |                                |                         |         |                               |                               |
	| -p      | addons-20210310190531-6496                               | addons-20210310190531-6496     | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:15:36 GMT | Wed, 10 Mar 2021 19:15:39 GMT |
	|         | addons disable volumesnapshots                           |                                |                         |         |                               |                               |
	|         | --alsologtostderr -v=1                                   |                                |                         |         |                               |                               |
	| stop    | -p addons-20210310190531-6496                            | addons-20210310190531-6496     | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:15:39 GMT | Wed, 10 Mar 2021 19:15:55 GMT |
	| addons  | enable dashboard -p                                      | addons-20210310190531-6496     | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:15:56 GMT | Wed, 10 Mar 2021 19:15:56 GMT |
	|         | addons-20210310190531-6496                               |                                |                         |         |                               |                               |
	| addons  | disable dashboard -p                                     | addons-20210310190531-6496     | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:15:57 GMT | Wed, 10 Mar 2021 19:15:57 GMT |
	|         | addons-20210310190531-6496                               |                                |                         |         |                               |                               |
	| delete  | -p addons-20210310190531-6496                            | addons-20210310190531-6496     | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:15:57 GMT | Wed, 10 Mar 2021 19:16:09 GMT |
	| start   | -p                                                       | functional-20210310191609-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:16:10 GMT | Wed, 10 Mar 2021 19:20:04 GMT |
	|         | functional-20210310191609-6496                           |                                |                         |         |                               |                               |
	|         | --memory=4000                                            |                                |                         |         |                               |                               |
	|         | --apiserver-port=8441                                    |                                |                         |         |                               |                               |
	|         | --wait=all --driver=docker                               |                                |                         |         |                               |                               |
	| start   | -p                                                       | functional-20210310191609-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:20:04 GMT | Wed, 10 Mar 2021 19:20:37 GMT |
	|         | functional-20210310191609-6496                           |                                |                         |         |                               |                               |
	|         | --alsologtostderr -v=8                                   |                                |                         |         |                               |                               |
	| -p      | functional-20210310191609-6496                           | functional-20210310191609-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:20:38 GMT | Wed, 10 Mar 2021 19:20:41 GMT |
	|         | cache add k8s.gcr.io/pause:3.1                           |                                |                         |         |                               |                               |
	| -p      | functional-20210310191609-6496                           | functional-20210310191609-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:20:41 GMT | Wed, 10 Mar 2021 19:20:44 GMT |
	|         | cache add k8s.gcr.io/pause:3.3                           |                                |                         |         |                               |                               |
	| -p      | functional-20210310191609-6496                           | functional-20210310191609-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:20:45 GMT | Wed, 10 Mar 2021 19:20:48 GMT |
	|         | cache add                                                |                                |                         |         |                               |                               |
	|         | k8s.gcr.io/pause:latest                                  |                                |                         |         |                               |                               |
	| -p      | functional-20210310191609-6496 cache add                 | functional-20210310191609-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:20:50 GMT | Wed, 10 Mar 2021 19:20:52 GMT |
	|         | minikube-local-cache-test:functional-20210310191609-6496 |                                |                         |         |                               |                               |
	| cache   | delete k8s.gcr.io/pause:3.3                              | minikube                       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:20:52 GMT | Wed, 10 Mar 2021 19:20:52 GMT |
	| cache   | list                                                     | minikube                       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:20:53 GMT | Wed, 10 Mar 2021 19:20:53 GMT |
	| -p      | functional-20210310191609-6496                           | functional-20210310191609-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:20:53 GMT | Wed, 10 Mar 2021 19:20:55 GMT |
	|         | ssh sudo crictl images                                   |                                |                         |         |                               |                               |
	| -p      | functional-20210310191609-6496                           | functional-20210310191609-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:20:56 GMT | Wed, 10 Mar 2021 19:20:58 GMT |
	|         | ssh sudo docker rmi                                      |                                |                         |         |                               |                               |
	|         | k8s.gcr.io/pause:latest                                  |                                |                         |         |                               |                               |
	| -p      | functional-20210310191609-6496                           | functional-20210310191609-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:21:01 GMT | Wed, 10 Mar 2021 19:21:15 GMT |
	|         | cache reload                                             |                                |                         |         |                               |                               |
	| -p      | functional-20210310191609-6496                           | functional-20210310191609-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:21:15 GMT | Wed, 10 Mar 2021 19:21:17 GMT |
	|         | ssh sudo crictl inspecti                                 |                                |                         |         |                               |                               |
	|         | k8s.gcr.io/pause:latest                                  |                                |                         |         |                               |                               |
	| cache   | delete k8s.gcr.io/pause:3.1                              | minikube                       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:21:18 GMT | Wed, 10 Mar 2021 19:21:18 GMT |
	| cache   | delete k8s.gcr.io/pause:latest                           | minikube                       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:21:18 GMT | Wed, 10 Mar 2021 19:21:18 GMT |
	| -p      | functional-20210310191609-6496                           | functional-20210310191609-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:21:19 GMT | Wed, 10 Mar 2021 19:21:20 GMT |
	|         | kubectl -- --context                                     |                                |                         |         |                               |                               |
	|         | functional-20210310191609-6496                           |                                |                         |         |                               |                               |
	|         | get pods                                                 |                                |                         |         |                               |                               |
	|---------|----------------------------------------------------------|--------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/03/10 19:20:04
	* Running on machine: windows-server-1
	* Binary: Built with gc go1.16 for windows/amd64
	* Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	* I0310 19:20:04.977044    1104 out.go:239] Setting OutFile to fd 2160 ...
	* I0310 19:20:04.978074    1104 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 19:20:04.978074    1104 out.go:252] Setting ErrFile to fd 1992...
	* I0310 19:20:04.978074    1104 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 19:20:04.989549    1104 out.go:246] Setting JSON to false
	* I0310 19:20:04.991977    1104 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":29470,"bootTime":1615374534,"procs":106,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	* W0310 19:20:04.992266    1104 start.go:116] gopshost.Virtualization returned error: not implemented yet
	* I0310 19:20:04.997778    1104 out.go:129] * [functional-20210310191609-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	* I0310 19:20:05.001648    1104 out.go:129]   - MINIKUBE_LOCATION=10722
	* I0310 19:20:05.002427    1104 driver.go:323] Setting default libvirt URI to qemu:///system
	* I0310 19:20:05.489022    1104 docker.go:119] docker version: linux-20.10.2
	* I0310 19:20:05.504226    1104 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 19:20:06.332241    1104 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:51 SystemTime:2021-03-10 19:20:05.9591758 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 19:20:06.337426    1104 out.go:129] * Using the docker driver based on existing profile
	* I0310 19:20:06.338216    1104 start.go:276] selected driver: docker
	* I0310 19:20:06.338216    1104 start.go:718] validating driver "docker" against &{Name:functional-20210310191609-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:functional-20210310191609-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.97 Port:8441 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 19:20:06.338216    1104 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	* I0310 19:20:06.361847    1104 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 19:20:07.136817    1104 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:51 SystemTime:2021-03-10 19:20:06.8039711 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 19:20:09.016203    1104 start_flags.go:398] config:
	* {Name:functional-20210310191609-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:functional-20210310191609-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.97 Port:8441 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 19:20:09.020399    1104 out.go:129] * Starting control plane node functional-20210310191609-6496 in cluster functional-20210310191609-6496
	* I0310 19:20:09.535979    1104 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	* I0310 19:20:09.536115    1104 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	* I0310 19:20:09.536312    1104 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 19:20:09.536766    1104 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 19:20:09.536912    1104 cache.go:54] Caching tarball of preloaded images
	* I0310 19:20:09.537518    1104 preload.go:131] Found C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	* I0310 19:20:09.537671    1104 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on docker
	* I0310 19:20:09.538244    1104 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\functional-20210310191609-6496\config.json ...
	* I0310 19:20:09.552831    1104 cache.go:185] Successfully downloaded all kic artifacts
	* I0310 19:20:09.553827    1104 start.go:313] acquiring machines lock for functional-20210310191609-6496: {Name:mke7475635a976438d8952f2c603ea6f287e68ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:20:09.553827    1104 start.go:317] acquired machines lock for "functional-20210310191609-6496" in 0s
	* I0310 19:20:09.553827    1104 start.go:93] Skipping create...Using existing machine configuration
	* I0310 19:20:09.553827    1104 fix.go:55] fixHost starting: 
	* I0310 19:20:09.571403    1104 cli_runner.go:115] Run: docker container inspect functional-20210310191609-6496 --format=
	* I0310 19:20:10.066132    1104 fix.go:108] recreateIfNeeded on functional-20210310191609-6496: state=Running err=<nil>
	* W0310 19:20:10.066132    1104 fix.go:134] unexpected machine state, will restart: <nil>
	* I0310 19:20:10.068396    1104 out.go:129] * Updating the running docker "functional-20210310191609-6496" container ...
	* I0310 19:20:10.068396    1104 machine.go:88] provisioning docker machine ...
	* I0310 19:20:10.068396    1104 ubuntu.go:169] provisioning hostname "functional-20210310191609-6496"
	* I0310 19:20:10.078803    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	* I0310 19:20:10.606004    1104 main.go:121] libmachine: Using SSH client type: native
	* I0310 19:20:10.606885    1104 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0xd0c4a0] 0xd0c460 <nil>  [] 0s} 127.0.0.1 55009 <nil> <nil>}
	* I0310 19:20:10.606885    1104 main.go:121] libmachine: About to run SSH command:
	* sudo hostname functional-20210310191609-6496 && echo "functional-20210310191609-6496" | sudo tee /etc/hostname
	* I0310 19:20:10.904569    1104 main.go:121] libmachine: SSH cmd err, output: <nil>: functional-20210310191609-6496
	* 
	* I0310 19:20:10.913181    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	* I0310 19:20:11.407965    1104 main.go:121] libmachine: Using SSH client type: native
	* I0310 19:20:11.408232    1104 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0xd0c4a0] 0xd0c460 <nil>  [] 0s} 127.0.0.1 55009 <nil> <nil>}
	* I0310 19:20:11.408504    1104 main.go:121] libmachine: About to run SSH command:
	* 
	* 		if ! grep -xq '.*\sfunctional-20210310191609-6496' /etc/hosts; then
	* 			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	* 				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-20210310191609-6496/g' /etc/hosts;
	* 			else 
	* 				echo '127.0.1.1 functional-20210310191609-6496' | sudo tee -a /etc/hosts; 
	* 			fi
	* 		fi
	* I0310 19:20:11.642393    1104 main.go:121] libmachine: SSH cmd err, output: <nil>: 
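The SSH command above shows how the provisioner injects the machine name into the guest's /etc/hosts: replace an existing `127.0.1.1` entry if one exists, otherwise append one, and do nothing when the hostname is already present. A minimal sketch of that logic, run against a temporary file instead of the real /etc/hosts (`HOSTS` and `NAME` here are stand-ins, not minikube variables):

```shell
# Stand-ins: a scratch hosts file and the machine name from the log.
HOSTS=$(mktemp)
NAME=functional-20210310191609-6496
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

# Only touch the file if the hostname is not already listed.
if ! grep -q "\s$NAME" "$HOSTS"; then
  if grep -q '^127.0.1.1\s' "$HOSTS"; then
    # An existing 127.0.1.1 line gets rewritten in place...
    sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NAME/" "$HOSTS"
  else
    # ...otherwise a fresh entry is appended.
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
grep "$NAME" "$HOSTS"   # → 127.0.1.1 functional-20210310191609-6496
```

Because the whole block is guarded by the outer `grep`, rerunning it is idempotent, which matters here since `fixHost` re-provisions an already-running machine.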
	* I0310 19:20:11.642714    1104 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	* I0310 19:20:11.642714    1104 ubuntu.go:177] setting up certificates
	* I0310 19:20:11.642714    1104 provision.go:83] configureAuth start
	* I0310 19:20:11.652429    1104 cli_runner.go:115] Run: docker container inspect -f "" functional-20210310191609-6496
	* I0310 19:20:12.144453    1104 provision.go:137] copyHostCerts
	* I0310 19:20:12.144755    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\ca.pem -> C:\Users\jenkins\.minikube/ca.pem
	* I0310 19:20:12.145125    1104 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	* I0310 19:20:12.145696    1104 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	* I0310 19:20:12.146019    1104 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	* I0310 19:20:12.149019    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\cert.pem -> C:\Users\jenkins\.minikube/cert.pem
	* I0310 19:20:12.149263    1104 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	* I0310 19:20:12.149263    1104 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	* I0310 19:20:12.149902    1104 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	* I0310 19:20:12.152373    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\key.pem -> C:\Users\jenkins\.minikube/key.pem
	* I0310 19:20:12.152665    1104 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	* I0310 19:20:12.152665    1104 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	* I0310 19:20:12.153268    1104 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	* I0310 19:20:12.155718    1104 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.functional-20210310191609-6496 san=[192.168.49.97 127.0.0.1 localhost 127.0.0.1 minikube functional-20210310191609-6496]
	* I0310 19:20:12.587229    1104 provision.go:165] copyRemoteCerts
	* I0310 19:20:12.598124    1104 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	* I0310 19:20:12.606842    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	* I0310 19:20:13.105393    1104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55009 SSHKeyPath:C:\Users\jenkins\.minikube\machines\functional-20210310191609-6496\id_rsa Username:docker}
	* I0310 19:20:13.270212    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	* I0310 19:20:13.271252    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	* I0310 19:20:13.337574    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\machines\server.pem -> /etc/docker/server.pem
	* I0310 19:20:13.338304    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1265 bytes)
	* I0310 19:20:13.403554    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	* I0310 19:20:13.404066    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	* I0310 19:20:13.465283    1104 provision.go:86] duration metric: configureAuth took 1.822573s
	* I0310 19:20:13.465283    1104 ubuntu.go:193] setting minikube options for container-runtime
	* I0310 19:20:13.473873    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	* I0310 19:20:13.985292    1104 main.go:121] libmachine: Using SSH client type: native
	* I0310 19:20:13.985934    1104 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0xd0c4a0] 0xd0c460 <nil>  [] 0s} 127.0.0.1 55009 <nil> <nil>}
	* I0310 19:20:13.985934    1104 main.go:121] libmachine: About to run SSH command:
	* df --output=fstype / | tail -n 1
	* I0310 19:20:14.227551    1104 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	* 
	* I0310 19:20:14.227734    1104 ubuntu.go:71] root file system type: overlay
	* I0310 19:20:14.228750    1104 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	* I0310 19:20:14.236745    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	* I0310 19:20:14.750870    1104 main.go:121] libmachine: Using SSH client type: native
	* I0310 19:20:14.751198    1104 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0xd0c4a0] 0xd0c460 <nil>  [] 0s} 127.0.0.1 55009 <nil> <nil>}
	* I0310 19:20:14.751769    1104 main.go:121] libmachine: About to run SSH command:
	* sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP \$MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* " | sudo tee /lib/systemd/system/docker.service.new
	* I0310 19:20:15.026188    1104 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP $MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* 
	* I0310 19:20:15.034198    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	* I0310 19:20:15.545540    1104 main.go:121] libmachine: Using SSH client type: native
	* I0310 19:20:15.546112    1104 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0xd0c4a0] 0xd0c460 <nil>  [] 0s} 127.0.0.1 55009 <nil> <nil>}
	* I0310 19:20:15.546263    1104 main.go:121] libmachine: About to run SSH command:
	* sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	* I0310 19:20:15.794520    1104 main.go:121] libmachine: SSH cmd err, output: <nil>: 
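The `diff -u ... || { mv ...; systemctl ...; }` command above is a change-detection idiom: the freshly rendered unit is written to `docker.service.new`, and only if it differs from the installed unit does minikube swap it in and restart Docker. A small sketch of the same pattern with temp files standing in for `/lib/systemd/system/docker.service` (paths and contents here are illustrative, not taken from the run):

```shell
# OLD stands in for the installed unit, NEW for the rendered candidate.
OLD=$(mktemp); NEW=$(mktemp)
echo "ExecStart=/usr/bin/dockerd" > "$OLD"
echo "ExecStart=/usr/bin/dockerd --tlsverify" > "$NEW"

# diff exits nonzero when the files differ, so the branch runs only
# on a real change; in the log this is where daemon-reload, enable,
# and restart happen.
if ! diff -u "$OLD" "$NEW" >/dev/null; then
  mv "$NEW" "$OLD"
  echo "unit updated"
fi
cat "$OLD"   # → ExecStart=/usr/bin/dockerd --tlsverify
```

Skipping the restart when nothing changed is what lets `fixHost` complete in seconds here instead of bouncing the Docker daemon on every start.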
	* I0310 19:20:15.794765    1104 machine.go:91] provisioned docker machine in 5.7263811s
	* I0310 19:20:15.794765    1104 start.go:267] post-start starting for "functional-20210310191609-6496" (driver="docker")
	* I0310 19:20:15.794765    1104 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	* I0310 19:20:15.803707    1104 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	* I0310 19:20:15.812165    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	* I0310 19:20:16.312010    1104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55009 SSHKeyPath:C:\Users\jenkins\.minikube\machines\functional-20210310191609-6496\id_rsa Username:docker}
	* I0310 19:20:16.478762    1104 ssh_runner.go:149] Run: cat /etc/os-release
	* I0310 19:20:16.495349    1104 command_runner.go:124] > NAME="Ubuntu"
	* I0310 19:20:16.495349    1104 command_runner.go:124] > VERSION="20.04.1 LTS (Focal Fossa)"
	* I0310 19:20:16.495490    1104 command_runner.go:124] > ID=ubuntu
	* I0310 19:20:16.495490    1104 command_runner.go:124] > ID_LIKE=debian
	* I0310 19:20:16.495490    1104 command_runner.go:124] > PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* I0310 19:20:16.495490    1104 command_runner.go:124] > VERSION_ID="20.04"
	* I0310 19:20:16.495490    1104 command_runner.go:124] > HOME_URL="https://www.ubuntu.com/"
	* I0310 19:20:16.495490    1104 command_runner.go:124] > SUPPORT_URL="https://help.ubuntu.com/"
	* I0310 19:20:16.495490    1104 command_runner.go:124] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	* I0310 19:20:16.495490    1104 command_runner.go:124] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	* I0310 19:20:16.495490    1104 command_runner.go:124] > VERSION_CODENAME=focal
	* I0310 19:20:16.495763    1104 command_runner.go:124] > UBUNTU_CODENAME=focal
	* I0310 19:20:16.496486    1104 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	* I0310 19:20:16.496635    1104 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	* I0310 19:20:16.496635    1104 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	* I0310 19:20:16.496635    1104 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	* I0310 19:20:16.496772    1104 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	* I0310 19:20:16.497567    1104 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	* I0310 19:20:16.500995    1104 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	* I0310 19:20:16.501131    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> /etc/test/nested/copy/2512/hosts
	* I0310 19:20:16.502546    1104 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	* I0310 19:20:16.502546    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> /etc/test/nested/copy/4452/hosts
	* I0310 19:20:16.504006    1104 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\6496\hosts -> hosts in /etc/test/nested/copy/6496
	* I0310 19:20:16.504006    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\6496\hosts -> /etc/test/nested/copy/6496/hosts
	* I0310 19:20:16.514459    1104 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452 /etc/test/nested/copy/6496
	* I0310 19:20:16.548987    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	* I0310 19:20:16.621046    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	* I0310 19:20:16.691847    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\6496\hosts --> /etc/test/nested/copy/6496/hosts (40 bytes)
	* I0310 19:20:16.752571    1104 start.go:270] post-start completed in 957.5362ms
	* I0310 19:20:16.766429    1104 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	* I0310 19:20:16.772428    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	* I0310 19:20:17.272746    1104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55009 SSHKeyPath:C:\Users\jenkins\.minikube\machines\functional-20210310191609-6496\id_rsa Username:docker}
	* I0310 19:20:17.426806    1104 command_runner.go:124] > 21%
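The `21%` line is the output of the disk probe `df -h /var | awk 'NR==2{print $5}'`: awk takes the second line of `df` output (the data row) and prints field 5, the `Use%` column. A sketch of the same parsing against a canned `df` table rather than a live mount (the table contents are made up for illustration):

```shell
# Canned df output standing in for the live `df -h /var` result.
DF_OUT='Filesystem      Size  Used Avail Use% Mounted on
overlay          59G   12G   44G  21% /var'

# NR==2 skips the header row; $5 is the Use% column.
USAGE=$(printf '%s\n' "$DF_OUT" | awk 'NR==2{print $5}')
echo "$USAGE"   # → 21%
```

Note this one-liner assumes a single data row; a filesystem with a name long enough to wrap `df` output onto two lines would break the `NR==2` assumption.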
	* I0310 19:20:17.426806    1104 fix.go:57] fixHost completed within 7.8729962s
	* I0310 19:20:17.426806    1104 start.go:80] releasing machines lock for "functional-20210310191609-6496", held for 7.8729962s
	* I0310 19:20:17.439558    1104 cli_runner.go:115] Run: docker container inspect -f "" functional-20210310191609-6496
	* I0310 19:20:17.952219    1104 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	* I0310 19:20:17.960458    1104 ssh_runner.go:149] Run: systemctl --version
	* I0310 19:20:17.964152    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	* I0310 19:20:17.967157    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	* I0310 19:20:18.473693    1104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55009 SSHKeyPath:C:\Users\jenkins\.minikube\machines\functional-20210310191609-6496\id_rsa Username:docker}
	* I0310 19:20:18.489987    1104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55009 SSHKeyPath:C:\Users\jenkins\.minikube\machines\functional-20210310191609-6496\id_rsa Username:docker}
	* I0310 19:20:18.663254    1104 command_runner.go:124] > systemd 245 (245.4-4ubuntu3.4)
	* I0310 19:20:18.663397    1104 command_runner.go:124] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	* I0310 19:20:18.675866    1104 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	* I0310 19:20:18.770451    1104 command_runner.go:124] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	* I0310 19:20:18.770451    1104 command_runner.go:124] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	* I0310 19:20:18.770451    1104 command_runner.go:124] > <H1>302 Moved</H1>
	* I0310 19:20:18.770451    1104 command_runner.go:124] > The document has moved
	* I0310 19:20:18.770451    1104 command_runner.go:124] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	* I0310 19:20:18.770451    1104 command_runner.go:124] > </BODY></HTML>
	* I0310 19:20:18.784509    1104 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	* I0310 19:20:18.830427    1104 command_runner.go:124] > # /lib/systemd/system/docker.service
	* I0310 19:20:18.831297    1104 command_runner.go:124] > [Unit]
	* I0310 19:20:18.831297    1104 command_runner.go:124] > Description=Docker Application Container Engine
	* I0310 19:20:18.831570    1104 command_runner.go:124] > Documentation=https://docs.docker.com
	* I0310 19:20:18.831570    1104 command_runner.go:124] > BindsTo=containerd.service
	* I0310 19:20:18.831570    1104 command_runner.go:124] > After=network-online.target firewalld.service containerd.service
	* I0310 19:20:18.831570    1104 command_runner.go:124] > Wants=network-online.target
	* I0310 19:20:18.831570    1104 command_runner.go:124] > Requires=docker.socket
	* I0310 19:20:18.831570    1104 command_runner.go:124] > StartLimitBurst=3
	* I0310 19:20:18.831898    1104 command_runner.go:124] > StartLimitIntervalSec=60
	* I0310 19:20:18.831898    1104 command_runner.go:124] > [Service]
	* I0310 19:20:18.832115    1104 command_runner.go:124] > Type=notify
	* I0310 19:20:18.832357    1104 command_runner.go:124] > Restart=on-failure
	* I0310 19:20:18.832357    1104 command_runner.go:124] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* I0310 19:20:18.832940    1104 command_runner.go:124] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* I0310 19:20:18.833215    1104 command_runner.go:124] > # here is to clear out that command inherited from the base configuration. Without this,
	* I0310 19:20:18.833215    1104 command_runner.go:124] > # the command from the base configuration and the command specified here are treated as
	* I0310 19:20:18.833215    1104 command_runner.go:124] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* I0310 19:20:18.833215    1104 command_runner.go:124] > # will catch this invalid input and refuse to start the service with an error like:
	* I0310 19:20:18.833525    1104 command_runner.go:124] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* I0310 19:20:18.833525    1104 command_runner.go:124] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* I0310 19:20:18.833525    1104 command_runner.go:124] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* I0310 19:20:18.833525    1104 command_runner.go:124] > ExecStart=
	* I0310 19:20:18.834306    1104 command_runner.go:124] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* I0310 19:20:18.834451    1104 command_runner.go:124] > ExecReload=/bin/kill -s HUP $MAINPID
	* I0310 19:20:18.834451    1104 command_runner.go:124] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	* I0310 19:20:18.834451    1104 command_runner.go:124] > # in the kernel. We recommend using cgroups to do container-local accounting.
	* I0310 19:20:18.834451    1104 command_runner.go:124] > LimitNOFILE=infinity
	* I0310 19:20:18.834451    1104 command_runner.go:124] > LimitNPROC=infinity
	* I0310 19:20:18.834451    1104 command_runner.go:124] > LimitCORE=infinity
	* I0310 19:20:18.834853    1104 command_runner.go:124] > # Uncomment TasksMax if your systemd version supports it.
	* I0310 19:20:18.835222    1104 command_runner.go:124] > # Only systemd 226 and above support this version.
	* I0310 19:20:18.835987    1104 command_runner.go:124] > TasksMax=infinity
	* I0310 19:20:18.836349    1104 command_runner.go:124] > TimeoutStartSec=0
	* I0310 19:20:18.836517    1104 command_runner.go:124] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	* I0310 19:20:18.836517    1104 command_runner.go:124] > Delegate=yes
	* I0310 19:20:18.836816    1104 command_runner.go:124] > # kill only the docker process, not all processes in the cgroup
	* I0310 19:20:18.837138    1104 command_runner.go:124] > KillMode=process
	* I0310 19:20:18.837487    1104 command_runner.go:124] > [Install]
	* I0310 19:20:18.837895    1104 command_runner.go:124] > WantedBy=multi-user.target
	* I0310 19:20:18.838275    1104 cruntime.go:206] skipping containerd shutdown because we are bound to it
	* I0310 19:20:18.848596    1104 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	* I0310 19:20:18.913269    1104 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	* image-endpoint: unix:///var/run/dockershim.sock
	* " | sudo tee /etc/crictl.yaml"
	* I0310 19:20:18.970371    1104 command_runner.go:124] > runtime-endpoint: unix:///var/run/dockershim.sock
	* I0310 19:20:18.970371    1104 command_runner.go:124] > image-endpoint: unix:///var/run/dockershim.sock
	* I0310 19:20:18.980199    1104 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	* I0310 19:20:19.016717    1104 command_runner.go:124] > # /lib/systemd/system/docker.service
	* I0310 19:20:19.016717    1104 command_runner.go:124] > [Unit]
	* I0310 19:20:19.016717    1104 command_runner.go:124] > Description=Docker Application Container Engine
	* I0310 19:20:19.016717    1104 command_runner.go:124] > Documentation=https://docs.docker.com
	* I0310 19:20:19.016717    1104 command_runner.go:124] > BindsTo=containerd.service
	* I0310 19:20:19.016717    1104 command_runner.go:124] > After=network-online.target firewalld.service containerd.service
	* I0310 19:20:19.016717    1104 command_runner.go:124] > Wants=network-online.target
	* I0310 19:20:19.016717    1104 command_runner.go:124] > Requires=docker.socket
	* I0310 19:20:19.016717    1104 command_runner.go:124] > StartLimitBurst=3
	* I0310 19:20:19.016717    1104 command_runner.go:124] > StartLimitIntervalSec=60
	* I0310 19:20:19.016717    1104 command_runner.go:124] > [Service]
	* I0310 19:20:19.016717    1104 command_runner.go:124] > Type=notify
	* I0310 19:20:19.016717    1104 command_runner.go:124] > Restart=on-failure
	* I0310 19:20:19.016966    1104 command_runner.go:124] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* I0310 19:20:19.016966    1104 command_runner.go:124] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* I0310 19:20:19.017145    1104 command_runner.go:124] > # here is to clear out that command inherited from the base configuration. Without this,
	* I0310 19:20:19.017145    1104 command_runner.go:124] > # the command from the base configuration and the command specified here are treated as
	* I0310 19:20:19.017145    1104 command_runner.go:124] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* I0310 19:20:19.017145    1104 command_runner.go:124] > # will catch this invalid input and refuse to start the service with an error like:
	* I0310 19:20:19.017145    1104 command_runner.go:124] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* I0310 19:20:19.017145    1104 command_runner.go:124] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* I0310 19:20:19.017362    1104 command_runner.go:124] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* I0310 19:20:19.017362    1104 command_runner.go:124] > ExecStart=
	* I0310 19:20:19.017362    1104 command_runner.go:124] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* I0310 19:20:19.017570    1104 command_runner.go:124] > ExecReload=/bin/kill -s HUP $MAINPID
	* I0310 19:20:19.018128    1104 command_runner.go:124] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	* I0310 19:20:19.018128    1104 command_runner.go:124] > # in the kernel. We recommend using cgroups to do container-local accounting.
	* I0310 19:20:19.018128    1104 command_runner.go:124] > LimitNOFILE=infinity
	* I0310 19:20:19.018128    1104 command_runner.go:124] > LimitNPROC=infinity
	* I0310 19:20:19.018128    1104 command_runner.go:124] > LimitCORE=infinity
	* I0310 19:20:19.018128    1104 command_runner.go:124] > # Uncomment TasksMax if your systemd version supports it.
	* I0310 19:20:19.018393    1104 command_runner.go:124] > # Only systemd 226 and above support this version.
	* I0310 19:20:19.018393    1104 command_runner.go:124] > TasksMax=infinity
	* I0310 19:20:19.018574    1104 command_runner.go:124] > TimeoutStartSec=0
	* I0310 19:20:19.018725    1104 command_runner.go:124] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	* I0310 19:20:19.018725    1104 command_runner.go:124] > Delegate=yes
	* I0310 19:20:19.018877    1104 command_runner.go:124] > # kill only the docker process, not all processes in the cgroup
	* I0310 19:20:19.018877    1104 command_runner.go:124] > KillMode=process
	* I0310 19:20:19.019122    1104 command_runner.go:124] > [Install]
	* I0310 19:20:19.019369    1104 command_runner.go:124] > WantedBy=multi-user.target
	* I0310 19:20:19.037425    1104 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	* I0310 19:20:19.316676    1104 ssh_runner.go:149] Run: sudo systemctl start docker
	* I0310 19:20:19.361140    1104 ssh_runner.go:149] Run: docker version --format 
	* I0310 19:20:19.545676    1104 command_runner.go:124] > 20.10.3
	* I0310 19:20:19.565018    1104 out.go:150] * Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	* I0310 19:20:19.573815    1104 cli_runner.go:115] Run: docker exec -t functional-20210310191609-6496 dig +short host.docker.internal
	* I0310 19:20:20.373054    1104 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	* I0310 19:20:20.400710    1104 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	* I0310 19:20:20.421391    1104 command_runner.go:124] > 192.168.65.2	host.minikube.internal
	* I0310 19:20:20.432844    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20210310191609-6496
	* I0310 19:20:20.928886    1104 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 19:20:20.929202    1104 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 19:20:20.939837    1104 ssh_runner.go:149] Run: docker images --format :
	* I0310 19:20:21.055004    1104 command_runner.go:124] > k8s.gcr.io/kube-proxy:v1.20.2
	* I0310 19:20:21.056050    1104 command_runner.go:124] > k8s.gcr.io/kube-controller-manager:v1.20.2
	* I0310 19:20:21.056050    1104 command_runner.go:124] > k8s.gcr.io/kube-apiserver:v1.20.2
	* I0310 19:20:21.056050    1104 command_runner.go:124] > k8s.gcr.io/kube-scheduler:v1.20.2
	* I0310 19:20:21.056050    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210105233232-2512
	* I0310 19:20:21.056050    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210106002159-6856
	* I0310 19:20:21.056050    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210106011107-6492
	* I0310 19:20:21.056050    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210106215525-1984
	* I0310 19:20:21.056050    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210107002220-9088
	* I0310 19:20:21.056050    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210107190945-8748
	* I0310 19:20:21.056050    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210112045103-7160
	* I0310 19:20:21.056350    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210114204234-6692
	* I0310 19:20:21.056350    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210115023213-8464
	* I0310 19:20:21.056350    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210115191024-3516
	* I0310 19:20:21.056350    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210119220838-6552
	* I0310 19:20:21.056350    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210120022529-1140
	* I0310 19:20:21.056350    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210120175851-7432
	* I0310 19:20:21.056350    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210120214442-10992
	* I0310 19:20:21.056594    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210120231122-7024
	* I0310 19:20:21.056594    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210123004019-5372
	* I0310 19:20:21.056594    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210126212539-5172
	* I0310 19:20:21.056594    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210128021318-232
	* I0310 19:20:21.056594    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210212145109-352
	* I0310 19:20:21.056594    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210213143925-7440
	* I0310 19:20:21.056594    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210219145454-9520
	* I0310 19:20:21.056594    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210219220622-3920
	* I0310 19:20:21.056594    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210220004129-7452
	* I0310 19:20:21.056594    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210224014800-800
	* I0310 19:20:21.056594    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210225231842-5736
	* I0310 19:20:21.056594    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210301195830-5700
	* I0310 19:20:21.056594    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210303214129-4588
	* I0310 19:20:21.056594    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210304002630-1156
	* I0310 19:20:21.056832    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210304184021-4052
	* I0310 19:20:21.056832    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210306072141-12056
	* I0310 19:20:21.056832    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210308233820-5396
	* I0310 19:20:21.056957    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210309234032-4944
	* I0310 19:20:21.056957    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210310083645-5040
	* I0310 19:20:21.056957    1104 command_runner.go:124] > kubernetesui/dashboard:v2.1.0
	* I0310 19:20:21.056957    1104 command_runner.go:124] > gcr.io/k8s-minikube/storage-provisioner:v4
	* I0310 19:20:21.056957    1104 command_runner.go:124] > k8s.gcr.io/etcd:3.4.13-0
	* I0310 19:20:21.056957    1104 command_runner.go:124] > k8s.gcr.io/coredns:1.7.0
	* I0310 19:20:21.056957    1104 command_runner.go:124] > kubernetesui/metrics-scraper:v1.0.4
	* I0310 19:20:21.056957    1104 command_runner.go:124] > k8s.gcr.io/pause:3.2
	* I0310 19:20:21.070861    1104 docker.go:423] Got preloaded images: -- stdout --
	* k8s.gcr.io/kube-proxy:v1.20.2
	* k8s.gcr.io/kube-controller-manager:v1.20.2
	* k8s.gcr.io/kube-apiserver:v1.20.2
	* k8s.gcr.io/kube-scheduler:v1.20.2
	* minikube-local-cache-test:functional-20210105233232-2512
	* minikube-local-cache-test:functional-20210106002159-6856
	* minikube-local-cache-test:functional-20210106011107-6492
	* minikube-local-cache-test:functional-20210106215525-1984
	* minikube-local-cache-test:functional-20210107002220-9088
	* minikube-local-cache-test:functional-20210107190945-8748
	* minikube-local-cache-test:functional-20210112045103-7160
	* minikube-local-cache-test:functional-20210114204234-6692
	* minikube-local-cache-test:functional-20210115023213-8464
	* minikube-local-cache-test:functional-20210115191024-3516
	* minikube-local-cache-test:functional-20210119220838-6552
	* minikube-local-cache-test:functional-20210120022529-1140
	* minikube-local-cache-test:functional-20210120175851-7432
	* minikube-local-cache-test:functional-20210120214442-10992
	* minikube-local-cache-test:functional-20210120231122-7024
	* minikube-local-cache-test:functional-20210123004019-5372
	* minikube-local-cache-test:functional-20210126212539-5172
	* minikube-local-cache-test:functional-20210128021318-232
	* minikube-local-cache-test:functional-20210212145109-352
	* minikube-local-cache-test:functional-20210213143925-7440
	* minikube-local-cache-test:functional-20210219145454-9520
	* minikube-local-cache-test:functional-20210219220622-3920
	* minikube-local-cache-test:functional-20210220004129-7452
	* minikube-local-cache-test:functional-20210224014800-800
	* minikube-local-cache-test:functional-20210225231842-5736
	* minikube-local-cache-test:functional-20210301195830-5700
	* minikube-local-cache-test:functional-20210303214129-4588
	* minikube-local-cache-test:functional-20210304002630-1156
	* minikube-local-cache-test:functional-20210304184021-4052
	* minikube-local-cache-test:functional-20210306072141-12056
	* minikube-local-cache-test:functional-20210308233820-5396
	* minikube-local-cache-test:functional-20210309234032-4944
	* minikube-local-cache-test:functional-20210310083645-5040
	* kubernetesui/dashboard:v2.1.0
	* gcr.io/k8s-minikube/storage-provisioner:v4
	* k8s.gcr.io/etcd:3.4.13-0
	* k8s.gcr.io/coredns:1.7.0
	* kubernetesui/metrics-scraper:v1.0.4
	* k8s.gcr.io/pause:3.2
	* 
	* -- /stdout --
	* I0310 19:20:21.070861    1104 docker.go:360] Images already preloaded, skipping extraction
	* I0310 19:20:21.088262    1104 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	* I0310 19:20:21.248842    1104 command_runner.go:124] > k8s.gcr.io/kube-proxy:v1.20.2
	* I0310 19:20:21.248842    1104 command_runner.go:124] > k8s.gcr.io/kube-controller-manager:v1.20.2
	* I0310 19:20:21.248842    1104 command_runner.go:124] > k8s.gcr.io/kube-apiserver:v1.20.2
	* I0310 19:20:21.248842    1104 command_runner.go:124] > k8s.gcr.io/kube-scheduler:v1.20.2
	* I0310 19:20:21.248842    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210105233232-2512
	* I0310 19:20:21.248842    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210106002159-6856
	* I0310 19:20:21.248842    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210106011107-6492
	* I0310 19:20:21.248842    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210106215525-1984
	* I0310 19:20:21.248842    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210107002220-9088
	* I0310 19:20:21.248842    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210107190945-8748
	* I0310 19:20:21.248842    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210112045103-7160
	* I0310 19:20:21.248842    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210114204234-6692
	* I0310 19:20:21.248842    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210115023213-8464
	* I0310 19:20:21.248842    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210115191024-3516
	* I0310 19:20:21.248842    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210119220838-6552
	* I0310 19:20:21.248842    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210120022529-1140
	* I0310 19:20:21.248842    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210120175851-7432
	* I0310 19:20:21.248842    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210120214442-10992
	* I0310 19:20:21.248842    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210120231122-7024
	* I0310 19:20:21.248842    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210123004019-5372
	* I0310 19:20:21.249206    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210126212539-5172
	* I0310 19:20:21.249206    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210128021318-232
	* I0310 19:20:21.249206    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210212145109-352
	* I0310 19:20:21.249206    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210213143925-7440
	* I0310 19:20:21.249206    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210219145454-9520
	* I0310 19:20:21.249206    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210219220622-3920
	* I0310 19:20:21.249206    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210220004129-7452
	* I0310 19:20:21.249206    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210224014800-800
	* I0310 19:20:21.249206    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210225231842-5736
	* I0310 19:20:21.249206    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210301195830-5700
	* I0310 19:20:21.249206    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210303214129-4588
	* I0310 19:20:21.249206    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210304002630-1156
	* I0310 19:20:21.249206    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210304184021-4052
	* I0310 19:20:21.249537    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210306072141-12056
	* I0310 19:20:21.249537    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210308233820-5396
	* I0310 19:20:21.249537    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210309234032-4944
	* I0310 19:20:21.249537    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210310083645-5040
	* I0310 19:20:21.249537    1104 command_runner.go:124] > kubernetesui/dashboard:v2.1.0
	* I0310 19:20:21.249537    1104 command_runner.go:124] > gcr.io/k8s-minikube/storage-provisioner:v4
	* I0310 19:20:21.249537    1104 command_runner.go:124] > k8s.gcr.io/etcd:3.4.13-0
	* I0310 19:20:21.249537    1104 command_runner.go:124] > k8s.gcr.io/coredns:1.7.0
	* I0310 19:20:21.249537    1104 command_runner.go:124] > kubernetesui/metrics-scraper:v1.0.4
	* I0310 19:20:21.249537    1104 command_runner.go:124] > k8s.gcr.io/pause:3.2
	* I0310 19:20:21.249537    1104 docker.go:423] Got preloaded images: -- stdout --
	* k8s.gcr.io/kube-proxy:v1.20.2
	* k8s.gcr.io/kube-controller-manager:v1.20.2
	* k8s.gcr.io/kube-apiserver:v1.20.2
	* k8s.gcr.io/kube-scheduler:v1.20.2
	* minikube-local-cache-test:functional-20210105233232-2512
	* minikube-local-cache-test:functional-20210106002159-6856
	* minikube-local-cache-test:functional-20210106011107-6492
	* minikube-local-cache-test:functional-20210106215525-1984
	* minikube-local-cache-test:functional-20210107002220-9088
	* minikube-local-cache-test:functional-20210107190945-8748
	* minikube-local-cache-test:functional-20210112045103-7160
	* minikube-local-cache-test:functional-20210114204234-6692
	* minikube-local-cache-test:functional-20210115023213-8464
	* minikube-local-cache-test:functional-20210115191024-3516
	* minikube-local-cache-test:functional-20210119220838-6552
	* minikube-local-cache-test:functional-20210120022529-1140
	* minikube-local-cache-test:functional-20210120175851-7432
	* minikube-local-cache-test:functional-20210120214442-10992
	* minikube-local-cache-test:functional-20210120231122-7024
	* minikube-local-cache-test:functional-20210123004019-5372
	* minikube-local-cache-test:functional-20210126212539-5172
	* minikube-local-cache-test:functional-20210128021318-232
	* minikube-local-cache-test:functional-20210212145109-352
	* minikube-local-cache-test:functional-20210213143925-7440
	* minikube-local-cache-test:functional-20210219145454-9520
	* minikube-local-cache-test:functional-20210219220622-3920
	* minikube-local-cache-test:functional-20210220004129-7452
	* minikube-local-cache-test:functional-20210224014800-800
	* minikube-local-cache-test:functional-20210225231842-5736
	* minikube-local-cache-test:functional-20210301195830-5700
	* minikube-local-cache-test:functional-20210303214129-4588
	* minikube-local-cache-test:functional-20210304002630-1156
	* minikube-local-cache-test:functional-20210304184021-4052
	* minikube-local-cache-test:functional-20210306072141-12056
	* minikube-local-cache-test:functional-20210308233820-5396
	* minikube-local-cache-test:functional-20210309234032-4944
	* minikube-local-cache-test:functional-20210310083645-5040
	* kubernetesui/dashboard:v2.1.0
	* gcr.io/k8s-minikube/storage-provisioner:v4
	* k8s.gcr.io/etcd:3.4.13-0
	* k8s.gcr.io/coredns:1.7.0
	* kubernetesui/metrics-scraper:v1.0.4
	* k8s.gcr.io/pause:3.2
	* 
	* -- /stdout --
	* I0310 19:20:21.249940    1104 cache_images.go:73] Images are preloaded, skipping loading
	* I0310 19:20:21.260278    1104 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	* I0310 19:20:21.608817    1104 command_runner.go:124] > cgroupfs
	* I0310 19:20:21.624512    1104 cni.go:74] Creating CNI manager for ""
	* I0310 19:20:21.624998    1104 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	* I0310 19:20:21.624998    1104 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	* I0310 19:20:21.624998    1104 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.97 APIServerPort:8441 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-20210310191609-6496 NodeName:functional-20210310191609-6496 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	* I0310 19:20:21.625239    1104 kubeadm.go:154] kubeadm config:
	* apiVersion: kubeadm.k8s.io/v1beta2
	* kind: InitConfiguration
	* localAPIEndpoint:
	*   advertiseAddress: 192.168.49.97
	*   bindPort: 8441
	* bootstrapTokens:
	*   - groups:
	*       - system:bootstrappers:kubeadm:default-node-token
	*     ttl: 24h0m0s
	*     usages:
	*       - signing
	*       - authentication
	* nodeRegistration:
	*   criSocket: /var/run/dockershim.sock
	*   name: "functional-20210310191609-6496"
	*   kubeletExtraArgs:
	*     node-ip: 192.168.49.97
	*   taints: []
	* ---
	* apiVersion: kubeadm.k8s.io/v1beta2
	* kind: ClusterConfiguration
	* apiServer:
	*   certSANs: ["127.0.0.1", "localhost", "192.168.49.97"]
	*   extraArgs:
	*     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	* controllerManager:
	*   extraArgs:
	*     allocate-node-cidrs: "true"
	*     leader-elect: "false"
	* scheduler:
	*   extraArgs:
	*     leader-elect: "false"
	* certificatesDir: /var/lib/minikube/certs
	* clusterName: mk
	* controlPlaneEndpoint: control-plane.minikube.internal:8441
	* dns:
	*   type: CoreDNS
	* etcd:
	*   local:
	*     dataDir: /var/lib/minikube/etcd
	*     extraArgs:
	*       proxy-refresh-interval: "70000"
	* kubernetesVersion: v1.20.2
	* networking:
	*   dnsDomain: cluster.local
	*   podSubnet: "10.244.0.0/16"
	*   serviceSubnet: 10.96.0.0/12
	* ---
	* apiVersion: kubelet.config.k8s.io/v1beta1
	* kind: KubeletConfiguration
	* authentication:
	*   x509:
	*     clientCAFile: /var/lib/minikube/certs/ca.crt
	* cgroupDriver: cgroupfs
	* clusterDomain: "cluster.local"
	* # disable disk resource management by default
	* imageGCHighThresholdPercent: 100
	* evictionHard:
	*   nodefs.available: "0%"
	*   nodefs.inodesFree: "0%"
	*   imagefs.available: "0%"
	* failSwapOn: false
	* staticPodPath: /etc/kubernetes/manifests
	* ---
	* apiVersion: kubeproxy.config.k8s.io/v1alpha1
	* kind: KubeProxyConfiguration
	* clusterCIDR: "10.244.0.0/16"
	* metricsBindAddress: 0.0.0.0:10249
	* 
	* I0310 19:20:21.625239    1104 kubeadm.go:919] kubelet [Unit]
	* Wants=docker.socket
	* 
	* [Service]
	* ExecStart=
	* ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=functional-20210310191609-6496 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.97
	* 
	* [Install]
	*  config:
	* {KubernetesVersion:v1.20.2 ClusterName:functional-20210310191609-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	* I0310 19:20:21.639660    1104 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
	* I0310 19:20:21.677653    1104 command_runner.go:124] > kubeadm
	* I0310 19:20:21.678844    1104 command_runner.go:124] > kubectl
	* I0310 19:20:21.678844    1104 command_runner.go:124] > kubelet
	* I0310 19:20:21.678844    1104 binaries.go:44] Found k8s binaries, skipping transfer
	* I0310 19:20:21.691571    1104 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	* I0310 19:20:21.721058    1104 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (357 bytes)
	* I0310 19:20:21.780432    1104 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	* I0310 19:20:21.828691    1104 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1865 bytes)
	* I0310 19:20:21.882859    1104 ssh_runner.go:149] Run: grep 192.168.49.97	control-plane.minikube.internal$ /etc/hosts
	* I0310 19:20:21.903793    1104 command_runner.go:124] > 192.168.49.97	control-plane.minikube.internal
	* I0310 19:20:21.904120    1104 certs.go:52] Setting up C:\Users\jenkins\.minikube\profiles\functional-20210310191609-6496 for IP: 192.168.49.97
	* I0310 19:20:21.904520    1104 certs.go:171] skipping minikubeCA CA generation: C:\Users\jenkins\.minikube\ca.key
	* I0310 19:20:21.904888    1104 certs.go:171] skipping proxyClientCA CA generation: C:\Users\jenkins\.minikube\proxy-client-ca.key
	* I0310 19:20:21.905652    1104 certs.go:275] skipping minikube-user signed cert generation: C:\Users\jenkins\.minikube\profiles\functional-20210310191609-6496\client.key
	* I0310 19:20:21.906022    1104 certs.go:275] skipping minikube signed cert generation: C:\Users\jenkins\.minikube\profiles\functional-20210310191609-6496\apiserver.key.b6188fac
	* I0310 19:20:21.906022    1104 certs.go:275] skipping aggregator signed cert generation: C:\Users\jenkins\.minikube\profiles\functional-20210310191609-6496\proxy-client.key
	* I0310 19:20:21.906022    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\profiles\functional-20210310191609-6496\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	* I0310 19:20:21.906407    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\profiles\functional-20210310191609-6496\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	* I0310 19:20:21.906407    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\profiles\functional-20210310191609-6496\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	* I0310 19:20:21.906717    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\profiles\functional-20210310191609-6496\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	* I0310 19:20:21.906717    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	* I0310 19:20:21.907025    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	* I0310 19:20:21.907025    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	* I0310 19:20:21.907025    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	* I0310 19:20:21.907711    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992.pem (1338 bytes)
	* W0310 19:20:21.908169    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.908169    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140.pem (1338 bytes)
	* W0310 19:20:21.908715    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.908715    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156.pem (1338 bytes)
	* W0310 19:20:21.908715    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.909286    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056.pem (1338 bytes)
	* W0310 19:20:21.909286    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.909286    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476.pem (1338 bytes)
	* W0310 19:20:21.909286    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.909286    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728.pem (1338 bytes)
	* W0310 19:20:21.910136    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.910593    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984.pem (1338 bytes)
	* W0310 19:20:21.910593    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.910952    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232.pem (1338 bytes)
	* W0310 19:20:21.911313    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.911313    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512.pem (1338 bytes)
	* W0310 19:20:21.911653    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.911653    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056.pem (1338 bytes)
	* W0310 19:20:21.912058    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.912393    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516.pem (1338 bytes)
	* W0310 19:20:21.912694    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.912694    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352.pem (1338 bytes)
	* W0310 19:20:21.913102    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.913102    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920.pem (1338 bytes)
	* W0310 19:20:21.913796    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.913796    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052.pem (1338 bytes)
	* W0310 19:20:21.914144    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.914144    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452.pem (1338 bytes)
	* W0310 19:20:21.914460    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.914765    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588.pem (1338 bytes)
	* W0310 19:20:21.915047    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.915192    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944.pem (1338 bytes)
	* W0310 19:20:21.915394    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.915394    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040.pem (1338 bytes)
	* W0310 19:20:21.915814    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.915814    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172.pem (1338 bytes)
	* W0310 19:20:21.916232    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.916232    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372.pem (1338 bytes)
	* W0310 19:20:21.916455    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.916699    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396.pem (1338 bytes)
	* W0310 19:20:21.917122    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.917246    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700.pem (1338 bytes)
	* W0310 19:20:21.917769    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.917942    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736.pem (1338 bytes)
	* W0310 19:20:21.918361    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.918361    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368.pem (1338 bytes)
	* W0310 19:20:21.918592    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.918794    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492.pem (1338 bytes)
	* W0310 19:20:21.919102    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.919102    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496.pem (1338 bytes)
	* W0310 19:20:21.919428    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.919428    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552.pem (1338 bytes)
	* W0310 19:20:21.919827    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.919827    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692.pem (1338 bytes)
	* W0310 19:20:21.920187    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.920187    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856.pem (1338 bytes)
	* W0310 19:20:21.920498    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.923393    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024.pem (1338 bytes)
	* W0310 19:20:21.923393    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.923393    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160.pem (1338 bytes)
	* W0310 19:20:21.924302    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.924302    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432.pem (1338 bytes)
	* W0310 19:20:21.924302    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.924302    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440.pem (1338 bytes)
	* W0310 19:20:21.925285    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.925285    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452.pem (1338 bytes)
	* W0310 19:20:21.925285    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.925285    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800.pem (1338 bytes)
	* W0310 19:20:21.925285    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.925285    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464.pem (1338 bytes)
	* W0310 19:20:21.926275    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.926275    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748.pem (1338 bytes)
	* W0310 19:20:21.926716    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.926716    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088.pem (1338 bytes)
	* W0310 19:20:21.927276    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.927276    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520.pem (1338 bytes)
	* W0310 19:20:21.927276    1104 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520_empty.pem, impossibly tiny 0 bytes
	* I0310 19:20:21.927276    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca-key.pem (1679 bytes)
	* I0310 19:20:21.927276    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca.pem (1078 bytes)
	* I0310 19:20:21.927276    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\cert.pem (1123 bytes)
	* I0310 19:20:21.928278    1104 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\key.pem (1679 bytes)
	* I0310 19:20:21.928278    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\1476.pem -> /usr/share/ca-certificates/1476.pem
	* I0310 19:20:21.928278    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\232.pem -> /usr/share/ca-certificates/232.pem
	* I0310 19:20:21.928278    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\7160.pem -> /usr/share/ca-certificates/7160.pem
	* I0310 19:20:21.929277    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\6856.pem -> /usr/share/ca-certificates/6856.pem
	* I0310 19:20:21.929277    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\4052.pem -> /usr/share/ca-certificates/4052.pem
	* I0310 19:20:21.929277    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\7432.pem -> /usr/share/ca-certificates/7432.pem
	* I0310 19:20:21.929277    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\2512.pem -> /usr/share/ca-certificates/2512.pem
	* I0310 19:20:21.929277    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\4944.pem -> /usr/share/ca-certificates/4944.pem
	* I0310 19:20:21.930279    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\6492.pem -> /usr/share/ca-certificates/6492.pem
	* I0310 19:20:21.930279    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\1728.pem -> /usr/share/ca-certificates/1728.pem
	* I0310 19:20:21.930279    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\5396.pem -> /usr/share/ca-certificates/5396.pem
	* I0310 19:20:21.930279    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\1984.pem -> /usr/share/ca-certificates/1984.pem
	* I0310 19:20:21.930279    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\6496.pem -> /usr/share/ca-certificates/6496.pem
	* I0310 19:20:21.930279    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\5172.pem -> /usr/share/ca-certificates/5172.pem
	* I0310 19:20:21.931280    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\800.pem -> /usr/share/ca-certificates/800.pem
	* I0310 19:20:21.931280    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\1140.pem -> /usr/share/ca-certificates/1140.pem
	* I0310 19:20:21.931280    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\3056.pem -> /usr/share/ca-certificates/3056.pem
	* I0310 19:20:21.931280    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\7024.pem -> /usr/share/ca-certificates/7024.pem
	* I0310 19:20:21.931280    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\8464.pem -> /usr/share/ca-certificates/8464.pem
	* I0310 19:20:21.931280    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\1156.pem -> /usr/share/ca-certificates/1156.pem
	* I0310 19:20:21.932276    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\8748.pem -> /usr/share/ca-certificates/8748.pem
	* I0310 19:20:21.932276    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\5736.pem -> /usr/share/ca-certificates/5736.pem
	* I0310 19:20:21.932276    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\5040.pem -> /usr/share/ca-certificates/5040.pem
	* I0310 19:20:21.932276    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\7452.pem -> /usr/share/ca-certificates/7452.pem
	* I0310 19:20:21.932276    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\9088.pem -> /usr/share/ca-certificates/9088.pem
	* I0310 19:20:21.933282    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\352.pem -> /usr/share/ca-certificates/352.pem
	* I0310 19:20:21.933282    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\5372.pem -> /usr/share/ca-certificates/5372.pem
	* I0310 19:20:21.933282    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\12056.pem -> /usr/share/ca-certificates/12056.pem
	* I0310 19:20:21.933282    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\6692.pem -> /usr/share/ca-certificates/6692.pem
	* I0310 19:20:21.933282    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\5700.pem -> /usr/share/ca-certificates/5700.pem
	* I0310 19:20:21.934307    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\4588.pem -> /usr/share/ca-certificates/4588.pem
	* I0310 19:20:21.934307    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\6552.pem -> /usr/share/ca-certificates/6552.pem
	* I0310 19:20:21.934307    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	* I0310 19:20:21.934307    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\7440.pem -> /usr/share/ca-certificates/7440.pem
	* I0310 19:20:21.934307    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\9520.pem -> /usr/share/ca-certificates/9520.pem
	* I0310 19:20:21.935289    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\4452.pem -> /usr/share/ca-certificates/4452.pem
	* I0310 19:20:21.935289    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\6368.pem -> /usr/share/ca-certificates/6368.pem
	* I0310 19:20:21.935289    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\3516.pem -> /usr/share/ca-certificates/3516.pem
	* I0310 19:20:21.935289    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\10992.pem -> /usr/share/ca-certificates/10992.pem
	* I0310 19:20:21.935289    1104 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\3920.pem -> /usr/share/ca-certificates/3920.pem
	* I0310 19:20:21.937301    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\functional-20210310191609-6496\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	* I0310 19:20:21.995720    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\functional-20210310191609-6496\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	* I0310 19:20:22.060897    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\functional-20210310191609-6496\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	* I0310 19:20:22.120935    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\functional-20210310191609-6496\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	* I0310 19:20:22.179892    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	* I0310 19:20:22.236248    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	* I0310 19:20:22.299393    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	* I0310 19:20:22.352841    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	* I0310 19:20:22.412686    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1476.pem --> /usr/share/ca-certificates/1476.pem (1338 bytes)
	* I0310 19:20:22.468422    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\232.pem --> /usr/share/ca-certificates/232.pem (1338 bytes)
	* I0310 19:20:22.542222    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7160.pem --> /usr/share/ca-certificates/7160.pem (1338 bytes)
	* I0310 19:20:22.620542    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6856.pem --> /usr/share/ca-certificates/6856.pem (1338 bytes)
	* I0310 19:20:22.691302    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4052.pem --> /usr/share/ca-certificates/4052.pem (1338 bytes)
	* I0310 19:20:22.752394    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7432.pem --> /usr/share/ca-certificates/7432.pem (1338 bytes)
	* I0310 19:20:22.808573    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\2512.pem --> /usr/share/ca-certificates/2512.pem (1338 bytes)
	* I0310 19:20:22.875514    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4944.pem --> /usr/share/ca-certificates/4944.pem (1338 bytes)
	* I0310 19:20:22.934498    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6492.pem --> /usr/share/ca-certificates/6492.pem (1338 bytes)
	* I0310 19:20:22.994812    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1728.pem --> /usr/share/ca-certificates/1728.pem (1338 bytes)
	* I0310 19:20:23.066401    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5396.pem --> /usr/share/ca-certificates/5396.pem (1338 bytes)
	* I0310 19:20:23.132178    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1984.pem --> /usr/share/ca-certificates/1984.pem (1338 bytes)
	* I0310 19:20:23.198412    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6496.pem --> /usr/share/ca-certificates/6496.pem (1338 bytes)
	* I0310 19:20:23.267784    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5172.pem --> /usr/share/ca-certificates/5172.pem (1338 bytes)
	* I0310 19:20:23.328608    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\800.pem --> /usr/share/ca-certificates/800.pem (1338 bytes)
	* I0310 19:20:23.403685    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1140.pem --> /usr/share/ca-certificates/1140.pem (1338 bytes)
	* I0310 19:20:23.466901    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3056.pem --> /usr/share/ca-certificates/3056.pem (1338 bytes)
	* I0310 19:20:23.536201    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7024.pem --> /usr/share/ca-certificates/7024.pem (1338 bytes)
	* I0310 19:20:23.605961    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8464.pem --> /usr/share/ca-certificates/8464.pem (1338 bytes)
	* I0310 19:20:23.673162    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1156.pem --> /usr/share/ca-certificates/1156.pem (1338 bytes)
	* I0310 19:20:23.735744    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8748.pem --> /usr/share/ca-certificates/8748.pem (1338 bytes)
	* I0310 19:20:23.794787    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5736.pem --> /usr/share/ca-certificates/5736.pem (1338 bytes)
	* I0310 19:20:23.853493    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5040.pem --> /usr/share/ca-certificates/5040.pem (1338 bytes)
	* I0310 19:20:23.915910    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	* I0310 19:20:23.983030    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9088.pem --> /usr/share/ca-certificates/9088.pem (1338 bytes)
	* I0310 19:20:24.040956    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\352.pem --> /usr/share/ca-certificates/352.pem (1338 bytes)
	* I0310 19:20:24.118802    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5372.pem --> /usr/share/ca-certificates/5372.pem (1338 bytes)
	* I0310 19:20:24.182235    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\12056.pem --> /usr/share/ca-certificates/12056.pem (1338 bytes)
	* I0310 19:20:24.240965    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6692.pem --> /usr/share/ca-certificates/6692.pem (1338 bytes)
	* I0310 19:20:24.298796    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5700.pem --> /usr/share/ca-certificates/5700.pem (1338 bytes)
	* I0310 19:20:24.357486    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4588.pem --> /usr/share/ca-certificates/4588.pem (1338 bytes)
	* I0310 19:20:24.415807    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6552.pem --> /usr/share/ca-certificates/6552.pem (1338 bytes)
	* I0310 19:20:24.544908    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	* I0310 19:20:24.654715    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7440.pem --> /usr/share/ca-certificates/7440.pem (1338 bytes)
	* I0310 19:20:24.717019    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9520.pem --> /usr/share/ca-certificates/9520.pem (1338 bytes)
	* I0310 19:20:24.775487    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4452.pem --> /usr/share/ca-certificates/4452.pem (1338 bytes)
	* I0310 19:20:24.833761    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6368.pem --> /usr/share/ca-certificates/6368.pem (1338 bytes)
	* I0310 19:20:24.908928    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3516.pem --> /usr/share/ca-certificates/3516.pem (1338 bytes)
	* I0310 19:20:24.968235    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\10992.pem --> /usr/share/ca-certificates/10992.pem (1338 bytes)
	* I0310 19:20:25.031567    1104 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3920.pem --> /usr/share/ca-certificates/3920.pem (1338 bytes)
	* I0310 19:20:25.097741    1104 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	* I0310 19:20:25.151498    1104 ssh_runner.go:149] Run: openssl version
	* I0310 19:20:25.175504    1104 command_runner.go:124] > OpenSSL 1.1.1f  31 Mar 2020
	* I0310 19:20:25.189546    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5040.pem && ln -fs /usr/share/ca-certificates/5040.pem /etc/ssl/certs/5040.pem"
	* I0310 19:20:25.229407    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5040.pem
	* I0310 19:20:25.251852    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Mar 10 08:36 /usr/share/ca-certificates/5040.pem
	* I0310 19:20:25.252635    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 08:36 /usr/share/ca-certificates/5040.pem
	* I0310 19:20:25.263657    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5040.pem
	* I0310 19:20:25.285246    1104 command_runner.go:124] > 51391683
	* I0310 19:20:25.296306    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5040.pem /etc/ssl/certs/51391683.0"
	* I0310 19:20:25.340153    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7452.pem && ln -fs /usr/share/ca-certificates/7452.pem /etc/ssl/certs/7452.pem"
	* I0310 19:20:25.377841    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7452.pem
	* I0310 19:20:25.393939    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Feb 20 00:41 /usr/share/ca-certificates/7452.pem
	* I0310 19:20:25.394972    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 20 00:41 /usr/share/ca-certificates/7452.pem
	* I0310 19:20:25.405177    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7452.pem
	* I0310 19:20:25.426265    1104 command_runner.go:124] > 51391683
	* I0310 19:20:25.437499    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7452.pem /etc/ssl/certs/51391683.0"
	* I0310 19:20:25.488218    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9088.pem && ln -fs /usr/share/ca-certificates/9088.pem /etc/ssl/certs/9088.pem"
	* I0310 19:20:25.551417    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9088.pem
	* I0310 19:20:25.573561    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan  7 00:22 /usr/share/ca-certificates/9088.pem
	* I0310 19:20:25.575125    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 00:22 /usr/share/ca-certificates/9088.pem
	* I0310 19:20:25.585015    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9088.pem
	* I0310 19:20:25.621191    1104 command_runner.go:124] > 51391683
	* I0310 19:20:25.632359    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9088.pem /etc/ssl/certs/51391683.0"
	* I0310 19:20:25.683366    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/352.pem && ln -fs /usr/share/ca-certificates/352.pem /etc/ssl/certs/352.pem"
	* I0310 19:20:25.752046    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/352.pem
	* I0310 19:20:25.773323    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Feb 12 14:51 /usr/share/ca-certificates/352.pem
	* I0310 19:20:25.773323    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 12 14:51 /usr/share/ca-certificates/352.pem
	* I0310 19:20:25.786112    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/352.pem
	* I0310 19:20:25.809792    1104 command_runner.go:124] > 51391683
	* I0310 19:20:25.821014    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/352.pem /etc/ssl/certs/51391683.0"
	* I0310 19:20:25.866630    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5372.pem && ln -fs /usr/share/ca-certificates/5372.pem /etc/ssl/certs/5372.pem"
	* I0310 19:20:25.923073    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5372.pem
	* I0310 19:20:25.939673    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 23 00:40 /usr/share/ca-certificates/5372.pem
	* I0310 19:20:25.939673    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 23 00:40 /usr/share/ca-certificates/5372.pem
	* I0310 19:20:25.951785    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5372.pem
	* I0310 19:20:25.978374    1104 command_runner.go:124] > 51391683
	* I0310 19:20:25.989667    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5372.pem /etc/ssl/certs/51391683.0"
	* I0310 19:20:26.033911    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12056.pem && ln -fs /usr/share/ca-certificates/12056.pem /etc/ssl/certs/12056.pem"
	* I0310 19:20:26.086233    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/12056.pem
	* I0310 19:20:26.107021    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Mar  6 07:21 /usr/share/ca-certificates/12056.pem
	* I0310 19:20:26.107928    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  6 07:21 /usr/share/ca-certificates/12056.pem
	* I0310 19:20:26.128638    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12056.pem
	* I0310 19:20:26.157297    1104 command_runner.go:124] > 51391683
	* I0310 19:20:26.175789    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12056.pem /etc/ssl/certs/51391683.0"
	* I0310 19:20:26.214625    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6692.pem && ln -fs /usr/share/ca-certificates/6692.pem /etc/ssl/certs/6692.pem"
	* I0310 19:20:26.266061    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6692.pem
	* I0310 19:20:26.286200    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 14 20:42 /usr/share/ca-certificates/6692.pem
	* I0310 19:20:26.286354    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 20:42 /usr/share/ca-certificates/6692.pem
	* I0310 19:20:26.306992    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6692.pem
	* I0310 19:20:26.330020    1104 command_runner.go:124] > 51391683
	* I0310 19:20:26.340210    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6692.pem /etc/ssl/certs/51391683.0"
	* I0310 19:20:26.387361    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5700.pem && ln -fs /usr/share/ca-certificates/5700.pem /etc/ssl/certs/5700.pem"
	* I0310 19:20:26.437147    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5700.pem
	* I0310 19:20:26.459569    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Mar  1 19:58 /usr/share/ca-certificates/5700.pem
	* I0310 19:20:26.460726    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  1 19:58 /usr/share/ca-certificates/5700.pem
	* I0310 19:20:26.472388    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5700.pem
	* I0310 19:20:26.495932    1104 command_runner.go:124] > 51391683
	* I0310 19:20:26.509702    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5700.pem /etc/ssl/certs/51391683.0"
	* I0310 19:20:26.555535    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4588.pem && ln -fs /usr/share/ca-certificates/4588.pem /etc/ssl/certs/4588.pem"
	* I0310 19:20:26.596033    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4588.pem
	* I0310 19:20:26.614604    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Mar  3 21:41 /usr/share/ca-certificates/4588.pem
	* I0310 19:20:26.616909    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  3 21:41 /usr/share/ca-certificates/4588.pem
	* I0310 19:20:26.626667    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4588.pem
	* I0310 19:20:26.646981    1104 command_runner.go:124] > 51391683
	* I0310 19:20:26.661159    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4588.pem /etc/ssl/certs/51391683.0"
	* I0310 19:20:26.717078    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6552.pem && ln -fs /usr/share/ca-certificates/6552.pem /etc/ssl/certs/6552.pem"
	* I0310 19:20:26.768313    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6552.pem
	* I0310 19:20:26.786182    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 19 22:08 /usr/share/ca-certificates/6552.pem
	* I0310 19:20:26.786731    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 19 22:08 /usr/share/ca-certificates/6552.pem
	* I0310 19:20:26.801199    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6552.pem
	* I0310 19:20:26.823913    1104 command_runner.go:124] > 51391683
	* I0310 19:20:26.842434    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6552.pem /etc/ssl/certs/51391683.0"
	* I0310 19:20:26.896342    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	* I0310 19:20:26.954628    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	* I0310 19:20:26.974568    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1111 Jan  5 23:33 /usr/share/ca-certificates/minikubeCA.pem
	* I0310 19:20:26.975465    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Jan  5 23:33 /usr/share/ca-certificates/minikubeCA.pem
	* I0310 19:20:26.991618    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	* I0310 19:20:27.017830    1104 command_runner.go:124] > b5213941
	* I0310 19:20:27.029166    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	* I0310 19:20:27.074705    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7440.pem && ln -fs /usr/share/ca-certificates/7440.pem /etc/ssl/certs/7440.pem"
	* I0310 19:20:27.123796    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7440.pem
	* I0310 19:20:27.135987    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Feb 13 14:39 /usr/share/ca-certificates/7440.pem
	* I0310 19:20:27.141012    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 13 14:39 /usr/share/ca-certificates/7440.pem
	* I0310 19:20:27.151569    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7440.pem
	* I0310 19:20:27.173876    1104 command_runner.go:124] > 51391683
	* I0310 19:20:27.193347    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7440.pem /etc/ssl/certs/51391683.0"
	* I0310 19:20:27.242291    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9520.pem && ln -fs /usr/share/ca-certificates/9520.pem /etc/ssl/certs/9520.pem"
	* I0310 19:20:27.290296    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9520.pem
	* I0310 19:20:27.312265    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Feb 19 14:54 /usr/share/ca-certificates/9520.pem
	* I0310 19:20:27.312718    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 14:54 /usr/share/ca-certificates/9520.pem
	* I0310 19:20:27.323668    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9520.pem
	* I0310 19:20:27.348268    1104 command_runner.go:124] > 51391683
	* I0310 19:20:27.361304    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9520.pem /etc/ssl/certs/51391683.0"
	* I0310 19:20:27.409569    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4452.pem && ln -fs /usr/share/ca-certificates/4452.pem /etc/ssl/certs/4452.pem"
	* I0310 19:20:27.451606    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4452.pem
	* I0310 19:20:27.469144    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 22 23:48 /usr/share/ca-certificates/4452.pem
	* I0310 19:20:27.470029    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 22 23:48 /usr/share/ca-certificates/4452.pem
	* I0310 19:20:27.479204    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4452.pem
	* I0310 19:20:27.502293    1104 command_runner.go:124] > 51391683
	* I0310 19:20:27.514638    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4452.pem /etc/ssl/certs/51391683.0"
	* I0310 19:20:27.568644    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6368.pem && ln -fs /usr/share/ca-certificates/6368.pem /etc/ssl/certs/6368.pem"
	* I0310 19:20:27.620166    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6368.pem
	* I0310 19:20:27.643629    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 14 19:11 /usr/share/ca-certificates/6368.pem
	* I0310 19:20:27.643629    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 19:11 /usr/share/ca-certificates/6368.pem
	* I0310 19:20:27.656214    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6368.pem
	* I0310 19:20:27.678949    1104 command_runner.go:124] > 51391683
	* I0310 19:20:27.690972    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6368.pem /etc/ssl/certs/51391683.0"
	* I0310 19:20:27.732236    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3516.pem && ln -fs /usr/share/ca-certificates/3516.pem /etc/ssl/certs/3516.pem"
	* I0310 19:20:27.783413    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3516.pem
	* I0310 19:20:27.805703    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 15 19:10 /usr/share/ca-certificates/3516.pem
	* I0310 19:20:27.806648    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 19:10 /usr/share/ca-certificates/3516.pem
	* I0310 19:20:27.833422    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3516.pem
	* I0310 19:20:27.858581    1104 command_runner.go:124] > 51391683
	* I0310 19:20:27.870339    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3516.pem /etc/ssl/certs/51391683.0"
	* I0310 19:20:27.925146    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10992.pem && ln -fs /usr/share/ca-certificates/10992.pem /etc/ssl/certs/10992.pem"
	* I0310 19:20:27.973890    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/10992.pem
	* I0310 19:20:27.992859    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 20 21:44 /usr/share/ca-certificates/10992.pem
	* I0310 19:20:27.993474    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 21:44 /usr/share/ca-certificates/10992.pem
	* I0310 19:20:28.002264    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10992.pem
	* I0310 19:20:28.026385    1104 command_runner.go:124] > 51391683
	* I0310 19:20:28.037860    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10992.pem /etc/ssl/certs/51391683.0"
	* I0310 19:20:28.091453    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3920.pem && ln -fs /usr/share/ca-certificates/3920.pem /etc/ssl/certs/3920.pem"
	* I0310 19:20:28.142708    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3920.pem
	* I0310 19:20:28.177492    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Feb 19 22:06 /usr/share/ca-certificates/3920.pem
	* I0310 19:20:28.177622    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 22:06 /usr/share/ca-certificates/3920.pem
	* I0310 19:20:28.204326    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3920.pem
	* I0310 19:20:28.227291    1104 command_runner.go:124] > 51391683
	* I0310 19:20:28.249477    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3920.pem /etc/ssl/certs/51391683.0"
	* I0310 19:20:28.291846    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1476.pem && ln -fs /usr/share/ca-certificates/1476.pem /etc/ssl/certs/1476.pem"
	* I0310 19:20:28.334198    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1476.pem
	* I0310 19:20:28.355492    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan  5 23:50 /usr/share/ca-certificates/1476.pem
	* I0310 19:20:28.355492    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:50 /usr/share/ca-certificates/1476.pem
	* I0310 19:20:28.366532    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1476.pem
	* I0310 19:20:28.389486    1104 command_runner.go:124] > 51391683
	* I0310 19:20:28.400690    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1476.pem /etc/ssl/certs/51391683.0"
	* I0310 19:20:28.454554    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/232.pem && ln -fs /usr/share/ca-certificates/232.pem /etc/ssl/certs/232.pem"
	* I0310 19:20:28.503716    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/232.pem
	* I0310 19:20:28.527445    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 28 02:13 /usr/share/ca-certificates/232.pem
	* I0310 19:20:28.527681    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 28 02:13 /usr/share/ca-certificates/232.pem
	* I0310 19:20:28.537418    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/232.pem
	* I0310 19:20:28.564102    1104 command_runner.go:124] > 51391683
	* I0310 19:20:28.580229    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/232.pem /etc/ssl/certs/51391683.0"
	* I0310 19:20:28.630430    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7160.pem && ln -fs /usr/share/ca-certificates/7160.pem /etc/ssl/certs/7160.pem"
	* I0310 19:20:28.682446    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7160.pem
	* I0310 19:20:28.698194    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 12 04:51 /usr/share/ca-certificates/7160.pem
	* I0310 19:20:28.699181    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 12 04:51 /usr/share/ca-certificates/7160.pem
	* I0310 19:20:28.709712    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7160.pem
	* I0310 19:20:28.728457    1104 command_runner.go:124] > 51391683
	* I0310 19:20:28.740450    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7160.pem /etc/ssl/certs/51391683.0"
	* I0310 19:20:28.781219    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6856.pem && ln -fs /usr/share/ca-certificates/6856.pem /etc/ssl/certs/6856.pem"
	* I0310 19:20:28.827657    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6856.pem
	* I0310 19:20:28.846836    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan  6 00:21 /usr/share/ca-certificates/6856.pem
	* I0310 19:20:28.847638    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 00:21 /usr/share/ca-certificates/6856.pem
	* I0310 19:20:28.859607    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6856.pem
	* I0310 19:20:28.886613    1104 command_runner.go:124] > 51391683
	* I0310 19:20:28.901694    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6856.pem /etc/ssl/certs/51391683.0"
	* I0310 19:20:28.942402    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4052.pem && ln -fs /usr/share/ca-certificates/4052.pem /etc/ssl/certs/4052.pem"
	* I0310 19:20:29.003667    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4052.pem
	* I0310 19:20:29.027623    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Mar  4 18:40 /usr/share/ca-certificates/4052.pem
	* I0310 19:20:29.027839    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 18:40 /usr/share/ca-certificates/4052.pem
	* I0310 19:20:29.046795    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4052.pem
	* I0310 19:20:29.075348    1104 command_runner.go:124] > 51391683
	* I0310 19:20:29.088441    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4052.pem /etc/ssl/certs/51391683.0"
	* I0310 19:20:29.134239    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7432.pem && ln -fs /usr/share/ca-certificates/7432.pem /etc/ssl/certs/7432.pem"
	* I0310 19:20:29.174637    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7432.pem
	* I0310 19:20:29.193375    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 20 17:58 /usr/share/ca-certificates/7432.pem
	* I0310 19:20:29.193841    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 17:58 /usr/share/ca-certificates/7432.pem
	* I0310 19:20:29.205760    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7432.pem
	* I0310 19:20:29.231037    1104 command_runner.go:124] > 51391683
	* I0310 19:20:29.252734    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7432.pem /etc/ssl/certs/51391683.0"
	* I0310 19:20:29.303636    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2512.pem && ln -fs /usr/share/ca-certificates/2512.pem /etc/ssl/certs/2512.pem"
	* I0310 19:20:29.351243    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2512.pem
	* I0310 19:20:29.372162    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan  5 23:32 /usr/share/ca-certificates/2512.pem
	* I0310 19:20:29.372427    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:32 /usr/share/ca-certificates/2512.pem
	* I0310 19:20:29.384593    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2512.pem
	* I0310 19:20:29.408389    1104 command_runner.go:124] > 51391683
	* I0310 19:20:29.425332    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2512.pem /etc/ssl/certs/51391683.0"
	* I0310 19:20:29.478631    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4944.pem && ln -fs /usr/share/ca-certificates/4944.pem /etc/ssl/certs/4944.pem"
	* I0310 19:20:29.542904    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4944.pem
	* I0310 19:20:29.564288    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Mar  9 23:40 /usr/share/ca-certificates/4944.pem
	* I0310 19:20:29.564288    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  9 23:40 /usr/share/ca-certificates/4944.pem
	* I0310 19:20:29.576178    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4944.pem
	* I0310 19:20:29.600122    1104 command_runner.go:124] > 51391683
	* I0310 19:20:29.614978    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4944.pem /etc/ssl/certs/51391683.0"
	* I0310 19:20:29.674630    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6492.pem && ln -fs /usr/share/ca-certificates/6492.pem /etc/ssl/certs/6492.pem"
	* I0310 19:20:29.719475    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6492.pem
	* I0310 19:20:29.735508    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan  6 01:11 /usr/share/ca-certificates/6492.pem
	* I0310 19:20:29.735508    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 01:11 /usr/share/ca-certificates/6492.pem
	* I0310 19:20:29.746109    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6492.pem
	* I0310 19:20:29.769796    1104 command_runner.go:124] > 51391683
	* I0310 19:20:29.779896    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6492.pem /etc/ssl/certs/51391683.0"
	* I0310 19:20:29.837217    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1140.pem && ln -fs /usr/share/ca-certificates/1140.pem /etc/ssl/certs/1140.pem"
	* I0310 19:20:29.895786    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1140.pem
	* I0310 19:20:29.914731    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 20 02:25 /usr/share/ca-certificates/1140.pem
	* I0310 19:20:29.914731    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 02:25 /usr/share/ca-certificates/1140.pem
	* I0310 19:20:29.928920    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1140.pem
	* I0310 19:20:29.953570    1104 command_runner.go:124] > 51391683
	* I0310 19:20:29.965474    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1140.pem /etc/ssl/certs/51391683.0"
	* I0310 19:20:30.007339    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1728.pem && ln -fs /usr/share/ca-certificates/1728.pem /etc/ssl/certs/1728.pem"
	* I0310 19:20:30.051845    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1728.pem
	* I0310 19:20:30.075673    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan  6 20:57 /usr/share/ca-certificates/1728.pem
	* I0310 19:20:30.075673    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 20:57 /usr/share/ca-certificates/1728.pem
	* I0310 19:20:30.088993    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1728.pem
	* I0310 19:20:30.111454    1104 command_runner.go:124] > 51391683
	* I0310 19:20:30.124929    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0"
	* I0310 19:20:30.178848    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5396.pem && ln -fs /usr/share/ca-certificates/5396.pem /etc/ssl/certs/5396.pem"
	* I0310 19:20:30.222675    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5396.pem
	* I0310 19:20:30.243526    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Mar  8 23:38 /usr/share/ca-certificates/5396.pem
	* I0310 19:20:30.243526    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  8 23:38 /usr/share/ca-certificates/5396.pem
	* I0310 19:20:30.257157    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5396.pem
	* I0310 19:20:30.281586    1104 command_runner.go:124] > 51391683
	* I0310 19:20:30.293996    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5396.pem /etc/ssl/certs/51391683.0"
	* I0310 19:20:30.348930    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1984.pem && ln -fs /usr/share/ca-certificates/1984.pem /etc/ssl/certs/1984.pem"
	* I0310 19:20:30.396524    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1984.pem
	* I0310 19:20:30.417381    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan  6 21:55 /usr/share/ca-certificates/1984.pem
	* I0310 19:20:30.418535    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 21:55 /usr/share/ca-certificates/1984.pem
	* I0310 19:20:30.428140    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1984.pem
	* I0310 19:20:30.452115    1104 command_runner.go:124] > 51391683
	* I0310 19:20:30.469148    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1984.pem /etc/ssl/certs/51391683.0"
	* I0310 19:20:30.510438    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6496.pem && ln -fs /usr/share/ca-certificates/6496.pem /etc/ssl/certs/6496.pem"
	* I0310 19:20:30.581399    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6496.pem
	* I0310 19:20:30.601839    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Mar 10 19:16 /usr/share/ca-certificates/6496.pem
	* I0310 19:20:30.601839    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 19:16 /usr/share/ca-certificates/6496.pem
	* I0310 19:20:30.614102    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6496.pem
	* I0310 19:20:30.635931    1104 command_runner.go:124] > 51391683
	* I0310 19:20:30.650845    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6496.pem /etc/ssl/certs/51391683.0"
	* I0310 19:20:30.701933    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5172.pem && ln -fs /usr/share/ca-certificates/5172.pem /etc/ssl/certs/5172.pem"
	* I0310 19:20:30.743686    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5172.pem
	* I0310 19:20:30.769565    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 26 21:25 /usr/share/ca-certificates/5172.pem
	* I0310 19:20:30.770002    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 26 21:25 /usr/share/ca-certificates/5172.pem
	* I0310 19:20:30.791407    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5172.pem
	* I0310 19:20:30.819980    1104 command_runner.go:124] > 51391683
	* I0310 19:20:30.828676    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5172.pem /etc/ssl/certs/51391683.0"
	* I0310 19:20:30.880910    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/800.pem && ln -fs /usr/share/ca-certificates/800.pem /etc/ssl/certs/800.pem"
	* I0310 19:20:30.932724    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/800.pem
	* I0310 19:20:30.955559    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Feb 24 01:48 /usr/share/ca-certificates/800.pem
	* I0310 19:20:30.955698    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 24 01:48 /usr/share/ca-certificates/800.pem
	* I0310 19:20:30.965938    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/800.pem
	* I0310 19:20:30.991249    1104 command_runner.go:124] > 51391683
	* I0310 19:20:31.003951    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/800.pem /etc/ssl/certs/51391683.0"
	* I0310 19:20:31.047848    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3056.pem && ln -fs /usr/share/ca-certificates/3056.pem /etc/ssl/certs/3056.pem"
	* I0310 19:20:31.101941    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3056.pem
	* I0310 19:20:31.121640    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan  5 23:11 /usr/share/ca-certificates/3056.pem
	* I0310 19:20:31.121640    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:11 /usr/share/ca-certificates/3056.pem
	* I0310 19:20:31.130668    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3056.pem
	* I0310 19:20:31.152695    1104 command_runner.go:124] > 51391683
	* I0310 19:20:31.172946    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3056.pem /etc/ssl/certs/51391683.0"
	* I0310 19:20:31.224611    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7024.pem && ln -fs /usr/share/ca-certificates/7024.pem /etc/ssl/certs/7024.pem"
	* I0310 19:20:31.283438    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7024.pem
	* I0310 19:20:31.304439    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 20 23:11 /usr/share/ca-certificates/7024.pem
	* I0310 19:20:31.305213    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 23:11 /usr/share/ca-certificates/7024.pem
	* I0310 19:20:31.316626    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7024.pem
	* I0310 19:20:31.349202    1104 command_runner.go:124] > 51391683
	* I0310 19:20:31.356662    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7024.pem /etc/ssl/certs/51391683.0"
	* I0310 19:20:31.409375    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8464.pem && ln -fs /usr/share/ca-certificates/8464.pem /etc/ssl/certs/8464.pem"
	* I0310 19:20:31.455512    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8464.pem
	* I0310 19:20:31.476820    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 15 02:32 /usr/share/ca-certificates/8464.pem
	* I0310 19:20:31.476820    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:32 /usr/share/ca-certificates/8464.pem
	* I0310 19:20:31.492373    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8464.pem
	* I0310 19:20:31.516038    1104 command_runner.go:124] > 51391683
	* I0310 19:20:31.523436    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8464.pem /etc/ssl/certs/51391683.0"
	* I0310 19:20:31.569946    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1156.pem && ln -fs /usr/share/ca-certificates/1156.pem /etc/ssl/certs/1156.pem"
	* I0310 19:20:31.614400    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1156.pem
	* I0310 19:20:31.634783    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Mar  4 00:26 /usr/share/ca-certificates/1156.pem
	* I0310 19:20:31.636579    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 00:26 /usr/share/ca-certificates/1156.pem
	* I0310 19:20:31.656495    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1156.pem
	* I0310 19:20:31.680690    1104 command_runner.go:124] > 51391683
	* I0310 19:20:31.690595    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1156.pem /etc/ssl/certs/51391683.0"
	* I0310 19:20:31.728406    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8748.pem && ln -fs /usr/share/ca-certificates/8748.pem /etc/ssl/certs/8748.pem"
	* I0310 19:20:31.767329    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8748.pem
	* I0310 19:20:31.784553    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan  7 19:09 /usr/share/ca-certificates/8748.pem
	* I0310 19:20:31.785124    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 19:09 /usr/share/ca-certificates/8748.pem
	* I0310 19:20:31.796392    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8748.pem
	* I0310 19:20:31.820220    1104 command_runner.go:124] > 51391683
	* I0310 19:20:31.834945    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8748.pem /etc/ssl/certs/51391683.0"
	* I0310 19:20:31.892760    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5736.pem && ln -fs /usr/share/ca-certificates/5736.pem /etc/ssl/certs/5736.pem"
	* I0310 19:20:31.938226    1104 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5736.pem
	* I0310 19:20:31.959499    1104 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Feb 25 23:18 /usr/share/ca-certificates/5736.pem
	* I0310 19:20:31.959499    1104 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 25 23:18 /usr/share/ca-certificates/5736.pem
	* I0310 19:20:31.970637    1104 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5736.pem
	* I0310 19:20:31.991624    1104 command_runner.go:124] > 51391683
	* I0310 19:20:32.004849    1104 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5736.pem /etc/ssl/certs/51391683.0"
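	(The log loop above repeats one idempotent sequence per cached CA file: stat the cert, compute its OpenSSL subject hash with `openssl x509 -hash -noout`, then create a `<hash>.0` symlink in the trust directory, which is how the TLS stack looks CAs up. A minimal sketch of that sequence, using throwaway `/tmp` paths and a freshly generated self-signed cert instead of minikube's real `/etc/ssl/certs` layout:)

```shell
# Sketch only: generate a throwaway self-signed cert to stand in for the
# /usr/share/ca-certificates/*.pem files seen in the log above.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=minikube-sketch" -keyout /tmp/k.pem -out /tmp/c.pem 2>/dev/null

mkdir -p /tmp/certs
cert=/tmp/c.pem

# OpenSSL locates CA certificates by subject hash, so the trust-dir entry
# must be named <subject-hash>.0 (the log shows 51391683 for every cert
# because those test certs all share the same subject).
hash=$(openssl x509 -hash -noout -in "$cert")

# Idempotent link step, mirroring the `test -L ... || ln -fs ...` in the log.
ln -fs "$cert" "/tmp/certs/${hash}.0"
```

(In the real run this is done over SSH with `sudo`, against `/etc/ssl/certs`; the paths here are illustrative.)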
	* I0310 19:20:32.042038    1104 kubeadm.go:385] StartCluster: {Name:functional-20210310191609-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:functional-20210310191609-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.97 Port:8441 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 19:20:32.049297    1104 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format=
	* I0310 19:20:32.194599    1104 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	* I0310 19:20:32.225317    1104 command_runner.go:124] > /var/lib/kubelet/config.yaml
	* I0310 19:20:32.225317    1104 command_runner.go:124] > /var/lib/kubelet/kubeadm-flags.env
	* I0310 19:20:32.225317    1104 command_runner.go:124] > /var/lib/minikube/etcd:
	* I0310 19:20:32.225317    1104 command_runner.go:124] > member
	* I0310 19:20:32.225317    1104 kubeadm.go:396] found existing configuration files, will attempt cluster restart
	* I0310 19:20:32.226709    1104 kubeadm.go:594] restartCluster start
	* I0310 19:20:32.238138    1104 ssh_runner.go:149] Run: sudo test -d /data/minikube
	* I0310 19:20:32.269069    1104 kubeadm.go:125] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* I0310 19:20:32.285052    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20210310191609-6496
	* I0310 19:20:32.784719    1104 kubeconfig.go:93] found "functional-20210310191609-6496" server: "https://127.0.0.1:55006"
	* I0310 19:20:32.788216    1104 kapi.go:59] client config for functional-20210310191609-6496: &rest.Config{Host:"https://127.0.0.1:55006", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins\\.minikube\\profiles\\functional-20210310191609-6496\\client.crt", KeyFile:"C:\\Users\\jenkins\\.minikube\\profiles\\functional-20210310191609-6496\\client.key", CAFile:"C:\\Users\\jenkins\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e81020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)}
	* I0310 19:20:32.824078    1104 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	* I0310 19:20:32.856891    1104 api_server.go:146] Checking apiserver status ...
	* I0310 19:20:32.868278    1104 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 19:20:32.956567    1104 command_runner.go:124] > 2613
	* I0310 19:20:32.968758    1104 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/2613/cgroup
	* I0310 19:20:33.003381    1104 command_runner.go:124] > 7:freezer:/docker/f0e67f0e0197d1e3ab1db5142ae7f8b4a9b85bcae654c8d5257d095025940939/kubepods/burstable/pode8037bc9c9e790859c2781e18eb47f05/3ef151b16cbf4ad453e799eb810ca3381c9116ca769d348e7a6a7923cd4a2e1d
	* I0310 19:20:33.007748    1104 api_server.go:162] apiserver freezer: "7:freezer:/docker/f0e67f0e0197d1e3ab1db5142ae7f8b4a9b85bcae654c8d5257d095025940939/kubepods/burstable/pode8037bc9c9e790859c2781e18eb47f05/3ef151b16cbf4ad453e799eb810ca3381c9116ca769d348e7a6a7923cd4a2e1d"
	* I0310 19:20:33.023681    1104 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/f0e67f0e0197d1e3ab1db5142ae7f8b4a9b85bcae654c8d5257d095025940939/kubepods/burstable/pode8037bc9c9e790859c2781e18eb47f05/3ef151b16cbf4ad453e799eb810ca3381c9116ca769d348e7a6a7923cd4a2e1d/freezer.state
	* I0310 19:20:33.056574    1104 command_runner.go:124] > THAWED
	* I0310 19:20:33.057617    1104 api_server.go:184] freezer state: "THAWED"
	* I0310 19:20:33.057762    1104 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55006/healthz ...
	* I0310 19:20:33.088882    1104 api_server.go:241] https://127.0.0.1:55006/healthz returned 200:
	* ok
	* I0310 19:20:33.142257    1104 system_pods.go:84] 7 kube-system pods found
	* I0310 19:20:33.142257    1104 system_pods.go:87] "coredns-74ff55c5b-62r9g" [022268ac-67b5-4170-a85a-465abd0c06b3] Running
	* I0310 19:20:33.142476    1104 system_pods.go:87] "etcd-functional-20210310191609-6496" [8cd656a0-0ce1-4093-8b84-f741cdac3d9c] Running
	* I0310 19:20:33.142476    1104 system_pods.go:87] "kube-apiserver-functional-20210310191609-6496" [e016d4fc-2c83-4884-880e-6dc489c823b2] Running
	* I0310 19:20:33.142476    1104 system_pods.go:87] "kube-controller-manager-functional-20210310191609-6496" [c6b12efd-d118-4e47-bf42-80bdaf433269] Running
	* I0310 19:20:33.142476    1104 system_pods.go:87] "kube-proxy-l9bb9" [8158730c-c5c3-4b01-93d9-ebc43ef2189c] Running
	* I0310 19:20:33.142476    1104 system_pods.go:87] "kube-scheduler-functional-20210310191609-6496" [6f7854fa-e162-4728-a3be-6ad9fd54e756] Running
	* I0310 19:20:33.142476    1104 system_pods.go:87] "storage-provisioner" [b3c89307-430b-4b9e-bf19-ea94207564fe] Running
	* I0310 19:20:33.147601    1104 api_server.go:137] control plane version: v1.20.2
	* I0310 19:20:33.148267    1104 kubeadm.go:588] The running cluster does not require reconfiguration: 127.0.0.1
	* I0310 19:20:33.148267    1104 kubeadm.go:641] Taking a shortcut, as the cluster seems to be properly configured
	* I0310 19:20:33.148267    1104 kubeadm.go:598] restartCluster took 921.5597ms
	* I0310 19:20:33.148267    1104 kubeadm.go:387] StartCluster complete in 1.1062304s
	* I0310 19:20:33.148267    1104 settings.go:142] acquiring lock: {Name:mk153ab5d002fd4991700e22f3eda9a43ee295f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 19:20:33.148624    1104 settings.go:150] Updating kubeconfig:  C:\Users\jenkins/.kube/config
	* I0310 19:20:33.150393    1104 lock.go:36] WriteFile acquiring C:\Users\jenkins/.kube/config: {Name:mk815881434fcb75cb1a8bc7f2c24cf067660bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 19:20:33.169445    1104 kapi.go:59] client config for functional-20210310191609-6496: &rest.Config{Host:"https://127.0.0.1:55006", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins\\.minikube\\profiles\\functional-20210310191609-6496\\client.crt", KeyFile:"C:\\Users\\jenkins\\.minikube\\profiles\\functional-20210310191609-6496\\client.key", CAFile:"C:\\Users\\jenkins\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e81020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)}
	* I0310 19:20:33.200528    1104 kapi.go:233] deployment "coredns" in namespace "kube-system" and context "functional-20210310191609-6496" rescaled to 1
	* I0310 19:20:33.200743    1104 start.go:203] Will wait 6m0s for node up to 
	* I0310 19:20:33.202665    1104 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219145454-9520 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520
	* I0310 19:20:33.205767    1104 out.go:129] * Verifying Kubernetes components...
	* I0310 19:20:33.202665    1104 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115023213-8464 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464
	* I0310 19:20:33.202837    1104 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120022529-1140 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140
	* I0310 19:20:33.202837    1104 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107002220-9088 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088
	* I0310 19:20:33.202837    1104 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210224014800-800 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800
	* I0310 19:20:33.202837    1104 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304002630-1156 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156
	* I0310 19:20:33.202837    1104 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210112045103-7160 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160
	* I0310 19:20:33.202837    1104 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120175851-7432 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432
	* I0310 19:20:33.202837    1104 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120214442-10992 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992
	* I0310 19:20:33.202837    1104 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210105233232-2512 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512
	* I0310 19:20:33.202837    1104 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210123004019-5372 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372
	* I0310 19:20:33.202837    1104 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210303214129-4588 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588
	* I0310 19:20:33.202837    1104 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310083645-5040 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040
	* I0310 19:20:33.202837    1104 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304184021-4052 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052
	* I0310 19:20:33.202837    1104 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210301195830-5700 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700
	* I0310 19:20:33.202837    1104 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210306072141-12056 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056
	* I0310 19:20:33.202837    1104 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210225231842-5736 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736
	* I0310 19:20:33.202837    1104 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210308233820-5396 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396
	* I0310 19:20:33.202837    1104 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106002159-6856 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856
	* I0310 19:20:33.202837    1104 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115191024-3516 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516
	* I0310 19:20:33.202837    1104 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107190945-8748 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748
	* I0310 19:20:33.202837    1104 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210114204234-6692 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692
	* I0310 19:20:33.202837    1104 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120231122-7024 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024
	* I0310 19:20:33.202981    1104 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210309234032-4944 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944
	* I0310 19:20:33.202981    1104 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210128021318-232 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232
	* I0310 19:20:33.202981    1104 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106011107-6492 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492
	* I0310 19:20:33.202981    1104 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106215525-1984 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984
	* I0310 19:20:33.202981    1104 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210213143925-7440 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440
	* I0310 19:20:33.202981    1104 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210126212539-5172 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172
	* I0310 19:20:33.202981    1104 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210119220838-6552 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552
	* I0310 19:20:33.202981    1104 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210212145109-352 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352
	* I0310 19:20:33.202981    1104 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219220622-3920 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920
	* I0310 19:20:33.202981    1104 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210220004129-7452 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452
	* I0310 19:20:33.203154    1104 addons.go:381] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
	* I0310 19:20:33.243956    1104 addons.go:58] Setting storage-provisioner=true in profile "functional-20210310191609-6496"
	* I0310 19:20:33.244170    1104 addons.go:134] Setting addon storage-provisioner=true in "functional-20210310191609-6496"
	* W0310 19:20:33.244170    1104 addons.go:143] addon storage-provisioner should already be in state true
	* I0310 19:20:33.246490    1104 addons.go:58] Setting default-storageclass=true in profile "functional-20210310191609-6496"
	* I0310 19:20:33.246701    1104 addons.go:284] enableOrDisableStorageClasses default-storageclass=true on "functional-20210310191609-6496"
	* I0310 19:20:33.251164    1104 host.go:66] Checking if "functional-20210310191609-6496" exists ...
	* I0310 19:20:33.266087    1104 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	* I0310 19:20:33.290626    1104 cli_runner.go:115] Run: docker container inspect functional-20210310191609-6496 --format=
	* I0310 19:20:33.294538    1104 cli_runner.go:115] Run: docker container inspect functional-20210310191609-6496 --format=
	* I0310 19:20:33.344195    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20210310191609-6496
	* I0310 19:20:33.800374    1104 cache.go:93] acquiring lock: {Name:mk30e0addf8d941e729fce2e9e6e58f4831fa9bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:20:33.802916    1104 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 exists
	* I0310 19:20:33.802916    1104 cache.go:93] acquiring lock: {Name:mkfe8ccab311cf6d2666a7508a8e979857b9770b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:20:33.803440    1104 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 exists
	* I0310 19:20:33.803440    1104 cache.go:82] cache image "minikube-local-cache-test:functional-20210115023213-8464" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210115023213-8464" took 597.5166ms
	* I0310 19:20:33.817382    1104 cache.go:66] save to tar file minikube-local-cache-test:functional-20210115023213-8464 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 succeeded
	* I0310 19:20:33.822394    1104 cache.go:82] cache image "minikube-local-cache-test:functional-20210219145454-9520" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210219145454-9520" took 616.6282ms
	* I0310 19:20:33.822394    1104 cache.go:66] save to tar file minikube-local-cache-test:functional-20210219145454-9520 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 succeeded
	* I0310 19:20:33.902400    1104 cache.go:93] acquiring lock: {Name:mkbc5485bf0e792523a58cf470a7622695547966 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:20:33.902400    1104 cache.go:93] acquiring lock: {Name:mkad0f7b57f74c6c730129cb06800211b2e1dbab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:20:33.903414    1104 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 exists
	* I0310 19:20:33.904403    1104 cache.go:82] cache image "minikube-local-cache-test:functional-20210120022529-1140" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120022529-1140" took 698.1821ms
	* I0310 19:20:33.904403    1104 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120022529-1140 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 succeeded
	* I0310 19:20:33.904403    1104 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 exists
	* I0310 19:20:33.905396    1104 cache.go:82] cache image "minikube-local-cache-test:functional-20210304184021-4052" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210304184021-4052" took 692.5652ms
	* I0310 19:20:33.905396    1104 cache.go:66] save to tar file minikube-local-cache-test:functional-20210304184021-4052 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 succeeded
	* I0310 19:20:33.923418    1104 cache.go:93] acquiring lock: {Name:mk5de4935501776b790bd29801e913c817cce9cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:20:33.926459    1104 cache.go:93] acquiring lock: {Name:mk5d79a216b121a22277fa476959e69d0268a006 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:20:33.926459    1104 cache.go:93] acquiring lock: {Name:mk6a939d4adc5b1a82c643cd3a34748a52c3e47d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:20:33.928431    1104 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 exists
	* I0310 19:20:33.928431    1104 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 exists
	* I0310 19:20:33.929389    1104 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 exists
	* I0310 19:20:33.929389    1104 cache.go:82] cache image "minikube-local-cache-test:functional-20210123004019-5372" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210123004019-5372" took 720.0527ms
	* I0310 19:20:33.929389    1104 cache.go:66] save to tar file minikube-local-cache-test:functional-20210123004019-5372 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 succeeded
	* I0310 19:20:33.932360    1104 cache.go:82] cache image "minikube-local-cache-test:functional-20210112045103-7160" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210112045103-7160" took 723.8466ms
	* I0310 19:20:33.932360    1104 cache.go:66] save to tar file minikube-local-cache-test:functional-20210112045103-7160 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 succeeded
	* I0310 19:20:33.933467    1104 cache.go:82] cache image "minikube-local-cache-test:functional-20210224014800-800" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210224014800-800" took 726.2546ms
	* I0310 19:20:33.933467    1104 cache.go:66] save to tar file minikube-local-cache-test:functional-20210224014800-800 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 succeeded
	* I0310 19:20:33.933467    1104 cache.go:93] acquiring lock: {Name:mk0c64ba734a0cdbeae55b08bb0b1b6723a680c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:20:33.934407    1104 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 exists
	* I0310 19:20:33.936414    1104 cache.go:82] cache image "minikube-local-cache-test:functional-20210310083645-5040" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210310083645-5040" took 723.7678ms
	* I0310 19:20:33.936414    1104 cache.go:66] save to tar file minikube-local-cache-test:functional-20210310083645-5040 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 succeeded
	* I0310 19:20:33.968168    1104 cache.go:93] acquiring lock: {Name:mkf74fc1bdd437dc31195924ffc024252ed6282c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:20:33.968385    1104 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 exists
	* I0310 19:20:33.971028    1104 cache.go:82] cache image "minikube-local-cache-test:functional-20210304002630-1156" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210304002630-1156" took 763.0724ms
	* I0310 19:20:33.971028    1104 cache.go:66] save to tar file minikube-local-cache-test:functional-20210304002630-1156 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 succeeded
	* I0310 19:20:33.980769    1104 cache.go:93] acquiring lock: {Name:mkcc9db267470950a8bd1fd66660e4d7ce7fb11a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:20:33.981531    1104 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 exists
	* I0310 19:20:33.985182    1104 cache.go:82] cache image "minikube-local-cache-test:functional-20210120175851-7432" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120175851-7432" took 776.6688ms
	* I0310 19:20:33.985182    1104 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120175851-7432 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 succeeded
	* I0310 19:20:34.002628    1104 cache.go:93] acquiring lock: {Name:mkc9a1c11079e53fedb3439203deb8305be63b2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:20:34.006537    1104 cache.go:93] acquiring lock: {Name:mk17b3617b8bc7c68f0fe3347037485ee44000e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:20:34.006943    1104 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 exists
	* I0310 19:20:34.007420    1104 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 exists
	* I0310 19:20:34.006537    1104 cache.go:93] acquiring lock: {Name:mk413751f23d1919a2f2162501025c6af3a2ad81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:20:34.009425    1104 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 exists
	* I0310 19:20:34.009425    1104 cache.go:82] cache image "minikube-local-cache-test:functional-20210303214129-4588" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210303214129-4588" took 799.2545ms
	* I0310 19:20:34.009425    1104 cache.go:66] save to tar file minikube-local-cache-test:functional-20210303214129-4588 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 succeeded
	* I0310 19:20:34.009425    1104 cache.go:82] cache image "minikube-local-cache-test:functional-20210225231842-5736" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210225231842-5736" took 788.1647ms
	* I0310 19:20:34.009425    1104 cache.go:66] save to tar file minikube-local-cache-test:functional-20210225231842-5736 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 succeeded
	* I0310 19:20:34.009425    1104 cache.go:82] cache image "minikube-local-cache-test:functional-20210106002159-6856" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106002159-6856" took 786.6329ms
	* I0310 19:20:34.009425    1104 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106002159-6856 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 succeeded
	* I0310 19:20:34.084256    1104 cache.go:93] acquiring lock: {Name:mkd8c6f272dd5cb91af2d272705820baa75c5410 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:20:34.084598    1104 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 exists
	* I0310 19:20:34.085009    1104 cache.go:82] cache image "minikube-local-cache-test:functional-20210120214442-10992" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120214442-10992" took 876.4955ms
	* I0310 19:20:34.085009    1104 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120214442-10992 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 succeeded
	* I0310 19:20:34.092367    1104 cache.go:93] acquiring lock: {Name:mk634154e9c95d6e5b156154f097cbabdedf9f3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:20:34.093235    1104 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 exists
	* I0310 19:20:34.093844    1104 cache.go:82] cache image "minikube-local-cache-test:functional-20210301195830-5700" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210301195830-5700" took 880.4047ms
	* I0310 19:20:34.093844    1104 cache.go:66] save to tar file minikube-local-cache-test:functional-20210301195830-5700 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 succeeded
	* I0310 19:20:34.115154    1104 cache.go:93] acquiring lock: {Name:mk67b81c694fa10d152b7bddece57d430edf9ebf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:20:34.115738    1104 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 exists
	* I0310 19:20:34.115738    1104 cache.go:82] cache image "minikube-local-cache-test:functional-20210308233820-5396" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210308233820-5396" took 892.9463ms
	* I0310 19:20:34.115738    1104 cache.go:66] save to tar file minikube-local-cache-test:functional-20210308233820-5396 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 succeeded
	* I0310 19:20:34.129069    1104 cache.go:93] acquiring lock: {Name:mk6e311fb193a5d30b249afa7255673dd7fc56b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:20:34.129588    1104 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 exists
	* I0310 19:20:34.130033    1104 cache.go:93] acquiring lock: {Name:mka2d29141752ca0c15ce625b99d3e259a454634 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:20:34.130179    1104 cache.go:82] cache image "minikube-local-cache-test:functional-20210107002220-9088" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210107002220-9088" took 923.3465ms
	* I0310 19:20:34.130179    1104 cache.go:66] save to tar file minikube-local-cache-test:functional-20210107002220-9088 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 succeeded
	* I0310 19:20:34.130179    1104 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 exists
	* I0310 19:20:34.130179    1104 cache.go:82] cache image "minikube-local-cache-test:functional-20210105233232-2512" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210105233232-2512" took 921.6663ms
	* I0310 19:20:34.130724    1104 cache.go:66] save to tar file minikube-local-cache-test:functional-20210105233232-2512 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 succeeded
	* I0310 19:20:34.132073    1104 cache.go:93] acquiring lock: {Name:mk3f9eb5a6922e3da2b5e642fe1460b5c7a33453 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:20:34.132470    1104 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 exists
	* I0310 19:20:34.133103    1104 cache.go:82] cache image "minikube-local-cache-test:functional-20210107190945-8748" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210107190945-8748" took 910.0794ms
	* I0310 19:20:34.133103    1104 cache.go:66] save to tar file minikube-local-cache-test:functional-20210107190945-8748 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 succeeded
	* I0310 19:20:34.143530    1104 cache.go:93] acquiring lock: {Name:mk9829358ec5b615719a34ef2b4c8c5314131bbf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:20:34.144187    1104 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 exists
	* I0310 19:20:34.144838    1104 cache.go:82] cache image "minikube-local-cache-test:functional-20210309234032-4944" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210309234032-4944" took 918.5085ms
	* I0310 19:20:34.145544    1104 cache.go:66] save to tar file minikube-local-cache-test:functional-20210309234032-4944 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 succeeded
	* I0310 19:20:34.152826    1104 cache.go:93] acquiring lock: {Name:mkb0cb73f942a657cd3f168830d30cb3598567a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:20:34.153282    1104 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 exists
	* I0310 19:20:34.153700    1104 cache.go:82] cache image "minikube-local-cache-test:functional-20210306072141-12056" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210306072141-12056" took 938.3364ms
	* I0310 19:20:34.153826    1104 cache.go:66] save to tar file minikube-local-cache-test:functional-20210306072141-12056 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 succeeded
	* I0310 19:20:34.166603    1104 cache.go:93] acquiring lock: {Name:mkb552f0ca2d9ea9965feba56885295e4020632a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:20:34.167414    1104 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 exists
	* I0310 19:20:34.167711    1104 cache.go:82] cache image "minikube-local-cache-test:functional-20210106011107-6492" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106011107-6492" took 935.8106ms
	* I0310 19:20:34.167711    1104 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106011107-6492 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 succeeded
	* I0310 19:20:34.184448    1104 cache.go:93] acquiring lock: {Name:mk3b31b5d9c66e58bae5a84d594af5a71c06fef6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:20:34.184921    1104 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 exists
	* I0310 19:20:34.185153    1104 cache.go:82] cache image "minikube-local-cache-test:functional-20210114204234-6692" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210114204234-6692" took 961.6512ms
	* I0310 19:20:34.185634    1104 cache.go:66] save to tar file minikube-local-cache-test:functional-20210114204234-6692 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 succeeded
	* I0310 19:20:34.189880    1104 cache.go:93] acquiring lock: {Name:mkf96894dc732adcd1c856f98a56d65b2646f03e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:20:34.191793    1104 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 exists
	* I0310 19:20:34.191793    1104 cache.go:93] acquiring lock: {Name:mkd8dd26dee4471c50a16459e3e56a843fbe7183 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:20:34.191793    1104 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 exists
	* I0310 19:20:34.191793    1104 cache.go:82] cache image "minikube-local-cache-test:functional-20210115191024-3516" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210115191024-3516" took 968.7698ms
	* I0310 19:20:34.192509    1104 cache.go:66] save to tar file minikube-local-cache-test:functional-20210115191024-3516 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 succeeded
	* I0310 19:20:34.192509    1104 cache.go:82] cache image "minikube-local-cache-test:functional-20210120231122-7024" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120231122-7024" took 966.1799ms
	* I0310 19:20:34.192509    1104 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120231122-7024 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 succeeded
	* I0310 19:20:34.199599    1104 cache.go:93] acquiring lock: {Name:mk5aaf725ee95074b60d5acdb56999da11d0d967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:20:34.200303    1104 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 exists
	* I0310 19:20:34.200635    1104 cache.go:93] acquiring lock: {Name:mk1b277a131d0149dc1f34c6a5df09591c284c3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:20:34.200991    1104 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 exists
	* I0310 19:20:34.201235    1104 cache.go:82] cache image "minikube-local-cache-test:functional-20210213143925-7440" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210213143925-7440" took 965.539ms
	* I0310 19:20:34.201235    1104 cache.go:66] save to tar file minikube-local-cache-test:functional-20210213143925-7440 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 succeeded
	* I0310 19:20:34.201235    1104 cache.go:82] cache image "minikube-local-cache-test:functional-20210128021318-232" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210128021318-232" took 972.896ms
	* I0310 19:20:34.201235    1104 cache.go:66] save to tar file minikube-local-cache-test:functional-20210128021318-232 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 succeeded
	* I0310 19:20:34.216592    1104 cache.go:93] acquiring lock: {Name:mk5795abf13cc8b7192a417aee0e32dee2b0467c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:20:34.216592    1104 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 exists
	* I0310 19:20:34.216592    1104 cache.go:93] acquiring lock: {Name:mkf6f90f079186654799fde8101b48612aa6f339 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:20:34.217124    1104 cache.go:82] cache image "minikube-local-cache-test:functional-20210126212539-5172" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210126212539-5172" took 979.0532ms
	* I0310 19:20:34.217124    1104 cache.go:66] save to tar file minikube-local-cache-test:functional-20210126212539-5172 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 succeeded
	* I0310 19:20:34.217739    1104 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 exists
	* I0310 19:20:34.217739    1104 cache.go:82] cache image "minikube-local-cache-test:functional-20210212145109-352" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210212145109-352" took 978.2103ms
	* I0310 19:20:34.217739    1104 cache.go:66] save to tar file minikube-local-cache-test:functional-20210212145109-352 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 succeeded
	* I0310 19:20:34.224618    1104 cache.go:93] acquiring lock: {Name:mk84b2a6095b735cf889c519b5874f080b2e195a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:20:34.224618    1104 cache.go:93] acquiring lock: {Name:mk74beba772a17b6c0792b37e1f3c84b8ae19a48 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:20:34.225352    1104 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 exists
	* I0310 19:20:34.225654    1104 cache.go:82] cache image "minikube-local-cache-test:functional-20210119220838-6552" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210119220838-6552" took 987.0142ms
	* I0310 19:20:34.226116    1104 cache.go:66] save to tar file minikube-local-cache-test:functional-20210119220838-6552 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 succeeded
	* I0310 19:20:34.225786    1104 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 exists
	* I0310 19:20:34.226590    1104 cache.go:82] cache image "minikube-local-cache-test:functional-20210219220622-3920" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210219220622-3920" took 983.5483ms
	* I0310 19:20:34.226590    1104 cache.go:66] save to tar file minikube-local-cache-test:functional-20210219220622-3920 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 succeeded
	* I0310 19:20:34.228582    1104 cache.go:93] acquiring lock: {Name:mk6cdb668632330066d74bea74662e26e6c7633f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:20:34.229974    1104 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 exists
	* I0310 19:20:34.229974    1104 cache.go:82] cache image "minikube-local-cache-test:functional-20210106215525-1984" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106215525-1984" took 995.9658ms
	* I0310 19:20:34.230216    1104 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106215525-1984 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 succeeded
	* I0310 19:20:34.232723    1104 cache.go:93] acquiring lock: {Name:mkab31196e3bf71b9c1e6a1e38e57ec6fb030bbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:20:34.233690    1104 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 exists
	* I0310 19:20:34.234397    1104 cache.go:82] cache image "minikube-local-cache-test:functional-20210220004129-7452" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210220004129-7452" took 990.9092ms
	* I0310 19:20:34.234397    1104 cache.go:66] save to tar file minikube-local-cache-test:functional-20210220004129-7452 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 succeeded
	* I0310 19:20:34.235077    1104 cache.go:73] Successfully saved all images to host disk.
	* I0310 19:20:34.274659    1104 cli_runner.go:115] Run: docker container inspect functional-20210310191609-6496 --format=
	* I0310 19:20:34.654173    1104 cli_runner.go:168] Completed: docker container inspect functional-20210310191609-6496 --format=: (1.3635493s)
	* I0310 19:20:34.658123    1104 kapi.go:59] client config for functional-20210310191609-6496: &rest.Config{Host:"https://127.0.0.1:55006", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins\\.minikube\\profiles\\functional-20210310191609-6496\\client.crt", KeyFile:"C:\\Users\\jenkins\\.minikube\\profiles\\functional-20210310191609-6496\\client.key", CAFile:"C:\\Users\\jenkins\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", Disa
bleCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e81020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)}
	* I0310 19:20:34.699559    1104 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20210310191609-6496: (1.3550889s)
	* I0310 19:20:34.699821    1104 pod_ready.go:36] extra waiting for kube-system core pods [kube-dns etcd kube-apiserver kube-controller-manager kube-proxy kube-scheduler] to be Ready ...
	* I0310 19:20:34.699821    1104 pod_ready.go:59] waiting 6m0s for pod with "kube-dns" label in "kube-system" namespace to be Ready ...
	* I0310 19:20:34.716270    1104 cli_runner.go:168] Completed: docker container inspect functional-20210310191609-6496 --format=: (1.4217345s)
	* I0310 19:20:34.719624    1104 out.go:129]   - Using image gcr.io/k8s-minikube/storage-provisioner:v4
	* I0310 19:20:34.720600    1104 addons.go:253] installing /etc/kubernetes/addons/storage-provisioner.yaml
	* I0310 19:20:34.720600    1104 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	* I0310 19:20:34.733515    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	* I0310 19:20:34.875578    1104 pod_ready.go:97] pod "coredns-74ff55c5b-62r9g" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 19:18:52 +0000 GMT Reason: Message:}
	* I0310 19:20:34.875578    1104 pod_ready.go:62] duration metric: took 175.7577ms to run WaitForPodReadyByLabel for pod with "kube-dns" label in "kube-system" namespace ...
	* I0310 19:20:34.875578    1104 pod_ready.go:59] waiting 6m0s for pod with "etcd" label in "kube-system" namespace to be Ready ...
	* I0310 19:20:34.886356    1104 addons.go:134] Setting addon default-storageclass=true in "functional-20210310191609-6496"
	* W0310 19:20:34.886839    1104 addons.go:143] addon default-storageclass should already be in state true
	* I0310 19:20:34.886839    1104 host.go:66] Checking if "functional-20210310191609-6496" exists ...
	* I0310 19:20:34.912011    1104 cli_runner.go:115] Run: docker container inspect functional-20210310191609-6496 --format=
	* I0310 19:20:34.915007    1104 pod_ready.go:97] pod "etcd-functional-20210310191609-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 19:19:36 +0000 GMT Reason: Message:}
	* I0310 19:20:34.915007    1104 pod_ready.go:62] duration metric: took 39.429ms to run WaitForPodReadyByLabel for pod with "etcd" label in "kube-system" namespace ...
	* I0310 19:20:34.915007    1104 pod_ready.go:59] waiting 6m0s for pod with "kube-apiserver" label in "kube-system" namespace to be Ready ...
	* I0310 19:20:34.963756    1104 ssh_runner.go:149] Run: docker images --format :
	* I0310 19:20:34.966773    1104 pod_ready.go:97] pod "kube-apiserver-functional-20210310191609-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 19:18:29 +0000 GMT Reason: Message:}
	* I0310 19:20:34.966773    1104 pod_ready.go:62] duration metric: took 51.766ms to run WaitForPodReadyByLabel for pod with "kube-apiserver" label in "kube-system" namespace ...
	* I0310 19:20:34.966773    1104 pod_ready.go:59] waiting 6m0s for pod with "kube-controller-manager" label in "kube-system" namespace to be Ready ...
	* I0310 19:20:34.976756    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	* I0310 19:20:34.993003    1104 pod_ready.go:97] pod "kube-controller-manager-functional-20210310191609-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 19:19:47 +0000 GMT Reason: Message:}
	* I0310 19:20:34.993003    1104 pod_ready.go:62] duration metric: took 26.2296ms to run WaitForPodReadyByLabel for pod with "kube-controller-manager" label in "kube-system" namespace ...
	* I0310 19:20:34.993003    1104 pod_ready.go:59] waiting 6m0s for pod with "kube-proxy" label in "kube-system" namespace to be Ready ...
	* I0310 19:20:35.040834    1104 pod_ready.go:97] pod "kube-proxy-l9bb9" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 19:18:39 +0000 GMT Reason: Message:}
	* I0310 19:20:35.040834    1104 pod_ready.go:62] duration metric: took 47.8313ms to run WaitForPodReadyByLabel for pod with "kube-proxy" label in "kube-system" namespace ...
	* I0310 19:20:35.040834    1104 pod_ready.go:59] waiting 6m0s for pod with "kube-scheduler" label in "kube-system" namespace to be Ready ...
	* I0310 19:20:35.061163    1104 pod_ready.go:97] pod "kube-scheduler-functional-20210310191609-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 19:19:27 +0000 GMT Reason: Message:}
	* I0310 19:20:35.061163    1104 pod_ready.go:62] duration metric: took 20.3296ms to run WaitForPodReadyByLabel for pod with "kube-scheduler" label in "kube-system" namespace ...
	* I0310 19:20:35.061309    1104 pod_ready.go:39] duration metric: took 361.4889ms for extra waiting for kube-system core pods to be Ready ...
	* I0310 19:20:35.061309    1104 api_server.go:48] waiting for apiserver process to appear ...
	* I0310 19:20:35.076668    1104 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 19:20:35.204930    1104 command_runner.go:124] > 2613
	* I0310 19:20:35.205102    1104 api_server.go:68] duration metric: took 2.0043634s to wait for apiserver process to appear ...
	* I0310 19:20:35.205102    1104 api_server.go:84] waiting for apiserver healthz status ...
	* I0310 19:20:35.205102    1104 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55006/healthz ...
	* I0310 19:20:35.261616    1104 api_server.go:241] https://127.0.0.1:55006/healthz returned 200:
	* ok
	* I0310 19:20:35.269524    1104 api_server.go:137] control plane version: v1.20.2
	* I0310 19:20:35.269524    1104 api_server.go:127] duration metric: took 64.4221ms to wait for apiserver health ...
	* I0310 19:20:35.270589    1104 system_pods.go:41] waiting for kube-system pods to appear ...
	* I0310 19:20:35.313210    1104 system_pods.go:57] 7 kube-system pods found
	* I0310 19:20:35.313210    1104 system_pods.go:59] "coredns-74ff55c5b-62r9g" [022268ac-67b5-4170-a85a-465abd0c06b3] Running
	* I0310 19:20:35.313210    1104 system_pods.go:59] "etcd-functional-20210310191609-6496" [8cd656a0-0ce1-4093-8b84-f741cdac3d9c] Running
	* I0310 19:20:35.313210    1104 system_pods.go:59] "kube-apiserver-functional-20210310191609-6496" [e016d4fc-2c83-4884-880e-6dc489c823b2] Running
	* I0310 19:20:35.313210    1104 system_pods.go:59] "kube-controller-manager-functional-20210310191609-6496" [c6b12efd-d118-4e47-bf42-80bdaf433269] Running
	* I0310 19:20:35.313210    1104 system_pods.go:59] "kube-proxy-l9bb9" [8158730c-c5c3-4b01-93d9-ebc43ef2189c] Running
	* I0310 19:20:35.313210    1104 system_pods.go:59] "kube-scheduler-functional-20210310191609-6496" [6f7854fa-e162-4728-a3be-6ad9fd54e756] Running
	* I0310 19:20:35.313210    1104 system_pods.go:59] "storage-provisioner" [b3c89307-430b-4b9e-bf19-ea94207564fe] Running
	* I0310 19:20:35.313210    1104 system_pods.go:72] duration metric: took 42.6212ms to wait for pod list to return data ...
	* I0310 19:20:35.313210    1104 default_sa.go:33] waiting for default service account to be created ...
	* I0310 19:20:35.334374    1104 default_sa.go:44] found service account: "default"
	* I0310 19:20:35.334374    1104 default_sa.go:54] duration metric: took 21.1637ms for default service account to be created ...
	* I0310 19:20:35.334374    1104 system_pods.go:114] waiting for k8s-apps to be running ...
	* I0310 19:20:35.380311    1104 system_pods.go:84] 7 kube-system pods found
	* I0310 19:20:35.380848    1104 system_pods.go:87] "coredns-74ff55c5b-62r9g" [022268ac-67b5-4170-a85a-465abd0c06b3] Running
	* I0310 19:20:35.380848    1104 system_pods.go:87] "etcd-functional-20210310191609-6496" [8cd656a0-0ce1-4093-8b84-f741cdac3d9c] Running
	* I0310 19:20:35.380848    1104 system_pods.go:87] "kube-apiserver-functional-20210310191609-6496" [e016d4fc-2c83-4884-880e-6dc489c823b2] Running
	* I0310 19:20:35.380848    1104 system_pods.go:87] "kube-controller-manager-functional-20210310191609-6496" [c6b12efd-d118-4e47-bf42-80bdaf433269] Running
	* I0310 19:20:35.380848    1104 system_pods.go:87] "kube-proxy-l9bb9" [8158730c-c5c3-4b01-93d9-ebc43ef2189c] Running
	* I0310 19:20:35.380848    1104 system_pods.go:87] "kube-scheduler-functional-20210310191609-6496" [6f7854fa-e162-4728-a3be-6ad9fd54e756] Running
	* I0310 19:20:35.380848    1104 system_pods.go:87] "storage-provisioner" [b3c89307-430b-4b9e-bf19-ea94207564fe] Running
	* I0310 19:20:35.380848    1104 system_pods.go:124] duration metric: took 46.4741ms to wait for k8s-apps to be running ...
	* I0310 19:20:35.380848    1104 system_svc.go:44] waiting for kubelet service to be running ....
	* I0310 19:20:35.380848    1104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55009 SSHKeyPath:C:\Users\jenkins\.minikube\machines\functional-20210310191609-6496\id_rsa Username:docker}
	* I0310 19:20:35.398760    1104 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	* I0310 19:20:35.462933    1104 system_svc.go:56] duration metric: took 82.0859ms WaitForService to wait for kubelet.
	* I0310 19:20:35.462933    1104 node_ready.go:35] waiting 6m0s for node status to be ready ...
	* I0310 19:20:35.487573    1104 node_ready.go:38] duration metric: took 24.6395ms to wait for WaitForNodeReady...
	* I0310 19:20:35.492703    1104 kubeadm.go:541] duration metric: took 2.2919649s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	* I0310 19:20:35.493031    1104 node_conditions.go:101] verifying NodePressure condition ...
	* I0310 19:20:35.526833    1104 node_conditions.go:121] node storage ephemeral capacity is 65792556Ki
	* I0310 19:20:35.526833    1104 node_conditions.go:122] node cpu capacity is 4
	* I0310 19:20:35.527023    1104 node_conditions.go:104] duration metric: took 33.8021ms to run NodePressure ...
	* I0310 19:20:35.527023    1104 start.go:208] waiting for startup goroutines ...
	* I0310 19:20:35.586041    1104 addons.go:253] installing /etc/kubernetes/addons/storageclass.yaml
	* I0310 19:20:35.586041    1104 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	* I0310 19:20:35.597239    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	* I0310 19:20:35.609092    1104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55009 SSHKeyPath:C:\Users\jenkins\.minikube\machines\functional-20210310191609-6496\id_rsa Username:docker}
	* I0310 19:20:35.703679    1104 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	* I0310 19:20:36.159336    1104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55009 SSHKeyPath:C:\Users\jenkins\.minikube\machines\functional-20210310191609-6496\id_rsa Username:docker}
	* I0310 19:20:36.457041    1104 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	* I0310 19:20:36.780641    1104 command_runner.go:124] > serviceaccount/storage-provisioner unchanged
	* I0310 19:20:36.780641    1104 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	* I0310 19:20:36.780641    1104 command_runner.go:124] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	* I0310 19:20:36.780641    1104 command_runner.go:124] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	* I0310 19:20:36.780641    1104 command_runner.go:124] > endpoints/k8s.io-minikube-hostpath unchanged
	* I0310 19:20:36.780641    1104 command_runner.go:124] > pod/storage-provisioner configured
	* I0310 19:20:36.781230    1104 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.0769274s)
	* I0310 19:20:36.781230    1104 command_runner.go:124] > k8s.gcr.io/kube-proxy:v1.20.2
	* I0310 19:20:36.781230    1104 command_runner.go:124] > k8s.gcr.io/kube-controller-manager:v1.20.2
	* I0310 19:20:36.781230    1104 command_runner.go:124] > k8s.gcr.io/kube-apiserver:v1.20.2
	* I0310 19:20:36.781230    1104 command_runner.go:124] > k8s.gcr.io/kube-scheduler:v1.20.2
	* I0310 19:20:36.781230    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210105233232-2512
	* I0310 19:20:36.781230    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210106002159-6856
	* I0310 19:20:36.781230    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210106011107-6492
	* I0310 19:20:36.781230    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210106215525-1984
	* I0310 19:20:36.781230    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210107002220-9088
	* I0310 19:20:36.781680    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210107190945-8748
	* I0310 19:20:36.781680    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210112045103-7160
	* I0310 19:20:36.781680    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210114204234-6692
	* I0310 19:20:36.781680    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210115023213-8464
	* I0310 19:20:36.781680    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210115191024-3516
	* I0310 19:20:36.781680    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210119220838-6552
	* I0310 19:20:36.781680    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210120022529-1140
	* I0310 19:20:36.781680    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210120175851-7432
	* I0310 19:20:36.782091    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210120214442-10992
	* I0310 19:20:36.782091    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210120231122-7024
	* I0310 19:20:36.782091    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210123004019-5372
	* I0310 19:20:36.782091    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210126212539-5172
	* I0310 19:20:36.782091    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210128021318-232
	* I0310 19:20:36.782091    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210212145109-352
	* I0310 19:20:36.782091    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210213143925-7440
	* I0310 19:20:36.782091    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210219145454-9520
	* I0310 19:20:36.782091    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210219220622-3920
	* I0310 19:20:36.782091    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210220004129-7452
	* I0310 19:20:36.782091    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210224014800-800
	* I0310 19:20:36.782091    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210225231842-5736
	* I0310 19:20:36.782091    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210301195830-5700
	* I0310 19:20:36.782091    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210303214129-4588
	* I0310 19:20:36.782091    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210304002630-1156
	* I0310 19:20:36.782091    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210304184021-4052
	* I0310 19:20:36.782091    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210306072141-12056
	* I0310 19:20:36.782091    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210308233820-5396
	* I0310 19:20:36.782091    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210309234032-4944
	* I0310 19:20:36.782091    1104 command_runner.go:124] > minikube-local-cache-test:functional-20210310083645-5040
	* I0310 19:20:36.782091    1104 command_runner.go:124] > kubernetesui/dashboard:v2.1.0
	* I0310 19:20:36.782091    1104 command_runner.go:124] > gcr.io/k8s-minikube/storage-provisioner:v4
	* I0310 19:20:36.782091    1104 command_runner.go:124] > k8s.gcr.io/etcd:3.4.13-0
	* I0310 19:20:36.782091    1104 command_runner.go:124] > k8s.gcr.io/coredns:1.7.0
	* I0310 19:20:36.782091    1104 command_runner.go:124] > kubernetesui/metrics-scraper:v1.0.4
	* I0310 19:20:36.782091    1104 command_runner.go:124] > k8s.gcr.io/pause:3.2
	* I0310 19:20:36.782473    1104 ssh_runner.go:189] Completed: docker images --format :: (1.8187207s)
	* I0310 19:20:36.782473    1104 docker.go:423] Got preloaded images: -- stdout --
	* k8s.gcr.io/kube-proxy:v1.20.2
	* k8s.gcr.io/kube-controller-manager:v1.20.2
	* k8s.gcr.io/kube-apiserver:v1.20.2
	* k8s.gcr.io/kube-scheduler:v1.20.2
	* minikube-local-cache-test:functional-20210105233232-2512
	* minikube-local-cache-test:functional-20210106002159-6856
	* minikube-local-cache-test:functional-20210106011107-6492
	* minikube-local-cache-test:functional-20210106215525-1984
	* minikube-local-cache-test:functional-20210107002220-9088
	* minikube-local-cache-test:functional-20210107190945-8748
	* minikube-local-cache-test:functional-20210112045103-7160
	* minikube-local-cache-test:functional-20210114204234-6692
	* minikube-local-cache-test:functional-20210115023213-8464
	* minikube-local-cache-test:functional-20210115191024-3516
	* minikube-local-cache-test:functional-20210119220838-6552
	* minikube-local-cache-test:functional-20210120022529-1140
	* minikube-local-cache-test:functional-20210120175851-7432
	* minikube-local-cache-test:functional-20210120214442-10992
	* minikube-local-cache-test:functional-20210120231122-7024
	* minikube-local-cache-test:functional-20210123004019-5372
	* minikube-local-cache-test:functional-20210126212539-5172
	* minikube-local-cache-test:functional-20210128021318-232
	* minikube-local-cache-test:functional-20210212145109-352
	* minikube-local-cache-test:functional-20210213143925-7440
	* minikube-local-cache-test:functional-20210219145454-9520
	* minikube-local-cache-test:functional-20210219220622-3920
	* minikube-local-cache-test:functional-20210220004129-7452
	* minikube-local-cache-test:functional-20210224014800-800
	* minikube-local-cache-test:functional-20210225231842-5736
	* minikube-local-cache-test:functional-20210301195830-5700
	* minikube-local-cache-test:functional-20210303214129-4588
	* minikube-local-cache-test:functional-20210304002630-1156
	* minikube-local-cache-test:functional-20210304184021-4052
	* minikube-local-cache-test:functional-20210306072141-12056
	* minikube-local-cache-test:functional-20210308233820-5396
	* minikube-local-cache-test:functional-20210309234032-4944
	* minikube-local-cache-test:functional-20210310083645-5040
	* kubernetesui/dashboard:v2.1.0
	* gcr.io/k8s-minikube/storage-provisioner:v4
	* k8s.gcr.io/etcd:3.4.13-0
	* k8s.gcr.io/coredns:1.7.0
	* kubernetesui/metrics-scraper:v1.0.4
	* k8s.gcr.io/pause:3.2
	* 
	* -- /stdout --
	* I0310 19:20:36.782473    1104 cache_images.go:73] Images are preloaded, skipping loading
	* I0310 19:20:36.782473    1104 cache_images.go:223] succeeded pushing to: functional-20210310191609-6496
	* I0310 19:20:36.782473    1104 cache_images.go:224] failed pushing to: 
	* I0310 19:20:37.034994    1104 command_runner.go:124] > storageclass.storage.k8s.io/standard unchanged
	* I0310 19:20:37.039979    1104 out.go:129] * Enabled addons: storage-provisioner, default-storageclass
	* I0310 19:20:37.040289    1104 addons.go:383] enableAddons completed in 3.837143s
	* I0310 19:20:37.226172    1104 start.go:460] kubectl: 1.19.3, cluster: 1.20.2 (minor skew: 1)
	* I0310 19:20:37.231266    1104 out.go:129] * Done! kubectl is now configured to use "functional-20210310191609-6496" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	E0310 19:21:28.646469    4596 out.go:340] unable to execute * 2021-03-10 19:18:33.472619 W | etcdserver: request "header:<ID:10490704450423578322 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy-l9bb9.166b114ce3ebff9c\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy-l9bb9.166b114ce3ebff9c\" value_size:678 lease:1267332413568802112 >> failure:<>>" with result "size:16" took too long (104.07ms) to execute
	: html/template:* 2021-03-10 19:18:33.472619 W | etcdserver: request "header:<ID:10490704450423578322 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy-l9bb9.166b114ce3ebff9c\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy-l9bb9.166b114ce3ebff9c\" value_size:678 lease:1267332413568802112 >> failure:<>>" with result "size:16" took too long (104.07ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 19:21:28.665834    4596 out.go:340] unable to execute * 2021-03-10 19:18:34.357536 W | etcdserver: request "header:<ID:10490704450423578352 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-74ff55c5b\" mod_revision:425 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-74ff55c5b\" value_size:3642 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-74ff55c5b\" > >>" with result "size:16" took too long (109.3333ms) to execute
	: html/template:* 2021-03-10 19:18:34.357536 W | etcdserver: request "header:<ID:10490704450423578352 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-74ff55c5b\" mod_revision:425 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-74ff55c5b\" value_size:3642 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-74ff55c5b\" > >>" with result "size:16" took too long (109.3333ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 19:21:30.051231    4596 out.go:335] unable to parse "* I0310 19:20:05.504226    1104 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 19:20:05.504226    1104 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 19:21:30.082729    4596 out.go:335] unable to parse "* I0310 19:20:06.361847    1104 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 19:20:06.361847    1104 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 19:21:30.168712    4596 out.go:340] unable to execute * I0310 19:20:10.078803    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	: template: * I0310 19:20:10.078803    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	:1:96: executing "* I0310 19:20:10.078803    1104 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" functional-20210310191609-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 19:21:30.178709    4596 out.go:335] unable to parse "* I0310 19:20:10.606885    1104 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0xd0c4a0] 0xd0c460 <nil>  [] 0s} 127.0.0.1 55009 <nil> <nil>}\n": template: * I0310 19:20:10.606885    1104 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0xd0c4a0] 0xd0c460 <nil>  [] 0s} 127.0.0.1 55009 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 19:21:30.198628    4596 out.go:340] unable to execute * I0310 19:20:10.913181    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	: template: * I0310 19:20:10.913181    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	:1:96: executing "* I0310 19:20:10.913181    1104 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" functional-20210310191609-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 19:21:30.209855    4596 out.go:335] unable to parse "* I0310 19:20:11.408232    1104 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0xd0c4a0] 0xd0c460 <nil>  [] 0s} 127.0.0.1 55009 <nil> <nil>}\n": template: * I0310 19:20:11.408232    1104 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0xd0c4a0] 0xd0c460 <nil>  [] 0s} 127.0.0.1 55009 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 19:21:30.331509    4596 out.go:340] unable to execute * I0310 19:20:12.606842    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	: template: * I0310 19:20:12.606842    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	:1:96: executing "* I0310 19:20:12.606842    1104 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" functional-20210310191609-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 19:21:30.374500    4596 out.go:340] unable to execute * I0310 19:20:13.473873    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	: template: * I0310 19:20:13.473873    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	:1:96: executing "* I0310 19:20:13.473873    1104 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" functional-20210310191609-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 19:21:30.384584    4596 out.go:335] unable to parse "* I0310 19:20:13.985934    1104 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0xd0c4a0] 0xd0c460 <nil>  [] 0s} 127.0.0.1 55009 <nil> <nil>}\n": template: * I0310 19:20:13.985934    1104 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0xd0c4a0] 0xd0c460 <nil>  [] 0s} 127.0.0.1 55009 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 19:21:30.411259    4596 out.go:340] unable to execute * I0310 19:20:14.236745    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	: template: * I0310 19:20:14.236745    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	:1:96: executing "* I0310 19:20:14.236745    1104 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" functional-20210310191609-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 19:21:30.421273    4596 out.go:335] unable to parse "* I0310 19:20:14.751198    1104 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0xd0c4a0] 0xd0c460 <nil>  [] 0s} 127.0.0.1 55009 <nil> <nil>}\n": template: * I0310 19:20:14.751198    1104 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0xd0c4a0] 0xd0c460 <nil>  [] 0s} 127.0.0.1 55009 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 19:21:30.789836    4596 out.go:340] unable to execute * I0310 19:20:15.034198    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	: template: * I0310 19:20:15.034198    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	:1:96: executing "* I0310 19:20:15.034198    1104 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" functional-20210310191609-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 19:21:30.798839    4596 out.go:335] unable to parse "* I0310 19:20:15.546112    1104 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0xd0c4a0] 0xd0c460 <nil>  [] 0s} 127.0.0.1 55009 <nil> <nil>}\n": template: * I0310 19:20:15.546112    1104 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0xd0c4a0] 0xd0c460 <nil>  [] 0s} 127.0.0.1 55009 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 19:21:30.826835    4596 out.go:340] unable to execute * I0310 19:20:15.812165    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	: template: * I0310 19:20:15.812165    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	:1:96: executing "* I0310 19:20:15.812165    1104 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" functional-20210310191609-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 19:21:30.941263    4596 out.go:340] unable to execute * I0310 19:20:16.772428    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	: template: * I0310 19:20:16.772428    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	:1:96: executing "* I0310 19:20:16.772428    1104 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" functional-20210310191609-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 19:21:30.969969    4596 out.go:340] unable to execute * I0310 19:20:17.964152    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	: template: * I0310 19:20:17.964152    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	:1:96: executing "* I0310 19:20:17.964152    1104 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" functional-20210310191609-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 19:21:30.979941    4596 out.go:340] unable to execute * I0310 19:20:17.967157    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	: template: * I0310 19:20:17.967157    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	:1:96: executing "* I0310 19:20:17.967157    1104 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" functional-20210310191609-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 19:21:31.358105    4596 out.go:340] unable to execute * I0310 19:20:20.432844    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20210310191609-6496
	: template: * I0310 19:20:20.432844    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20210310191609-6496
	:1:96: executing "* I0310 19:20:20.432844    1104 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"8441/tcp\") 0).HostPort}}'\" functional-20210310191609-6496\n" at <index .NetworkSettings.Ports "8441/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 19:21:33.963994    4596 out.go:340] unable to execute * I0310 19:20:32.285052    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20210310191609-6496
	: template: * I0310 19:20:32.285052    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20210310191609-6496
	:1:96: executing "* I0310 19:20:32.285052    1104 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"8441/tcp\") 0).HostPort}}'\" functional-20210310191609-6496\n" at <index .NetworkSettings.Ports "8441/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 19:21:34.260598    4596 out.go:340] unable to execute * I0310 19:20:33.344195    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20210310191609-6496
	: template: * I0310 19:20:33.344195    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20210310191609-6496
	:1:96: executing "* I0310 19:20:33.344195    1104 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"8441/tcp\") 0).HostPort}}'\" functional-20210310191609-6496\n" at <index .NetworkSettings.Ports "8441/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 19:21:34.756368    4596 out.go:340] unable to execute * I0310 19:20:34.699559    1104 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20210310191609-6496: (1.3550889s)
	: template: * I0310 19:20:34.699559    1104 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20210310191609-6496: (1.3550889s)
	:1:102: executing "* I0310 19:20:34.699559    1104 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"8441/tcp\") 0).HostPort}}'\" functional-20210310191609-6496: (1.3550889s)\n" at <index .NetworkSettings.Ports "8441/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 19:21:34.781144    4596 out.go:340] unable to execute * I0310 19:20:34.733515    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	: template: * I0310 19:20:34.733515    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	:1:96: executing "* I0310 19:20:34.733515    1104 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" functional-20210310191609-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 19:21:34.855630    4596 out.go:340] unable to execute * I0310 19:20:34.976756    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	: template: * I0310 19:20:34.976756    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	:1:96: executing "* I0310 19:20:34.976756    1104 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" functional-20210310191609-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 19:21:35.075764    4596 out.go:340] unable to execute * I0310 19:20:35.597239    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	: template: * I0310 19:20:35.597239    1104 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210310191609-6496
	:1:96: executing "* I0310 19:20:35.597239    1104 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" functional-20210310191609-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.

** /stderr **
helpers_test.go:250: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-20210310191609-6496 -n functional-20210310191609-6496
helpers_test.go:250: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-20210310191609-6496 -n functional-20210310191609-6496: (2.9890138s)
helpers_test.go:257: (dbg) Run:  kubectl --context functional-20210310191609-6496 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:263: non-running pods: 
helpers_test.go:265: ======> post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: describe non-running pods <======
helpers_test.go:268: (dbg) Run:  kubectl --context functional-20210310191609-6496 describe pod 
helpers_test.go:268: (dbg) Non-zero exit: kubectl --context functional-20210310191609-6496 describe pod : exit status 1 (174.0526ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:270: kubectl --context functional-20210310191609-6496 describe pod : exit status 1
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (19.70s)

TestFunctional/parallel/DockerEnv (42.85s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:227: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20210310191609-6496 docker-env | Invoke-Expression ;out/minikube-windows-amd64.exe status -p functional-20210310191609-6496"

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:227: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20210310191609-6496 docker-env | Invoke-Expression ;out/minikube-windows-amd64.exe status -p functional-20210310191609-6496": (12.5003872s)
functional_test.go:248: (dbg) Run:  powershell.exe -NoProfile -NonInteractive out/minikube-windows-amd64.exe "-p functional-20210310191609-6496 docker-env | Invoke-Expression ; docker images"

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:248: (dbg) Done: powershell.exe -NoProfile -NonInteractive out/minikube-windows-amd64.exe "-p functional-20210310191609-6496 docker-env | Invoke-Expression ; docker images": (7.4351648s)
functional_test.go:264: expected 'docker images' to have "gcr.io/k8s-minikube/storage-provisioner" inside minikube. but the output is: *
-- stdout --
	REPOSITORY                           TAG                               IMAGE ID       CREATED         SIZE
	busybox                              functional-20210309234032-4944    a9d583973f65   22 hours ago    1.23MB
	busybox                              functional-20210310083645-5040    a9d583973f65   22 hours ago    1.23MB
	busybox                              functional-20210310191609-6496    a9d583973f65   22 hours ago    1.23MB
	busybox                              latest                            a9d583973f65   22 hours ago    1.23MB
	gcr.io/k8s-minikube/kicbase          v0.0.18                           a776c544501a   12 days ago     1.08GB
	gcr.io/k8s-minikube/kicbase-builds   v0.0.17-1614202509-10427          c327a49df606   13 days ago     1.08GB
	gcr.io/k8s-minikube/kicbase-builds   v0.0.17-1613934488-10548          098df99b40c7   2 weeks ago     1.02GB
	gcr.io/k8s-minikube/kicbase-builds   v0.0.17-1613704090-10418          4a0aa937f6ff   2 weeks ago     1.02GB
	gcr.io/k8s-minikube/kicbase-builds   v0.0.17-1613701030-10408          9baf26b8c10d   2 weeks ago     1.02GB
	busybox                              functional-20210219145454-9520    491198851f0c   2 weeks ago     1.23MB
	busybox                              functional-20210219220622-3920    491198851f0c   2 weeks ago     1.23MB
	busybox                              functional-20210220004129-7452    491198851f0c   2 weeks ago     1.23MB
	busybox                              functional-20210224014800-800     491198851f0c   2 weeks ago     1.23MB
	busybox                              functional-20210225231842-5736    491198851f0c   2 weeks ago     1.23MB
	busybox                              functional-20210301195830-5700    491198851f0c   2 weeks ago     1.23MB
	busybox                              functional-20210303214129-4588    491198851f0c   2 weeks ago     1.23MB
	busybox                              functional-20210304002630-1156    491198851f0c   2 weeks ago     1.23MB
	busybox                              functional-20210304184021-4052    491198851f0c   2 weeks ago     1.23MB
	busybox                              functional-20210306072141-12056   491198851f0c   2 weeks ago     1.23MB
	busybox                              functional-20210308233820-5396    491198851f0c   2 weeks ago     1.23MB
	busybox                              functional-20210212145109-352     22667f53682a   5 weeks ago     1.23MB
	busybox                              functional-20210213143925-7440    22667f53682a   5 weeks ago     1.23MB
	gcr.io/k8s-minikube/kicbase          v0.0.17                           a9b1f16d8ece   6 weeks ago     985MB
	gcr.io/k8s-minikube/kicbase          v0.0.16-snapshot1                 dc97b09697eb   8 weeks ago     984MB
	minikube-local-cache-test            functional-20210114204234-6692    3124ee2c11ad   2 months ago    30B
	minikube-local-cache-test            functional-20210115023213-8464    3124ee2c11ad   2 months ago    30B
	minikube-local-cache-test            functional-20210115191024-3516    3124ee2c11ad   2 months ago    30B
	minikube-local-cache-test            functional-20210119220838-6552    3124ee2c11ad   2 months ago    30B
	minikube-local-cache-test            functional-20210120022529-1140    3124ee2c11ad   2 months ago    30B
	minikube-local-cache-test            functional-20210120175851-7432    3124ee2c11ad   2 months ago    30B
	minikube-local-cache-test            functional-20210120214442-10992   3124ee2c11ad   2 months ago    30B
	minikube-local-cache-test            functional-20210120231122-7024    3124ee2c11ad   2 months ago    30B
	minikube-local-cache-test            functional-20210123004019-5372    3124ee2c11ad   2 months ago    30B
	minikube-local-cache-test            functional-20210126212539-5172    3124ee2c11ad   2 months ago    30B
	minikube-local-cache-test            functional-20210128021318-232     3124ee2c11ad   2 months ago    30B
	minikube-local-cache-test            functional-20210212145109-352     3124ee2c11ad   2 months ago    30B
	minikube-local-cache-test            functional-20210213143925-7440    3124ee2c11ad   2 months ago    30B
	minikube-local-cache-test            functional-20210219145454-9520    3124ee2c11ad   2 months ago    30B
	minikube-local-cache-test            functional-20210219220622-3920    3124ee2c11ad   2 months ago    30B
	minikube-local-cache-test            functional-20210220004129-7452    3124ee2c11ad   2 months ago    30B
	minikube-local-cache-test            functional-20210224014800-800     3124ee2c11ad   2 months ago    30B
	minikube-local-cache-test            functional-20210225231842-5736    3124ee2c11ad   2 months ago    30B
	minikube-local-cache-test            functional-20210301195830-5700    3124ee2c11ad   2 months ago    30B
	minikube-local-cache-test            functional-20210303214129-4588    3124ee2c11ad   2 months ago    30B
	minikube-local-cache-test            functional-20210304002630-1156    3124ee2c11ad   2 months ago    30B
	minikube-local-cache-test            functional-20210304184021-4052    3124ee2c11ad   2 months ago    30B
	minikube-local-cache-test            functional-20210306072141-12056   3124ee2c11ad   2 months ago    30B
	minikube-local-cache-test            functional-20210308233820-5396    3124ee2c11ad   2 months ago    30B
	minikube-local-cache-test            functional-20210309234032-4944    3124ee2c11ad   2 months ago    30B
	minikube-local-cache-test            functional-20210310083645-5040    3124ee2c11ad   2 months ago    30B
	minikube-local-cache-test            functional-20210310191609-6496    3124ee2c11ad   2 months ago    30B
	gcr.io/k8s-minikube/kicbase          v0.0.15-snapshot4                 06db6ca72446   3 months ago    941MB
	gcr.io/k8s-minikube/kicbase          v0.0.8                            11589cdc9ef4   11 months ago   964MB
	gcr.io/k8s-minikube/kicbase          v0.0.7                            7980bce73693   12 months ago   935MB

-- /stdout --
** stderr ** 
	Invoke-Expression : At line:1 char:36
	+ REM To point your shell to minikube's docker-daemon, run:
	+                                    ~~~~~~~~~~~~~~~~~~~~~~
	The string is missing the terminator: '.
	At line:1 char:79
	+ ... functional-20210310191609-6496 docker-env | Invoke-Expression ; docke ...
	+                                                 ~~~~~~~~~~~~~~~~~
	    + CategoryInfo          : ParserError: (:) [Invoke-Expression], ParseException
	    + FullyQualifiedErrorId : TerminatorExpectedAtEndOfString,Microsoft.PowerShell.Commands.InvokeExpressionCommand
	 

** /stderr **
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:226: ======>  post-mortem[TestFunctional/parallel/DockerEnv]: docker inspect <======
helpers_test.go:227: (dbg) Run:  docker inspect functional-20210310191609-6496
helpers_test.go:231: (dbg) docker inspect functional-20210310191609-6496:

-- stdout --
	[
	    {
	        "Id": "f0e67f0e0197d1e3ab1db5142ae7f8b4a9b85bcae654c8d5257d095025940939",
	        "Created": "2021-03-10T19:16:21.3827053Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 19629,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-03-10T19:16:22.633721Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a776c544501ab7f8d55c0f9d8df39bc284df5e744ef1ab4fa59bbd753c98d5f6",
	        "ResolvConfPath": "/var/lib/docker/containers/f0e67f0e0197d1e3ab1db5142ae7f8b4a9b85bcae654c8d5257d095025940939/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f0e67f0e0197d1e3ab1db5142ae7f8b4a9b85bcae654c8d5257d095025940939/hostname",
	        "HostsPath": "/var/lib/docker/containers/f0e67f0e0197d1e3ab1db5142ae7f8b4a9b85bcae654c8d5257d095025940939/hosts",
	        "LogPath": "/var/lib/docker/containers/f0e67f0e0197d1e3ab1db5142ae7f8b4a9b85bcae654c8d5257d095025940939/f0e67f0e0197d1e3ab1db5142ae7f8b4a9b85bcae654c8d5257d095025940939-json.log",
	        "Name": "/functional-20210310191609-6496",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-20210310191609-6496:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-20210310191609-6496",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a64bc965504dadb26a6b09e565a5346138fc3887af6c9d3d4f52f649a4e3dbbd-init/diff:/var/lib/docker/overlay2/e5312d15a8a915e93a04215cc746ab0f3ee777399eec34d39c62e1159ded6475/diff:/var/lib/docker/overlay2/7a0a882052451678339ecb9d7440ec6d5af2189761df37cbde268f01c77df460/diff:/var/lib/docker/overlay2/746bde091f748e661ba6c95c88447941058b3ed556ae8093cbbb4e946b8cc8fc/diff:/var/lib/docker/overlay2/fc816087ea38e2594891ac87d2dada91401ed5005ffde006568049c9be7a9cb3/diff:/var/lib/docker/overlay2/c938fe5d14accfe7fb1cfd038f5f0c36e7c4e36df3f270be6928a759dfc0318c/diff:/var/lib/docker/overlay2/79f1d61f3af3e525b3f9a25c7bdaddec8275b7d0abab23c045e084119ae755dc/diff:/var/lib/docker/overlay2/90cf1530750ebebdcb04a136dc1eff0ae9c5f7d0f826d2942e3bc67cc0d42206/diff:/var/lib/docker/overlay2/c4edb0a62bb52174a4a17530aa58139aaca1b2f67bef36b9ac59af93c93e2c94/diff:/var/lib/docker/overlay2/684c18bb05910b09352e8708238b2fa6512de91e323e59bd78eacb12954baeb1/diff:/var/lib/docker/overlay2/2e3b944d36fea0e106f5f8885b981d040914c224a656b6480e4ccf00c624527c/diff:/var/lib/docker/overlay2/87489103e2d5dffa2fc32cae802418fe57adcbe87d86e2828e51be8620ef72d6/diff:/var/lib/docker/overlay2/1ceb557df677600f55a42e739f85c2aaa6ce2a1311fb93d5cc655d3e5c25ad62/diff:/var/lib/docker/overlay2/a49e38adf04466e822fa1b2fef7a5de41bb43b706f64d6d440f7e9f95f86634d/diff:/var/lib/docker/overlay2/9dc51530b3628075c33d53a96d8db831d206f65190ca62a4730d525c9d2433bc/diff:/var/lib/docker/overlay2/53f43d443cc953bcbb3b9fe305be45d40de2233dd6a1e94a6e0120c143df01b9/diff:/var/lib/docker/overlay2/5449e801ebacbe613538db4de658f2419f742a0327f7c0f5079ee525f64c2f45/diff:/var/lib/docker/overlay2/ea06a62429770eade2e9df8b04495201586fe6a867b2d3f827db06031f237171/diff:/var/lib/docker/overlay2/d6c5e6e0bb04112b08a45e5a42f364eaa6a1d4d0384b8833c6f268a32b51f9fd/diff:/var/lib/docker/overlay2/6ccb699e5aa7954f17536e35273361a64a06f22e3951b5adad1acd3645302d27/diff:/var/lib/docker/overlay2/004fc7885cd3ddf4f532a6047a393b87947996f738f36ee3e048130b82676264/diff:/var/lib/docker/overlay2/28ff0102771d8fb3c2957c482cc5dde1a6d041144f1be78d385fd4a305b89048/diff:/var/lib/docker/overlay2/7cc8b00a8717390d3cf217bc23ebf0a54386c4dda751f8efe67a113c5f724000/diff:/var/lib/docker/overlay2/4f20f5888dc26fc4b177fad4ffbb2c9e5094bbe36dffc15530b13ee98b5ee0de/diff:/var/lib/docker/overlay2/5070ba0e8a3df0c0cb4c34bb528c4e1aa4cf13d689f2bbbb0a4033d021c3be55/diff:/var/lib/docker/overlay2/85fb29f9d3603bf7f2eb144a096a9fa432e57ecf575b0c26cc8e06a79e791202/diff:/var/lib/docker/overlay2/27786674974d44dc21a18719c8c334da44a78f825923f5e86233f5acb0fb9a6b/diff:/var/lib/docker/overlay2/6161fd89f27732c46f547c0e38ff9f16ab0dded81cef66938df3935f21afad40/diff:/var/lib/docker/overlay2/222f85b8439a41d62b9939a4060303543930a89e81bc2fbb3f5211b3bcc73501/diff:/var/lib/docker/overlay2/0ab769dc143c949b9367c48721351c8571d87199847b23acedce0134a8cf4d0d/diff:/var/lib/docker/overlay2/a3d1b5f3c1ccf55415e08e306d69739f2d6a4346a4b20ff9273ac1417c146a72/diff:/var/lib/docker/overlay2/a6111408a4deb25c6259bdabf49b0b77a4ae38a1c9c76c9dd238ab00e4377edd/diff:/var/lib/docker/overlay2/beb22abaee6a1cc2ce6935aa31a8ea6aa14ba428d35b5c2423d1d9e4b5b1e88e/diff:/var/lib/docker/overlay2/0c228b1d0cd4cc1d3df10a7397af2a91846bfbad745d623822dc200ff7ddd4f7/diff:/var/lib/docker/overlay2/3c7f265116e8b14ac4c171583da8e68ad42c63e9faa735c10d5b01c2889c03d8/diff:/var/lib/docker/overlay2/85c9b77072d5575bcf4bc334d0e85cccec247e2ff21bc8a181ee7c1411481a92/diff:/var/lib/docker/overlay2/1b4797f63f1bf0acad8cd18715c737239cbce844810baf1378b65224c01957ff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a64bc965504dadb26a6b09e565a5346138fc3887af6c9d3d4f52f649a4e3dbbd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a64bc965504dadb26a6b09e565a5346138fc3887af6c9d3d4f52f649a4e3dbbd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a64bc965504dadb26a6b09e565a5346138fc3887af6c9d3d4f52f649a4e3dbbd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-20210310191609-6496",
	                "Source": "/var/lib/docker/volumes/functional-20210310191609-6496/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-20210310191609-6496",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-20210310191609-6496",
	                "name.minikube.sigs.k8s.io": "functional-20210310191609-6496",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "654c08b325fa53207f6e230c568d3463f359afca0d7983f08a2cc1a320ecda5f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55009"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55008"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55005"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55007"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55006"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/654c08b325fa",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-20210310191609-6496": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.97"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f0e67f0e0197",
	                        "functional-20210310191609-6496"
	                    ],
	                    "NetworkID": "2f4279ec0a83c0de1765b109cd172864e996066e0bc6a9bf6eb83db56ffdda48",
	                    "EndpointID": "3933dabd8ffc19bc09f3b793d526cbcd953070264dde135241cd56427f7ca44b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.97",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:61",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20210310191609-6496 -n functional-20210310191609-6496
=== CONT  TestFunctional/parallel/DockerEnv
helpers_test.go:235: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20210310191609-6496 -n functional-20210310191609-6496: (4.6962061s)
helpers_test.go:240: <<< TestFunctional/parallel/DockerEnv FAILED: start of post-mortem logs <<<
helpers_test.go:241: ======>  post-mortem[TestFunctional/parallel/DockerEnv]: minikube logs <======
helpers_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210310191609-6496 logs -n 25
=== CONT  TestFunctional/parallel/DockerEnv
helpers_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210310191609-6496 logs -n 25: (12.9151092s)
helpers_test.go:248: TestFunctional/parallel/DockerEnv logs: 
-- stdout --
	* ==> Docker <==
	* -- Logs begin at Wed 2021-03-10 19:16:23 UTC, end at Wed 2021-03-10 19:25:04 UTC. --
	* Mar 10 19:22:08 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:22:08.188012700Z" level=info msg="ignoring event" container=929909ec81114970bce31e3da88ccf20f2211854116bc96c7bd5cc4089076b5d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 19:22:08 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:22:08.190000600Z" level=info msg="ignoring event" container=612593eab637b14655f6f4f7f15415e99aa65c606a966f5e643568cb4ee3a975 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 19:22:08 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:22:08.257885900Z" level=info msg="ignoring event" container=53e6441ccde57abea4085675b552b13cb56fcade978237fc9f286d120010ada9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 19:22:08 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:22:08.368042900Z" level=info msg="ignoring event" container=0608fef2accd6b37a3f017c5e33f8df5cf550048cd95512e2c111b8434475c71 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 19:22:08 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:22:08.382889100Z" level=info msg="ignoring event" container=86bce7009e19b0aaf4e9922662c6e8fcdb3312868f43d0d7f3205da77466a94e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 19:22:08 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:22:08.456030100Z" level=info msg="ignoring event" container=42fe31721a7a0100981c7b39ca07fc3c27403b00edc21a944bcad7aac52d770c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 19:22:08 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:22:08.553022700Z" level=info msg="ignoring event" container=518bc4942e786a11e0abeb74c11142fd97fdecdeed4afaf25e98c21a200670ad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 19:22:08 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:22:08.577622100Z" level=info msg="ignoring event" container=b49f3dbd8888f19eaae38252c10d1111896efd6dbe539bddae4d3bb419a31218 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 19:22:08 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:22:08.577694000Z" level=info msg="ignoring event" container=b384eab6bf968ee2c0c9cb7abd9224f174595903ac1c9c3f4bad13a503626483 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 19:22:08 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:22:08.661980700Z" level=info msg="ignoring event" container=a135c3a8b1c4b35aef1afd7c933589d4b693fbcd7d97013c5201d45e2800cebe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 19:22:08 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:22:08.746842000Z" level=info msg="ignoring event" container=264af7edaaccfb9bc7f0c8e1782e0e4c641a03a70cce2c67d6369ddfb18bbe03 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 19:22:08 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:22:08.774597800Z" level=info msg="ignoring event" container=3746c3619218fb229716eca74d3f74a82982ebd9fca361ff72fb0d09115d988e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 19:22:10 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:22:10.976699100Z" level=info msg="ignoring event" container=3ef151b16cbf4ad453e799eb810ca3381c9116ca769d348e7a6a7923cd4a2e1d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 19:22:11 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:22:11.561623800Z" level=info msg="ignoring event" container=a91768028d65b5c1475939522cf348ea6db0af2da29dadcf1d3804dc2026185e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 19:22:12 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:22:12.964042900Z" level=info msg="ignoring event" container=05d191ffdb7d251ce2602b58746175b3a9d72da82beb796177483448fa04e4b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 19:22:13 functional-20210310191609-6496 dockerd[755]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
	* Mar 10 19:22:13 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:22:13.466144000Z" level=error msg="Handler for GET /v1.40/containers/bf1329e5bb9f21f31822dee55b16beee95a624bef71445aecd0b76a96b98fcf3/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
	* Mar 10 19:22:28 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:22:28.888806200Z" level=info msg="ignoring event" container=2583ffba0baabd860447ee0978378d2123f0eb6ce0b58ff9f2a1589fefa3031c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 19:22:31 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:22:31.236734500Z" level=info msg="ignoring event" container=de4c09ce17288bc303d450060fbfd018d83ee1c42e621c1a00b7da04b9ffc87c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 19:22:33 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:22:33.552815300Z" level=info msg="ignoring event" container=33f203fb35ef142fafcf7ba273f5f4aaac38587b72c425c2f08b0d76f7b1c31e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 19:22:33 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:22:33.641772300Z" level=info msg="ignoring event" container=4ae9d3937c41e449a01b6fe58c2157e7d9da26bf889ef5ba165ea54c5f601162 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 19:22:33 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:22:33.942539300Z" level=info msg="ignoring event" container=351291380dd336fbd8e28f7b6784a6a03fcd8ae96103337831a89b72ccaf43c3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 19:22:41 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:22:41.428954100Z" level=info msg="ignoring event" container=4ee3320a9a6789fc9d8c5c0f94c3c31604a33ddf2eb9c49fa4de995071a32acb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 19:22:48 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:22:48.686065800Z" level=info msg="ignoring event" container=6d6ff635030096ee2cb4e0e5d86f840d72342a21bb551ae23ba5c16c48095058 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 19:22:56 functional-20210310191609-6496 dockerd[755]: time="2021-03-10T19:22:56.260929800Z" level=info msg="ignoring event" container=ebd1c5c99c539f3674f709a69f623fdf7a4462baee474920ecab87cfa13c3f72 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	* 6fa71db88a442       85069258b98ac       About a minute ago   Running             storage-provisioner       4                   a2e8cecd3b688
	* f3f54b45a7683       a27166429d98e       About a minute ago   Running             kube-controller-manager   3                   468d8cbe64bb0
	* d1301ac39e571       a8c2fdb8bf76e       2 minutes ago        Running             kube-apiserver            2                   d38a70f8137d7
	* 6d6ff63503009       85069258b98ac       2 minutes ago        Exited              storage-provisioner       3                   a2e8cecd3b688
	* ebd1c5c99c539       a27166429d98e       2 minutes ago        Exited              kube-controller-manager   2                   468d8cbe64bb0
	* 29d79fed5a007       bfe3a36ebd252       2 minutes ago        Running             coredns                   1                   a38d1823eb160
	* de4c09ce17288       a8c2fdb8bf76e       2 minutes ago        Exited              kube-apiserver            1                   d38a70f8137d7
	* c876fd7259ddc       ed2c44fbdd78b       2 minutes ago        Running             kube-scheduler            2                   2e4fa1b896e38
	* c88803c2d3281       ed2c44fbdd78b       2 minutes ago        Created             kube-scheduler            1                   2e4fa1b896e38
	* bf1329e5bb9f2       43154ddb57a83       2 minutes ago        Running             kube-proxy                1                   b3fb4867a0ad4
	* c63fd06ccefa0       0369cf4303ffd       2 minutes ago        Running             etcd                      1                   9e5fe80a8880d
	* 05d191ffdb7d2       bfe3a36ebd252       6 minutes ago        Exited              coredns                   0                   a135c3a8b1c4b
	* b384eab6bf968       43154ddb57a83       6 minutes ago        Exited              kube-proxy                0                   264af7edaaccf
	* 53e6441ccde57       0369cf4303ffd       7 minutes ago        Exited              etcd                      0                   0608fef2accd6
	* 
	* ==> coredns [05d191ffdb7d] <==
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	* CoreDNS-1.7.0
	* linux/amd64, go1.14.4, f59c03d
	* [INFO] SIGTERM: Shutting down servers then terminating
	* [INFO] plugin/health: Going into lameduck mode for 5s
	* 
	* ==> coredns [29d79fed5a00] <==
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	* CoreDNS-1.7.0
	* linux/amd64, go1.14.4, f59c03d
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* E0310 19:22:34.880675       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* E0310 19:22:34.880702       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* E0310 19:22:34.882059       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* E0310 19:22:36.035204       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* E0310 19:22:36.215863       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* E0310 19:22:36.435588       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* E0310 19:22:38.142963       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* E0310 19:22:38.315501       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* E0310 19:22:38.916871       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* E0310 19:22:41.828749       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* E0310 19:22:41.845578       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* E0310 19:22:43.081590       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* 
	* ==> describe nodes <==
	* Name:               functional-20210310191609-6496
	* Roles:              control-plane,master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=functional-20210310191609-6496
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=4d52d607da107e3e4541e40b81376520ee87d4c2
	*                     minikube.k8s.io/name=functional-20210310191609-6496
	*                     minikube.k8s.io/updated_at=2021_03_10T19_18_18_0700
	*                     minikube.k8s.io/version=v1.18.1
	*                     node-role.kubernetes.io/control-plane=
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Wed, 10 Mar 2021 19:18:12 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  functional-20210310191609-6496
	*   AcquireTime:     <unset>
	*   RenewTime:       Wed, 10 Mar 2021 19:24:58 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Wed, 10 Mar 2021 19:25:01 +0000   Wed, 10 Mar 2021 19:18:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Wed, 10 Mar 2021 19:25:01 +0000   Wed, 10 Mar 2021 19:18:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Wed, 10 Mar 2021 19:25:01 +0000   Wed, 10 Mar 2021 19:18:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Wed, 10 Mar 2021 19:25:01 +0000   Wed, 10 Mar 2021 19:18:31 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  192.168.49.97
	*   Hostname:    functional-20210310191609-6496
	* Capacity:
	*   cpu:                4
	*   ephemeral-storage:  65792556Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             20481980Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                4
	*   ephemeral-storage:  65792556Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             20481980Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 84fb46bd39d2483a97ab4430ee4a5e3a
	*   System UUID:                998d3515-9968-4eb1-814a-bc80eeeac66f
	*   Boot ID:                    1e43cb90-c73a-415b-9855-33dabbdc5a83
	*   Kernel Version:             4.19.121-linuxkit
	*   OS Image:                   Ubuntu 20.04.1 LTS
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://20.10.3
	*   Kubelet Version:            v1.20.2
	*   Kube-Proxy Version:         v1.20.2
	* PodCIDR:                      10.244.0.0/24
	* PodCIDRs:                     10.244.0.0/24
	* Non-terminated Pods:          (11 in total)
	*   Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	*   default                     hello-node-6cbfcd7cbc-9qfsw                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	*   default                     mysql-9bbbc5bbb-fk6dk                                     600m (15%)    700m (17%)  512Mi (2%)       700Mi (3%)     38s
	*   default                     nginx-svc                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	*   default                     sp-pod                                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	*   kube-system                 coredns-74ff55c5b-62r9g                                   100m (2%)     0 (0%)      70Mi (0%)        170Mi (0%)     6m34s
	*   kube-system                 etcd-functional-20210310191609-6496                       100m (2%)     0 (0%)      100Mi (0%)       0 (0%)         6m44s
	*   kube-system                 kube-apiserver-functional-20210310191609-6496             250m (6%)     0 (0%)      0 (0%)           0 (0%)         2m10s
	*   kube-system                 kube-controller-manager-functional-20210310191609-6496    200m (5%)     0 (0%)      0 (0%)           0 (0%)         6m44s
	*   kube-system                 kube-proxy-l9bb9                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m33s
	*   kube-system                 kube-scheduler-functional-20210310191609-6496             100m (2%)     0 (0%)      0 (0%)           0 (0%)         6m44s
	*   kube-system                 storage-provisioner                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests     Limits
	*   --------           --------     ------
	*   cpu                1350m (33%)  700m (17%)
	*   memory             682Mi (3%)   870Mi (4%)
	*   ephemeral-storage  100Mi (0%)   0 (0%)
	*   hugepages-1Gi      0 (0%)       0 (0%)
	*   hugepages-2Mi      0 (0%)       0 (0%)
	* Events:
	*   Type    Reason                   Age                    From        Message
	*   ----    ------                   ----                   ----        -------
	*   Normal  NodeHasSufficientMemory  7m6s (x7 over 7m7s)    kubelet     Node functional-20210310191609-6496 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    7m6s (x7 over 7m7s)    kubelet     Node functional-20210310191609-6496 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     7m6s (x6 over 7m7s)    kubelet     Node functional-20210310191609-6496 status is now: NodeHasSufficientPID
	*   Normal  Starting                 6m46s                  kubelet     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  6m46s                  kubelet     Node functional-20210310191609-6496 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    6m46s                  kubelet     Node functional-20210310191609-6496 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     6m46s                  kubelet     Node functional-20210310191609-6496 status is now: NodeHasSufficientPID
	*   Normal  NodeNotReady             6m45s                  kubelet     Node functional-20210310191609-6496 status is now: NodeNotReady
	*   Normal  NodeAllocatableEnforced  6m45s                  kubelet     Updated Node Allocatable limit across pods
	*   Normal  NodeReady                6m35s                  kubelet     Node functional-20210310191609-6496 status is now: NodeReady
	*   Normal  Starting                 6m27s                  kube-proxy  Starting kube-proxy.
	*   Normal  Starting                 2m44s                  kubelet     Starting kubelet.
	*   Normal  NodeAllocatableEnforced  2m43s                  kubelet     Updated Node Allocatable limit across pods
	*   Normal  NodeHasSufficientMemory  2m41s (x8 over 2m44s)  kubelet     Node functional-20210310191609-6496 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    2m41s (x7 over 2m44s)  kubelet     Node functional-20210310191609-6496 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     2m41s (x8 over 2m44s)  kubelet     Node functional-20210310191609-6496 status is now: NodeHasSufficientPID
	*   Normal  Starting                 2m35s                  kube-proxy  Starting kube-proxy.
	* 
	* ==> dmesg <==
	* [  +0.000001]  hrtimer_wakeup+0x1e/0x21
	* [  +0.000006]  __hrtimer_run_queues+0x117/0x1c4
	* [  +0.000004]  ? ktime_get_update_offsets_now+0x36/0x95
	* [  +0.000002]  hrtimer_interrupt+0x92/0x165
	* [  +0.000004]  hv_stimer0_isr+0x20/0x2d
	* [  +0.000008]  hv_stimer0_vector_handler+0x3b/0x57
	* [  +0.000010]  hv_stimer0_callback_vector+0xf/0x20
	* [  +0.000001]  </IRQ>
	* [  +0.000002] RIP: 0010:native_safe_halt+0x7/0x8
	* [  +0.000002] Code: 60 02 df f0 83 44 24 fc 00 48 8b 00 a8 08 74 0b 65 81 25 dd ce 6f 71 ff ff ff 7f c3 e8 ce e6 72 ff f4 c3 e8 c7 e6 72 ff fb f4 <c3> 0f 1f 44 00 00 53 e8 69 0e 82 ff 65 8b 35 83 64 6f 71 31 ff e8
	* [  +0.000001] RSP: 0018:ffffffff8f203eb0 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff12
	* [  +0.000002] RAX: ffffffff8e918b30 RBX: 0000000000000000 RCX: ffffffff8f253150
	* [  +0.000001] RDX: 000000000012167e RSI: 0000000000000000 RDI: 0000000000000001
	* [  +0.000001] RBP: 0000000000000000 R08: 00000066a1710248 R09: 0000006be2541d3e
	* [  +0.000001] R10: ffff9130ad802288 R11: 0000000000000000 R12: 0000000000000000
	* [  +0.000001] R13: ffffffff8f215780 R14: 00000000f6d76244 R15: 0000000000000000
	* [  +0.000002]  ? __sched_text_end+0x1/0x1
	* [  +0.000011]  default_idle+0x1b/0x2c
	* [  +0.000001]  do_idle+0xe5/0x216
	* [  +0.000003]  cpu_startup_entry+0x6f/0x71
	* [  +0.000003]  start_kernel+0x4f6/0x514
	* [  +0.000006]  secondary_startup_64+0xa4/0xb0
	* [  +0.000006] ---[ end trace 8aa9ce4b885e8e86 ]---
	* [ +25.977799] hrtimer: interrupt took 3356400 ns
	* [Mar10 19:08] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	* 
	* ==> etcd [53e6441ccde5] <==
	* 2021-03-10 19:18:45.035754 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:18:55.038680 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:19:04.993917 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:19:14.993996 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:19:24.994220 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:19:34.995374 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:19:44.998685 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:19:54.996039 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:20:04.997415 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:20:14.997241 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:20:24.995916 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:20:35.003865 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:20:44.993181 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:20:54.990512 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:21:05.075092 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:21:14.997501 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:21:24.996927 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:21:34.998667 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:21:44.995526 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:21:55.000410 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:22:04.997217 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:22:07.270719 N | pkg/osutil: received terminated signal, shutting down...
	* WARNING: 2021/03/10 19:22:07 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* WARNING: 2021/03/10 19:22:07 grpc: addrConn.createTransport failed to connect to {192.168.49.97:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 192.168.49.97:2379: connect: connection refused". Reconnecting...
	* 2021-03-10 19:22:07.362074 I | etcdserver: skipped leadership transfer for single voting member cluster
	* 
	* ==> etcd [c63fd06ccefa] <==
	* 2021-03-10 19:22:12.663691 I | embed: serving client requests on 127.0.0.1:2379
	* 2021-03-10 19:22:29.238523 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:22:34.394513 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:22:44.394087 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:22:54.393977 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:23:04.393329 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:23:14.435212 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:23:24.393528 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:23:34.393312 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:23:44.394103 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:23:54.393234 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:24:04.394079 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:24:14.394781 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:24:24.413837 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:24:25.130254 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" " with result "range_response_count:1 size:260" took too long (134.165ms) to execute
	* 2021-03-10 19:24:25.651199 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:7 size:36534" took too long (207.0149ms) to execute
	* 2021-03-10 19:24:25.756249 W | etcdserver: read-only range request "key:\"/registry/ranges/serviceips\" " with result "range_response_count:1 size:118" took too long (275.6952ms) to execute
	* 2021-03-10 19:24:25.993047 W | etcdserver: read-only range request "key:\"/registry/replicasets/default/hello-node-6cbfcd7cbc\" " with result "range_response_count:1 size:1752" took too long (120.531ms) to execute
	* 2021-03-10 19:24:26.696689 W | etcdserver: read-only range request "key:\"/registry/deployments/default/mysql\" " with result "range_response_count:0 size:5" took too long (103.9024ms) to execute
	* 2021-03-10 19:24:26.891785 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/nginx-svc\" " with result "range_response_count:0 size:5" took too long (245.9744ms) to execute
	* 2021-03-10 19:24:28.536773 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:7 size:36534" took too long (144.214ms) to execute
	* 2021-03-10 19:24:34.435513 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:24:44.401670 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:24:54.394891 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:25:04.396039 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 
	* ==> kernel <==
	*  19:25:08 up 25 min,  0 users,  load average: 8.75, 5.45, 4.34
	* Linux functional-20210310191609-6496 4.19.121-linuxkit #1 SMP Tue Dec 1 17:50:32 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [d1301ac39e57] <==
	* I0310 19:23:31.647698       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 19:23:31.647718       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* I0310 19:24:08.271323       1 client.go:360] parsed scheme: "passthrough"
	* I0310 19:24:08.271932       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 19:24:08.271967       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* I0310 19:24:19.374697       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	* I0310 19:24:19.454345       1 controller.go:609] quota admission added evaluator for: deployments.apps
	* I0310 19:24:19.595357       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	* I0310 19:24:19.670177       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	* I0310 19:24:19.685282       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	* I0310 19:24:24.845776       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	* I0310 19:24:25.902998       1 controller.go:609] quota admission added evaluator for: events.events.k8s.io
	* I0310 19:24:26.151043       1 trace.go:205] Trace[989470014]: "GuaranteedUpdate etcd3" type:*core.RangeAllocation (10-Mar-2021 19:24:25.478) (total time: 672ms):
	* Trace[989470014]: ---"initial value restored" 363ms (19:24:00.842)
	* Trace[989470014]: ---"Transaction committed" 306ms (19:24:00.149)
	* Trace[989470014]: [672.5942ms] [672.5942ms] END
	* I0310 19:24:26.280664       1 trace.go:205] Trace[1258039178]: "Create" url:/api/v1/namespaces/default/services,user-agent:kubectl/v1.19.3 (windows/amd64) kubernetes/1e11e4a,client:192.168.49.1 (10-Mar-2021 19:24:25.385) (total time: 895ms):
	* Trace[1258039178]: ---"Object stored in database" 825ms (19:24:00.280)
	* Trace[1258039178]: [895.1547ms] [895.1547ms] END
	* I0310 19:24:27.777076       1 trace.go:205] Trace[433587762]: "Create" url:/api/v1/namespaces/default/services,user-agent:kubectl/v1.19.3 (windows/amd64) kubernetes/1e11e4a,client:192.168.49.1 (10-Mar-2021 19:24:27.260) (total time: 516ms):
	* Trace[433587762]: ---"Object stored in database" 515ms (19:24:00.776)
	* Trace[433587762]: [516.2712ms] [516.2712ms] END
	* I0310 19:24:51.692658       1 client.go:360] parsed scheme: "passthrough"
	* I0310 19:24:51.692772       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 19:24:51.692802       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* 
	* ==> kube-apiserver [de4c09ce1728] <==
	* Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
	* I0310 19:22:30.941794       1 server.go:632] external host was not specified, using 192.168.49.97
	* I0310 19:22:30.943978       1 server.go:182] Version: v1.20.2
	* Error: failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use
	* 
	* ==> kube-controller-manager [ebd1c5c99c53] <==
	* 	/usr/local/go/src/bytes/buffer.go:204 +0xb1
	* crypto/tls.(*Conn).readFromUntil(0xc0009a9500, 0x4da5040, 0xc000130270, 0x5, 0xc000130270, 0x99)
	* 	/usr/local/go/src/crypto/tls/conn.go:801 +0xf3
	* crypto/tls.(*Conn).readRecordOrCCS(0xc0009a9500, 0x0, 0x0, 0xc00010dd18)
	* 	/usr/local/go/src/crypto/tls/conn.go:608 +0x115
	* crypto/tls.(*Conn).readRecord(...)
	* 	/usr/local/go/src/crypto/tls/conn.go:576
	* crypto/tls.(*Conn).Read(0xc0009a9500, 0xc000f55000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	* 	/usr/local/go/src/crypto/tls/conn.go:1252 +0x15f
	* bufio.(*Reader).Read(0xc0002b22a0, 0xc000f44118, 0x9, 0x9, 0xc00010dd18, 0x4905800, 0x9b77ab)
	* 	/usr/local/go/src/bufio/bufio.go:227 +0x222
	* io.ReadAtLeast(0x4d9eba0, 0xc0002b22a0, 0xc000f44118, 0x9, 0x9, 0x9, 0xc000116040, 0x0, 0x4d9efe0)
	* 	/usr/local/go/src/io/io.go:314 +0x87
	* io.ReadFull(...)
	* 	/usr/local/go/src/io/io.go:333
	* k8s.io/kubernetes/vendor/golang.org/x/net/http2.readFrameHeader(0xc000f44118, 0x9, 0x9, 0x4d9eba0, 0xc0002b22a0, 0x0, 0x0, 0xc00010ddd0, 0x46d045)
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:237 +0x89
	* k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc000f440e0, 0xc000c27260, 0x0, 0x0, 0x0)
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	* k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*clientConnReadLoop).run(0xc00010dfa8, 0x0, 0x0)
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:1819 +0xd8
	* k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).readLoop(0xc0002d9500)
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:1741 +0x6f
	* created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).newClientConn
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:705 +0x6c5
	* 
	* ==> kube-controller-manager [f3f54b45a768] <==
	* I0310 19:23:28.436078       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	* I0310 19:23:28.436352       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	* I0310 19:23:28.441357       1 shared_informer.go:247] Caches are synced for endpoint 
	* I0310 19:23:28.450128       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	* I0310 19:23:28.450818       1 shared_informer.go:247] Caches are synced for taint 
	* I0310 19:23:28.456389       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	* W0310 19:23:28.457020       1 node_lifecycle_controller.go:1044] Missing timestamp for Node functional-20210310191609-6496. Assuming now as a timestamp.
	* I0310 19:23:28.457177       1 node_lifecycle_controller.go:1245] Controller detected that zone  is now in state Normal.
	* I0310 19:23:28.458014       1 taint_manager.go:187] Starting NoExecuteTaintManager
	* I0310 19:23:28.458561       1 event.go:291] "Event occurred" object="functional-20210310191609-6496" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-20210310191609-6496 event: Registered Node functional-20210310191609-6496 in Controller"
	* I0310 19:23:28.555127       1 shared_informer.go:247] Caches are synced for resource quota 
	* I0310 19:23:28.565734       1 shared_informer.go:247] Caches are synced for daemon sets 
	* I0310 19:23:28.637981       1 shared_informer.go:247] Caches are synced for stateful set 
	* I0310 19:23:28.718801       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	* I0310 19:23:29.020246       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I0310 19:23:29.042833       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I0310 19:23:29.043397       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* I0310 19:23:29.397178       1 request.go:655] Throttling request took 1.0411107s, request: GET:https://192.168.49.97:8441/apis/extensions/v1beta1?timeout=32s
	* I0310 19:23:30.199929       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	* I0310 19:23:30.200196       1 shared_informer.go:247] Caches are synced for resource quota 
	* I0310 19:24:24.900844       1 event.go:291] "Event occurred" object="default/hello-node" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-6cbfcd7cbc to 1"
	* I0310 19:24:25.640038       1 event.go:291] "Event occurred" object="default/hello-node-6cbfcd7cbc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-6cbfcd7cbc-9qfsw"
	* I0310 19:24:28.072322       1 event.go:291] "Event occurred" object="default/mysql" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set mysql-9bbbc5bbb to 1"
	* I0310 19:24:28.149255       1 event.go:291] "Event occurred" object="default/mysql-9bbbc5bbb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mysql-9bbbc5bbb-fk6dk"
	* I0310 19:24:32.035825       1 event.go:291] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	* 
	* ==> kube-proxy [b384eab6bf96] <==
	* I0310 19:18:39.379890       1 node.go:172] Successfully retrieved node IP: 192.168.49.97
	* I0310 19:18:39.380392       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.49.97), assume IPv4 operation
	* W0310 19:18:39.563908       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	* I0310 19:18:39.564100       1 server_others.go:185] Using iptables Proxier.
	* I0310 19:18:39.564841       1 server.go:650] Version: v1.20.2
	* I0310 19:18:39.565961       1 conntrack.go:52] Setting nf_conntrack_max to 131072
	* I0310 19:18:39.566749       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I0310 19:18:39.566821       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I0310 19:18:39.574907       1 config.go:315] Starting service config controller
	* I0310 19:18:39.574944       1 shared_informer.go:240] Waiting for caches to sync for service config
	* I0310 19:18:39.586846       1 config.go:224] Starting endpoint slice config controller
	* I0310 19:18:39.586873       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	* I0310 19:18:39.587820       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	* I0310 19:18:39.684897       1 shared_informer.go:247] Caches are synced for service config 
	* 
	* ==> kube-proxy [bf1329e5bb9f] <==
	* E0310 19:22:13.875801       1 node.go:161] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20210310191609-6496": dial tcp 192.168.49.97:8441: connect: connection refused
	* E0310 19:22:25.041931       1 node.go:161] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20210310191609-6496": net/http: TLS handshake timeout
	* I0310 19:22:31.466006       1 node.go:172] Successfully retrieved node IP: 192.168.49.97
	* I0310 19:22:31.466673       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.49.97), assume IPv4 operation
	* W0310 19:22:31.843059       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	* I0310 19:22:31.843751       1 server_others.go:185] Using iptables Proxier.
	* I0310 19:22:31.844959       1 server.go:650] Version: v1.20.2
	* I0310 19:22:31.847801       1 conntrack.go:52] Setting nf_conntrack_max to 131072
	* I0310 19:22:31.867220       1 config.go:224] Starting endpoint slice config controller
	* I0310 19:22:31.867657       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	* I0310 19:22:31.874672       1 config.go:315] Starting service config controller
	* I0310 19:22:31.874698       1 shared_informer.go:240] Waiting for caches to sync for service config
	* I0310 19:22:31.969491       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	* I0310 19:22:32.054239       1 shared_informer.go:247] Caches are synced for service config 
	* 
	* ==> kube-scheduler [c876fd7259dd] <==
	* W0310 19:22:31.252473       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	* W0310 19:22:31.252540       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	* W0310 19:22:31.252573       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	* W0310 19:22:31.252583       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	* I0310 19:22:31.534938       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I0310 19:22:31.534999       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I0310 19:22:31.485243       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	* I0310 19:22:31.545423       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	* I0310 19:22:31.737961       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* W0310 19:22:33.464955       1 reflector.go:436] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: watch of *v1.ConfigMap ended with: very short watch: k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Unexpected watch close - watch lasted less than a second and no items received
	* W0310 19:22:33.471198       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.CSINode ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
	* E0310 19:22:34.269702       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.49.97:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=617": dial tcp 192.168.49.97:8441: connect: connection refused
	* E0310 19:22:34.414130       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.49.97:8441/apis/storage.k8s.io/v1/csinodes?resourceVersion=617": dial tcp 192.168.49.97:8441: connect: connection refused
	* E0310 19:22:36.919166       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.49.97:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=617": dial tcp 192.168.49.97:8441: connect: connection refused
	* E0310 19:22:37.552052       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.49.97:8441/apis/storage.k8s.io/v1/csinodes?resourceVersion=617": dial tcp 192.168.49.97:8441: connect: connection refused
	* E0310 19:22:41.411677       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.49.97:8441/apis/storage.k8s.io/v1/csinodes?resourceVersion=617": dial tcp 192.168.49.97:8441: connect: connection refused
	* E0310 19:22:42.906175       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.49.97:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=617": dial tcp 192.168.49.97:8441: connect: connection refused
	* E0310 19:22:55.663962       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: unknown (get pods)
	* E0310 19:22:55.664913       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)
	* E0310 19:22:55.665940       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)
	* E0310 19:22:55.671699       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E0310 19:22:55.671865       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: unknown (get nodes)
	* E0310 19:22:55.672049       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)
	* E0310 19:22:55.672222       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)
	* E0310 19:22:55.672425       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)
	* 
	* ==> kube-scheduler [c88803c2d328] <==
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-03-10 19:16:23 UTC, end at Wed 2021-03-10 19:25:11 UTC. --
	* Mar 10 19:23:13 functional-20210310191609-6496 kubelet[10200]: I0310 19:23:13.936613   10200 scope.go:95] [topologymanager] RemoveContainer - Container ID: ebd1c5c99c539f3674f709a69f623fdf7a4462baee474920ecab87cfa13c3f72
	* Mar 10 19:23:14 functional-20210310191609-6496 kubelet[10200]: I0310 19:23:14.937202   10200 scope.go:95] [topologymanager] RemoveContainer - Container ID: 6d6ff635030096ee2cb4e0e5d86f840d72342a21bb551ae23ba5c16c48095058
	* Mar 10 19:23:23 functional-20210310191609-6496 kubelet[10200]: I0310 19:23:23.289247   10200 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4ae9d3937c41e449a01b6fe58c2157e7d9da26bf889ef5ba165ea54c5f601162
	* Mar 10 19:23:23 functional-20210310191609-6496 kubelet[10200]: E0310 19:23:23.382448   10200 fsHandler.go:114] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/docker/overlay2/49091e3619a1cee4ed2037692587ec3479c249a52bafcaefafa2629265afd828/diff" to get inode usage: stat /var/lib/docker/overlay2/49091e3619a1cee4ed2037692587ec3479c249a52bafcaefafa2629265afd828/diff: no such file or directory, extraDiskErr: could not stat "/var/lib/docker/containers/4ae9d3937c41e449a01b6fe58c2157e7d9da26bf889ef5ba165ea54c5f601162" to get inode usage: stat /var/lib/docker/containers/4ae9d3937c41e449a01b6fe58c2157e7d9da26bf889ef5ba165ea54c5f601162: no such file or directory
	* Mar 10 19:23:23 functional-20210310191609-6496 kubelet[10200]: E0310 19:23:23.457135   10200 fsHandler.go:114] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/docker/overlay2/71effa54e4d404a57deec7146f12f38d534af98025f4e9f740996129966c9494/diff" to get inode usage: stat /var/lib/docker/overlay2/71effa54e4d404a57deec7146f12f38d534af98025f4e9f740996129966c9494/diff: no such file or directory, extraDiskErr: could not stat "/var/lib/docker/containers/4ee3320a9a6789fc9d8c5c0f94c3c31604a33ddf2eb9c49fa4de995071a32acb" to get inode usage: stat /var/lib/docker/containers/4ee3320a9a6789fc9d8c5c0f94c3c31604a33ddf2eb9c49fa4de995071a32acb: no such file or directory
	* Mar 10 19:23:23 functional-20210310191609-6496 kubelet[10200]: E0310 19:23:23.474069   10200 fsHandler.go:114] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/docker/overlay2/ed4eb4ef7746ae9ca60acbef7ad5eb6584623e3a55535c9c7764f054c6abf59f/diff" to get inode usage: stat /var/lib/docker/overlay2/ed4eb4ef7746ae9ca60acbef7ad5eb6584623e3a55535c9c7764f054c6abf59f/diff: no such file or directory, extraDiskErr: could not stat "/var/lib/docker/containers/33f203fb35ef142fafcf7ba273f5f4aaac38587b72c425c2f08b0d76f7b1c31e" to get inode usage: stat /var/lib/docker/containers/33f203fb35ef142fafcf7ba273f5f4aaac38587b72c425c2f08b0d76f7b1c31e: no such file or directory
	* Mar 10 19:24:25 functional-20210310191609-6496 kubelet[10200]: I0310 19:24:25.838881   10200 topology_manager.go:187] [topologymanager] Topology Admit Handler
	* Mar 10 19:24:26 functional-20210310191609-6496 kubelet[10200]: I0310 19:24:26.037074   10200 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-flkwv" (UniqueName: "kubernetes.io/secret/95e18738-e4d4-4b7f-acef-874cc6b8a09c-default-token-flkwv") pod "hello-node-6cbfcd7cbc-9qfsw" (UID: "95e18738-e4d4-4b7f-acef-874cc6b8a09c")
	* Mar 10 19:24:27 functional-20210310191609-6496 kubelet[10200]: I0310 19:24:27.048837   10200 topology_manager.go:187] [topologymanager] Topology Admit Handler
	* Mar 10 19:24:27 functional-20210310191609-6496 kubelet[10200]: I0310 19:24:27.297720   10200 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-flkwv" (UniqueName: "kubernetes.io/secret/e3f783f0-4c71-4d74-ab9d-d91e716af763-default-token-flkwv") pod "nginx-svc" (UID: "e3f783f0-4c71-4d74-ab9d-d91e716af763")
	* Mar 10 19:24:28 functional-20210310191609-6496 kubelet[10200]: I0310 19:24:28.573723   10200 topology_manager.go:187] [topologymanager] Topology Admit Handler
	* Mar 10 19:24:28 functional-20210310191609-6496 kubelet[10200]: I0310 19:24:28.735651   10200 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-flkwv" (UniqueName: "kubernetes.io/secret/52ef5685-caa1-41a7-baad-5ef0345dab0c-default-token-flkwv") pod "mysql-9bbbc5bbb-fk6dk" (UID: "52ef5685-caa1-41a7-baad-5ef0345dab0c")
	* Mar 10 19:24:34 functional-20210310191609-6496 kubelet[10200]: I0310 19:24:34.939979   10200 topology_manager.go:187] [topologymanager] Topology Admit Handler
	* Mar 10 19:24:35 functional-20210310191609-6496 kubelet[10200]: I0310 19:24:35.361436   10200 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-b036f589-c209-4bfe-a093-7337da0c3fc4" (UniqueName: "kubernetes.io/host-path/b61d403c-522c-49fc-9951-04d17371a179-pvc-b036f589-c209-4bfe-a093-7337da0c3fc4") pod "sp-pod" (UID: "b61d403c-522c-49fc-9951-04d17371a179")
	* Mar 10 19:24:35 functional-20210310191609-6496 kubelet[10200]: I0310 19:24:35.377257   10200 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-flkwv" (UniqueName: "kubernetes.io/secret/b61d403c-522c-49fc-9951-04d17371a179-default-token-flkwv") pod "sp-pod" (UID: "b61d403c-522c-49fc-9951-04d17371a179")
	* Mar 10 19:24:39 functional-20210310191609-6496 kubelet[10200]: W0310 19:24:39.946018   10200 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/mysql-9bbbc5bbb-fk6dk through plugin: invalid network status for
	* Mar 10 19:24:39 functional-20210310191609-6496 kubelet[10200]: W0310 19:24:39.974730   10200 pod_container_deletor.go:79] Container "1b801a20620287fc39ff53eb0f2f46aecf2928c237bd7baf2409d235dae8c6e0" not found in pod's containers
	* Mar 10 19:24:40 functional-20210310191609-6496 kubelet[10200]: W0310 19:24:40.038391   10200 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-node-6cbfcd7cbc-9qfsw through plugin: invalid network status for
	* Mar 10 19:24:40 functional-20210310191609-6496 kubelet[10200]: W0310 19:24:40.780661   10200 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/nginx-svc through plugin: invalid network status for
	* Mar 10 19:24:41 functional-20210310191609-6496 kubelet[10200]: W0310 19:24:41.756667   10200 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-node-6cbfcd7cbc-9qfsw through plugin: invalid network status for
	* Mar 10 19:24:43 functional-20210310191609-6496 kubelet[10200]: W0310 19:24:43.783616   10200 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/sp-pod through plugin: invalid network status for
	* Mar 10 19:24:43 functional-20210310191609-6496 kubelet[10200]: W0310 19:24:43.837595   10200 pod_container_deletor.go:79] Container "8fd42e0fcf7e39a6a31ed1d911eded74d6d2a4df8a837dd9eadbb7fdd75800f9" not found in pod's containers
	* Mar 10 19:24:43 functional-20210310191609-6496 kubelet[10200]: W0310 19:24:43.999026   10200 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/mysql-9bbbc5bbb-fk6dk through plugin: invalid network status for
	* Mar 10 19:24:44 functional-20210310191609-6496 kubelet[10200]: W0310 19:24:44.090445   10200 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/nginx-svc through plugin: invalid network status for
	* Mar 10 19:24:45 functional-20210310191609-6496 kubelet[10200]: W0310 19:24:45.443317   10200 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/sp-pod through plugin: invalid network status for
	* 
	* ==> storage-provisioner [6d6ff6350300] <==
	* I0310 19:22:48.543561       1 storage_provisioner.go:115] Initializing the minikube storage provisioner...
	* F0310 19:22:48.550608       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	* 
	* ==> storage-provisioner [6fa71db88a44] <==
	* I0310 19:23:15.479570       1 storage_provisioner.go:115] Initializing the minikube storage provisioner...
	* I0310 19:23:15.572617       1 storage_provisioner.go:140] Storage provisioner initialized, now starting service!
	* I0310 19:23:15.574953       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
	* I0310 19:23:33.163425       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	* I0310 19:23:33.166468       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ebd69c14-d580-42f3-81ac-15faea000e2e", APIVersion:"v1", ResourceVersion:"723", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20210310191609-6496_24713bc6-1c2a-4e47-a725-414dd293d655 became leader
	* I0310 19:23:33.166551       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_functional-20210310191609-6496_24713bc6-1c2a-4e47-a725-414dd293d655!
	* I0310 19:23:33.267375       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_functional-20210310191609-6496_24713bc6-1c2a-4e47-a725-414dd293d655!
	* I0310 19:24:32.182804       1 controller.go:1284] provision "default/myclaim" class "standard": started
	* I0310 19:24:32.256027       1 storage_provisioner.go:60] Provisioning volume {&StorageClass{ObjectMeta:{standard    86def7d9-3168-4ecb-abc0-3bc53a9b02dc 447 0 2021-03-10 19:18:39 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	*  storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2021-03-10 19:18:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 107 117 98 101 99 116 108 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 108 97 115 116 45 97 112 112 108 105 101 100 45 99 111 110 102 105 103 117 114 97 116 105 111 110 34 58 123 125 44 34 102 58 115 116 111 114 97 103 101 99 108 97 115 115 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 105 115 45 100 101 102 97 117 108 116 45 99 108 97 115 115 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 111 110 109 97 110 97 103 101 114 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 111 100 101 34 58 123 125 125 125 44 34 102 58 112 114 111 118 105 115 105 111 110 101 114 34 58 123 125 44 34 102 58 114 101 99 108
97 105 109 80 111 108 105 99 121 34 58 123 125 44 34 102 58 118 111 108 117 109 101 66 105 110 100 105 110 103 77 111 100 101 34 58 123 125 125],}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-b036f589-c209-4bfe-a093-7337da0c3fc4 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  b036f589-c209-4bfe-a093-7337da0c3fc4 820 0 2021-03-10 19:24:31 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	*  volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2021-03-10 19:24:31 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 118 111 108 117 109 101 46 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 115 116 111 114 97 103 101 45 112 114 111 118 105 115 105 111 110 101 114 34 58 123 125 125 125 125],}} {kubectl-client-side-apply Update v1 2021-03-10 19:24:31 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 107 117 98 101 99 116 108 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 108 97 115 116 45 97 112 112 108 105 101 100 45 99 111 110 102 105 103 117 114 97 116 105 111 110 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58
97 99 99 101 115 115 77 111 100 101 115 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 34 102 58 114 101 113 117 101 115 116 115 34 58 123 34 46 34 58 123 125 44 34 102 58 115 116 111 114 97 103 101 34 58 123 125 125 125 44 34 102 58 118 111 108 117 109 101 77 111 100 101 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 112 104 97 115 101 34 58 123 125 125 125],}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	* I0310 19:24:32.292248       1 controller.go:1392] provision "default/myclaim" class "standard": volume "pvc-b036f589-c209-4bfe-a093-7337da0c3fc4" provisioned
	* I0310 19:24:32.292317       1 controller.go:1409] provision "default/myclaim" class "standard": succeeded
	* I0310 19:24:32.292352       1 volume_store.go:212] Trying to save persistentvolume "pvc-b036f589-c209-4bfe-a093-7337da0c3fc4"
	* I0310 19:24:32.294444       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"b036f589-c209-4bfe-a093-7337da0c3fc4", APIVersion:"v1", ResourceVersion:"820", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	* I0310 19:24:32.470816       1 volume_store.go:219] persistentvolume "pvc-b036f589-c209-4bfe-a093-7337da0c3fc4" saved
	* I0310 19:24:32.471896       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"b036f589-c209-4bfe-a093-7337da0c3fc4", APIVersion:"v1", ResourceVersion:"820", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-b036f589-c209-4bfe-a093-7337da0c3fc4
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|--------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| Command |                  Args                  |            Profile             |          User           | Version |          Start Time           |           End Time            |
	|---------|----------------------------------------|--------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| -p      | functional-20210310191609-6496         | functional-20210310191609-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:24:24 GMT | Wed, 10 Mar 2021 19:24:24 GMT |
	|         | config unset cpus                      |                                |                         |         |                               |                               |
	| -p      | functional-20210310191609-6496         | functional-20210310191609-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:24:24 GMT | Wed, 10 Mar 2021 19:24:25 GMT |
	|         | addons list                            |                                |                         |         |                               |                               |
	| -p      | functional-20210310191609-6496         | functional-20210310191609-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:24:25 GMT | Wed, 10 Mar 2021 19:24:25 GMT |
	|         | config set cpus 2                      |                                |                         |         |                               |                               |
	| -p      | functional-20210310191609-6496         | functional-20210310191609-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:24:26 GMT | Wed, 10 Mar 2021 19:24:26 GMT |
	|         | addons list -o json                    |                                |                         |         |                               |                               |
	| -p      | functional-20210310191609-6496         | functional-20210310191609-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:24:26 GMT | Wed, 10 Mar 2021 19:24:26 GMT |
	|         | config get cpus                        |                                |                         |         |                               |                               |
	| -p      | functional-20210310191609-6496         | functional-20210310191609-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:24:27 GMT | Wed, 10 Mar 2021 19:24:27 GMT |
	|         | config unset cpus                      |                                |                         |         |                               |                               |
	| -p      | functional-20210310191609-6496         | functional-20210310191609-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:24:24 GMT | Wed, 10 Mar 2021 19:24:29 GMT |
	|         | ssh echo hello                         |                                |                         |         |                               |                               |
	| -p      | functional-20210310191609-6496         | functional-20210310191609-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:24:24 GMT | Wed, 10 Mar 2021 19:24:29 GMT |
	|         | ssh sudo cat                           |                                |                         |         |                               |                               |
	|         | /etc/ssl/certs/6496.pem                |                                |                         |         |                               |                               |
	| -p      | functional-20210310191609-6496         | functional-20210310191609-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:24:24 GMT | Wed, 10 Mar 2021 19:24:29 GMT |
	|         | ssh sudo cat                           |                                |                         |         |                               |                               |
	|         | /etc/test/nested/copy/6496/hosts       |                                |                         |         |                               |                               |
	| -p      | functional-20210310191609-6496         | functional-20210310191609-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:24:29 GMT | Wed, 10 Mar 2021 19:24:34 GMT |
	|         | ssh cat /etc/hostname                  |                                |                         |         |                               |                               |
	| -p      | functional-20210310191609-6496         | functional-20210310191609-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:24:30 GMT | Wed, 10 Mar 2021 19:24:34 GMT |
	|         | ssh sudo cat                           |                                |                         |         |                               |                               |
	|         | /usr/share/ca-certificates/6496.pem    |                                |                         |         |                               |                               |
	| -p      | functional-20210310191609-6496         | functional-20210310191609-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:24:34 GMT | Wed, 10 Mar 2021 19:24:38 GMT |
	|         | ssh sudo cat                           |                                |                         |         |                               |                               |
	|         | /etc/ssl/certs/51391683.0              |                                |                         |         |                               |                               |
	| -p      | functional-20210310191609-6496         | functional-20210310191609-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:24:34 GMT | Wed, 10 Mar 2021 19:24:41 GMT |
	|         | docker-env                             |                                |                         |         |                               |                               |
	| profile | list --output json                     | minikube                       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:24:39 GMT | Wed, 10 Mar 2021 19:24:43 GMT |
	| -p      | functional-20210310191609-6496         | functional-20210310191609-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:24:30 GMT | Wed, 10 Mar 2021 19:24:44 GMT |
	|         | image load                             |                                |                         |         |                               |                               |
	|         | busybox:functional-20210310191609-6496 |                                |                         |         |                               |                               |
	| profile | list                                   | minikube                       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:24:44 GMT | Wed, 10 Mar 2021 19:24:49 GMT |
	| ssh     | -p functional-20210310191609-6496      | functional-20210310191609-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:24:45 GMT | Wed, 10 Mar 2021 19:24:49 GMT |
	|         | -- docker image inspect                |                                |                         |         |                               |                               |
	|         | busybox:functional-20210310191609-6496 |                                |                         |         |                               |                               |
	| profile | list -l                                | minikube                       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:24:49 GMT | Wed, 10 Mar 2021 19:24:49 GMT |
	| -p      | functional-20210310191609-6496         | functional-20210310191609-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:24:47 GMT | Wed, 10 Mar 2021 19:24:53 GMT |
	|         | docker-env                             |                                |                         |         |                               |                               |
	| profile | list -o json                           | minikube                       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:24:50 GMT | Wed, 10 Mar 2021 19:24:55 GMT |
	| profile | list -o json --light                   | minikube                       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:24:55 GMT | Wed, 10 Mar 2021 19:24:55 GMT |
	| -p      | functional-20210310191609-6496         | functional-20210310191609-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:24:54 GMT | Wed, 10 Mar 2021 19:24:55 GMT |
	|         | update-context                         |                                |                         |         |                               |                               |
	|         | --alsologtostderr -v=2                 |                                |                         |         |                               |                               |
	| -p      | functional-20210310191609-6496         | functional-20210310191609-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:24:56 GMT | Wed, 10 Mar 2021 19:24:57 GMT |
	|         | update-context                         |                                |                         |         |                               |                               |
	|         | --alsologtostderr -v=2                 |                                |                         |         |                               |                               |
	| -p      | functional-20210310191609-6496         | functional-20210310191609-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:24:56 GMT | Wed, 10 Mar 2021 19:24:57 GMT |
	|         | update-context                         |                                |                         |         |                               |                               |
	|         | --alsologtostderr -v=2                 |                                |                         |         |                               |                               |
	| -p      | functional-20210310191609-6496         | functional-20210310191609-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:24:30 GMT | Wed, 10 Mar 2021 19:25:00 GMT |
	|         | logs                                   |                                |                         |         |                               |                               |
	|---------|----------------------------------------|--------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/03/10 19:24:49
	* Running on machine: windows-server-1
	* Binary: Built with gc go1.16 for windows/amd64
	* Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	* I0310 19:24:49.377346    5180 out.go:239] Setting OutFile to fd 2076 ...
	* I0310 19:24:49.378329    5180 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 19:24:49.378329    5180 out.go:252] Setting ErrFile to fd 2292...
	* I0310 19:24:49.378329    5180 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 19:24:49.406230    5180 out.go:246] Setting JSON to false
	* I0310 19:24:49.426987    5180 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":29755,"bootTime":1615374534,"procs":115,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	* W0310 19:24:49.426987    5180 start.go:116] gopshost.Virtualization returned error: not implemented yet
	* I0310 19:24:49.433498    5180 out.go:129] * [functional-20210310191609-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	* I0310 19:24:49.438262    5180 out.go:129]   - MINIKUBE_LOCATION=10722
	* I0310 19:24:49.440243    5180 driver.go:323] Setting default libvirt URI to qemu:///system
	* I0310 19:24:50.159876    5180 docker.go:119] docker version: linux-20.10.2
	* I0310 19:24:50.168973    5180 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 19:24:51.410608    5180 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.2413753s)
	* I0310 19:24:51.412428    5180 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:51 SystemTime:2021-03-10 19:24:50.8152156 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://i
ndex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:
[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 19:24:51.421994    5180 out.go:129] * Using the docker driver based on existing profile
	* I0310 19:24:51.422745    5180 start.go:276] selected driver: docker
	* I0310 19:24:51.422745    5180 start.go:718] validating driver "docker" against &{Name:functional-20210310191609-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:functional-20210310191609-6496 Namespace:default APIServerName:minikubeCA APIServ
erNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.97 Port:8441 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluste
r:false volumesnapshots:false] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 19:24:51.423182    5180 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	* I0310 19:24:51.442441    5180 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 19:24:52.665732    5180 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.2230757s)
	* I0310 19:24:52.666443    5180 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:51 SystemTime:2021-03-10 19:24:52.2054706 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://i
ndex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:
[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 19:24:53.797614    5180 start_flags.go:398] config:
	* {Name:functional-20210310191609-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:functional-20210310191609-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISo
cket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.97 Port:8441 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] VerifyComponents:map[apiserver:true apps_running:
true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}

                                                
                                                
-- /stdout --
** stderr ** 
	E0310 19:25:12.204844    6852 out.go:335] unable to parse "*  volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2021-03-10 19:24:31 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 118 111 108 117 109 101 46 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 115 116 111 114 97 103 101 45 112 114 111 118 105 115 105 111 110 101 114 34 58 123 125 125 125 125],}} {kubectl-client-side-apply Update v1 2021-03-10 19:24:31 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 107 117 98 101 99 116 108 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 108 97 115 116 45 97 112 112 108 105 101 100 45 99 111 110 102 105 103 117 114 97 116 105 111 110 34 58 123
125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 97 99 99 101 115 115 77 111 100 101 115 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 34 102 58 114 101 113 117 101 115 116 115 34 58 123 34 46 34 58 123 125 44 34 102 58 115 116 111 114 97 103 101 34 58 123 125 125 125 44 34 102 58 118 111 108 117 109 101 77 111 100 101 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 112 104 97 115 101 34 58 123 125 125 125],}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim\n": template: *  volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-
hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2021-03-10 19:24:31 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 118 111 108 117 109 101 46 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 115 116 111 114 97 103 101 45 112 114 111 118 105 115 105 111 110 101 114 34 58 123 125 125 125 125],}} {kubectl-client-side-apply Update v1 2021-03-10 19:24:31 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 107 117 98 101 99 116 108 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 108 97 115 116 45 97 112 112 108 105 101 100 45 99 111 110 102 105 103 117 114 97 116 105 111 110 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 97 99 99 101 115 115 77 111 100 101 115 34 58 123 125 44 34 102 58 114 10
1 115 111 117 114 99 101 115 34 58 123 34 102 58 114 101 113 117 101 115 116 115 34 58 123 34 46 34 58 123 125 44 34 102 58 115 116 111 114 97 103 101 34 58 123 125 125 125 44 34 102 58 118 111 108 117 109 101 77 111 100 101 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 112 104 97 115 101 34 58 123 125 125 125],}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	:1: unexpected "}" in operand - returning raw string.
	E0310 19:25:12.307520    6852 out.go:335] unable to parse "* I0310 19:24:50.168973    5180 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 19:24:50.168973    5180 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 19:25:12.313636    6852 out.go:335] unable to parse "* I0310 19:24:51.410608    5180 cli_runner.go:168] Completed: docker system info --format \"{{json .}}\": (1.2413753s)\n": template: * I0310 19:24:51.410608    5180 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.2413753s)
	:1: function "json" not defined - returning raw string.
	E0310 19:25:12.333515    6852 out.go:335] unable to parse "* I0310 19:24:51.442441    5180 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 19:24:51.442441    5180 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 19:25:12.339498    6852 out.go:335] unable to parse "* I0310 19:24:52.665732    5180 cli_runner.go:168] Completed: docker system info --format \"{{json .}}\": (1.2230757s)\n": template: * I0310 19:24:52.665732    5180 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.2230757s)
	:1: function "json" not defined - returning raw string.

                                                
                                                
** /stderr **
helpers_test.go:250: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-20210310191609-6496 -n functional-20210310191609-6496
helpers_test.go:250: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-20210310191609-6496 -n functional-20210310191609-6496: (3.3445968s)
helpers_test.go:257: (dbg) Run:  kubectl --context functional-20210310191609-6496 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:263: non-running pods: hello-node-6cbfcd7cbc-9qfsw mysql-9bbbc5bbb-fk6dk nginx-svc sp-pod
helpers_test.go:265: ======> post-mortem[TestFunctional/parallel/DockerEnv]: describe non-running pods <======
helpers_test.go:268: (dbg) Run:  kubectl --context functional-20210310191609-6496 describe pod hello-node-6cbfcd7cbc-9qfsw mysql-9bbbc5bbb-fk6dk nginx-svc sp-pod
helpers_test.go:273: (dbg) kubectl --context functional-20210310191609-6496 describe pod hello-node-6cbfcd7cbc-9qfsw mysql-9bbbc5bbb-fk6dk nginx-svc sp-pod:

                                                
                                                
-- stdout --
	Name:           hello-node-6cbfcd7cbc-9qfsw
	Namespace:      default
	Priority:       0
	Node:           functional-20210310191609-6496/192.168.49.97
	Start Time:     Wed, 10 Mar 2021 19:24:25 +0000
	Labels:         app=hello-node
	                pod-template-hash=6cbfcd7cbc
	Annotations:    <none>
	Status:         Pending
	IP:             
	IPs:            <none>
	Controlled By:  ReplicaSet/hello-node-6cbfcd7cbc
	Containers:
	  echoserver:
	    Container ID:   
	    Image:          k8s.gcr.io/echoserver:1.8
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from default-token-flkwv (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  default-token-flkwv:
	    Type:        Secret (a volume populated by a Secret)
	    SecretName:  default-token-flkwv
	    Optional:    false
	QoS Class:       BestEffort
	Node-Selectors:  <none>
	Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  50s   default-scheduler  Successfully assigned default/hello-node-6cbfcd7cbc-9qfsw to functional-20210310191609-6496
	  Normal  Pulling    36s   kubelet            Pulling image "k8s.gcr.io/echoserver:1.8"
	
	
	Name:           mysql-9bbbc5bbb-fk6dk
	Namespace:      default
	Priority:       0
	Node:           functional-20210310191609-6496/192.168.49.97
	Start Time:     Wed, 10 Mar 2021 19:24:28 +0000
	Labels:         app=mysql
	                pod-template-hash=9bbbc5bbb
	Annotations:    <none>
	Status:         Pending
	IP:             
	IPs:            <none>
	Controlled By:  ReplicaSet/mysql-9bbbc5bbb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from default-token-flkwv (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  default-token-flkwv:
	    Type:        Secret (a volume populated by a Secret)
	    SecretName:  default-token-flkwv
	    Optional:    false
	QoS Class:       Burstable
	Node-Selectors:  <none>
	Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  48s   default-scheduler  Successfully assigned default/mysql-9bbbc5bbb-fk6dk to functional-20210310191609-6496
	  Normal  Pulling    36s   kubelet            Pulling image "mysql:5.7"
	
	
	Name:         nginx-svc
	Namespace:    default
	Priority:     0
	Node:         functional-20210310191609-6496/192.168.49.97
	Start Time:   Wed, 10 Mar 2021 19:24:27 +0000
	Labels:       run=nginx-svc
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  nginx:
	    Container ID:   
	    Image:          nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from default-token-flkwv (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  default-token-flkwv:
	    Type:        Secret (a volume populated by a Secret)
	    SecretName:  default-token-flkwv
	    Optional:    false
	QoS Class:       BestEffort
	Node-Selectors:  <none>
	Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  49s   default-scheduler  Successfully assigned default/nginx-svc to functional-20210310191609-6496
	  Normal  Pulling    35s   kubelet            Pulling image "nginx:alpine"
	
	
	Name:         sp-pod
	Namespace:    default
	Priority:     0
	Node:         functional-20210310191609-6496/192.168.49.97
	Start Time:   Wed, 10 Mar 2021 19:24:34 +0000
	Labels:       test=storage-provisioner
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from default-token-flkwv (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  default-token-flkwv:
	    Type:        Secret (a volume populated by a Secret)
	    SecretName:  default-token-flkwv
	    Optional:    false
	QoS Class:       BestEffort
	Node-Selectors:  <none>
	Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  41s   default-scheduler  Successfully assigned default/sp-pod to functional-20210310191609-6496
	  Normal  Pulling    33s   kubelet            Pulling image "nginx"

                                                
                                                
-- /stdout --
helpers_test.go:276: <<< TestFunctional/parallel/DockerEnv FAILED: end of post-mortem logs <<<
helpers_test.go:277: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/DockerEnv (42.85s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (380.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:258: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:268: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20210310194323-6496 --wait=true -v=8 --alsologtostderr --driver=docker
multinode_test.go:268: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-20210310194323-6496 --wait=true -v=8 --alsologtostderr --driver=docker: exit status 80 (5m53.6239882s)

                                                
                                                
-- stdout --
	* [multinode-20210310194323-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - MINIKUBE_LOCATION=10722
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-20210310194323-6496 in cluster multinode-20210310194323-6496
	* Restarting existing docker container for "multinode-20210310194323-6496" ...
	* Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v4
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0310 19:53:06.460346    3088 out.go:239] Setting OutFile to fd 3012 ...
	I0310 19:53:06.461366    3088 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 19:53:06.461366    3088 out.go:252] Setting ErrFile to fd 2592...
	I0310 19:53:06.461366    3088 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 19:53:06.473364    3088 out.go:246] Setting JSON to false
	I0310 19:53:06.475367    3088 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":31452,"bootTime":1615374534,"procs":108,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0310 19:53:06.475367    3088 start.go:116] gopshost.Virtualization returned error: not implemented yet
	I0310 19:53:06.482307    3088 out.go:129] * [multinode-20210310194323-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0310 19:53:06.487244    3088 out.go:129]   - MINIKUBE_LOCATION=10722
	I0310 19:53:06.491719    3088 driver.go:323] Setting default libvirt URI to qemu:///system
	I0310 19:53:06.900224    3088 docker.go:119] docker version: linux-20.10.2
	I0310 19:53:06.909443    3088 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 19:53:07.697182    3088 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:46 SystemTime:2021-03-10 19:53:07.3321994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 19:53:07.702222    3088 out.go:129] * Using the docker driver based on existing profile
	I0310 19:53:07.702496    3088 start.go:276] selected driver: docker
	I0310 19:53:07.702496    3088 start.go:718] validating driver "docker" against &{Name:multinode-20210310194323-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:multinode-20210310194323-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.97 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.20.2 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true}
	I0310 19:53:07.702496    3088 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0310 19:53:07.722578    3088 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 19:53:08.461485    3088 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:46 SystemTime:2021-03-10 19:53:08.1444659 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 19:53:10.280449    3088 start_flags.go:717] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0310 19:53:10.280449    3088 start_flags.go:398] config:
	{Name:multinode-20210310194323-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:multinode-20210310194323-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.97 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.20.2 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true}
	I0310 19:53:10.284476    3088 out.go:129] * Starting control plane node multinode-20210310194323-6496 in cluster multinode-20210310194323-6496
	I0310 19:53:10.776656    3088 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	I0310 19:53:10.777152    3088 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	I0310 19:53:10.777152    3088 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	I0310 19:53:10.777152    3088 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	I0310 19:53:10.777600    3088 cache.go:54] Caching tarball of preloaded images
	I0310 19:53:10.778036    3088 preload.go:131] Found C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0310 19:53:10.778036    3088 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on docker
	I0310 19:53:10.778409    3088 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\multinode-20210310194323-6496\config.json ...
	I0310 19:53:10.783007    3088 cache.go:185] Successfully downloaded all kic artifacts
	I0310 19:53:10.783378    3088 start.go:313] acquiring machines lock for multinode-20210310194323-6496: {Name:mkc0311afbbefcdbd0a19dc4fb181202ea9bd5e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 19:53:10.783799    3088 start.go:317] acquired machines lock for "multinode-20210310194323-6496" in 420.4µs
	I0310 19:53:10.783799    3088 start.go:93] Skipping create...Using existing machine configuration
	I0310 19:53:10.784170    3088 fix.go:55] fixHost starting: 
	I0310 19:53:10.799524    3088 cli_runner.go:115] Run: docker container inspect multinode-20210310194323-6496 --format={{.State.Status}}
	I0310 19:53:11.279536    3088 fix.go:108] recreateIfNeeded on multinode-20210310194323-6496: state=Stopped err=<nil>
	W0310 19:53:11.280031    3088 fix.go:134] unexpected machine state, will restart: <nil>
	I0310 19:53:11.283170    3088 out.go:129] * Restarting existing docker container for "multinode-20210310194323-6496" ...
	I0310 19:53:11.291628    3088 cli_runner.go:115] Run: docker start multinode-20210310194323-6496
	I0310 19:53:12.921527    3088 cli_runner.go:168] Completed: docker start multinode-20210310194323-6496: (1.6299017s)
	I0310 19:53:12.929715    3088 cli_runner.go:115] Run: docker container inspect multinode-20210310194323-6496 --format={{.State.Status}}
	I0310 19:53:13.478245    3088 kic.go:410] container "multinode-20210310194323-6496" state is running.
	I0310 19:53:13.492460    3088 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210310194323-6496
	I0310 19:53:14.048777    3088 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\multinode-20210310194323-6496\config.json ...
	I0310 19:53:14.052908    3088 machine.go:88] provisioning docker machine ...
	I0310 19:53:14.053572    3088 ubuntu.go:169] provisioning hostname "multinode-20210310194323-6496"
	I0310 19:53:14.065366    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	I0310 19:53:14.572551    3088 main.go:121] libmachine: Using SSH client type: native
	I0310 19:53:14.575546    3088 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55054 <nil> <nil>}
	I0310 19:53:14.575747    3088 main.go:121] libmachine: About to run SSH command:
	sudo hostname multinode-20210310194323-6496 && echo "multinode-20210310194323-6496" | sudo tee /etc/hostname
	I0310 19:53:14.856181    3088 main.go:121] libmachine: SSH cmd err, output: <nil>: multinode-20210310194323-6496
	
	I0310 19:53:14.863867    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	I0310 19:53:15.366157    3088 main.go:121] libmachine: Using SSH client type: native
	I0310 19:53:15.366492    3088 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55054 <nil> <nil>}
	I0310 19:53:15.366492    3088 main.go:121] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20210310194323-6496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20210310194323-6496/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20210310194323-6496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0310 19:53:15.603102    3088 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	I0310 19:53:15.603102    3088 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	I0310 19:53:15.603102    3088 ubuntu.go:177] setting up certificates
	I0310 19:53:15.603102    3088 provision.go:83] configureAuth start
	I0310 19:53:15.616266    3088 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210310194323-6496
	I0310 19:53:16.087692    3088 provision.go:137] copyHostCerts
	I0310 19:53:16.088374    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\key.pem -> C:\Users\jenkins\.minikube/key.pem
	I0310 19:53:16.088991    3088 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	I0310 19:53:16.088991    3088 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	I0310 19:53:16.089466    3088 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	I0310 19:53:16.092727    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\ca.pem -> C:\Users\jenkins\.minikube/ca.pem
	I0310 19:53:16.093059    3088 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	I0310 19:53:16.093059    3088 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	I0310 19:53:16.093498    3088 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	I0310 19:53:16.097086    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\cert.pem -> C:\Users\jenkins\.minikube/cert.pem
	I0310 19:53:16.097409    3088 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	I0310 19:53:16.097409    3088 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	I0310 19:53:16.097803    3088 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	I0310 19:53:16.101322    3088 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.multinode-20210310194323-6496 san=[192.168.49.97 127.0.0.1 localhost 127.0.0.1 minikube multinode-20210310194323-6496]
	I0310 19:53:16.546945    3088 provision.go:165] copyRemoteCerts
	I0310 19:53:16.569677    3088 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0310 19:53:16.584994    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	I0310 19:53:17.039604    3088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55054 SSHKeyPath:C:\Users\jenkins\.minikube\machines\multinode-20210310194323-6496\id_rsa Username:docker}
	I0310 19:53:17.184343    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0310 19:53:17.185568    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1261 bytes)
	I0310 19:53:17.236048    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0310 19:53:17.236502    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0310 19:53:17.288357    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0310 19:53:17.289452    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0310 19:53:17.343661    3088 provision.go:86] duration metric: configureAuth took 1.7405618s
	I0310 19:53:17.343661    3088 ubuntu.go:193] setting minikube options for container-runtime
	I0310 19:53:17.353945    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	I0310 19:53:17.808748    3088 main.go:121] libmachine: Using SSH client type: native
	I0310 19:53:17.809160    3088 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55054 <nil> <nil>}
	I0310 19:53:17.809465    3088 main.go:121] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0310 19:53:18.029581    3088 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0310 19:53:18.029581    3088 ubuntu.go:71] root file system type: overlay
	I0310 19:53:18.029971    3088 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	I0310 19:53:18.038079    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	I0310 19:53:18.532320    3088 main.go:121] libmachine: Using SSH client type: native
	I0310 19:53:18.533204    3088 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55054 <nil> <nil>}
	I0310 19:53:18.533490    3088 main.go:121] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0310 19:53:18.768030    3088 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0310 19:53:18.778054    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	I0310 19:53:19.267592    3088 main.go:121] libmachine: Using SSH client type: native
	I0310 19:53:19.267977    3088 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55054 <nil> <nil>}
	I0310 19:53:19.267977    3088 main.go:121] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0310 19:53:19.499069    3088 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	I0310 19:53:19.499283    3088 machine.go:91] provisioned docker machine in 5.446384s
	I0310 19:53:19.499283    3088 start.go:267] post-start starting for "multinode-20210310194323-6496" (driver="docker")
	I0310 19:53:19.499283    3088 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0310 19:53:19.512851    3088 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0310 19:53:19.522702    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	I0310 19:53:19.982498    3088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55054 SSHKeyPath:C:\Users\jenkins\.minikube\machines\multinode-20210310194323-6496\id_rsa Username:docker}
	I0310 19:53:20.135431    3088 ssh_runner.go:149] Run: cat /etc/os-release
	I0310 19:53:20.151414    3088 command_runner.go:124] > NAME="Ubuntu"
	I0310 19:53:20.152300    3088 command_runner.go:124] > VERSION="20.04.1 LTS (Focal Fossa)"
	I0310 19:53:20.152300    3088 command_runner.go:124] > ID=ubuntu
	I0310 19:53:20.152300    3088 command_runner.go:124] > ID_LIKE=debian
	I0310 19:53:20.152300    3088 command_runner.go:124] > PRETTY_NAME="Ubuntu 20.04.1 LTS"
	I0310 19:53:20.152300    3088 command_runner.go:124] > VERSION_ID="20.04"
	I0310 19:53:20.152300    3088 command_runner.go:124] > HOME_URL="https://www.ubuntu.com/"
	I0310 19:53:20.152300    3088 command_runner.go:124] > SUPPORT_URL="https://help.ubuntu.com/"
	I0310 19:53:20.152300    3088 command_runner.go:124] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0310 19:53:20.152300    3088 command_runner.go:124] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0310 19:53:20.152300    3088 command_runner.go:124] > VERSION_CODENAME=focal
	I0310 19:53:20.152300    3088 command_runner.go:124] > UBUNTU_CODENAME=focal
	I0310 19:53:20.152671    3088 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0310 19:53:20.152671    3088 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0310 19:53:20.152671    3088 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0310 19:53:20.152671    3088 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0310 19:53:20.152671    3088 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	I0310 19:53:20.153042    3088 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	I0310 19:53:20.155817    3088 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	I0310 19:53:20.155817    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> /etc/test/nested/copy/2512/hosts
	I0310 19:53:20.157097    3088 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	I0310 19:53:20.157097    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> /etc/test/nested/copy/4452/hosts
	I0310 19:53:20.171870    3088 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	I0310 19:53:20.199848    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	I0310 19:53:20.255203    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	I0310 19:53:20.310037    3088 start.go:270] post-start completed in 810.7555ms
	I0310 19:53:20.324263    3088 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0310 19:53:20.333096    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	I0310 19:53:20.810057    3088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55054 SSHKeyPath:C:\Users\jenkins\.minikube\machines\multinode-20210310194323-6496\id_rsa Username:docker}
	I0310 19:53:20.959921    3088 command_runner.go:124] > 22%
	I0310 19:53:20.959921    3088 fix.go:57] fixHost completed within 10.1761386s
	I0310 19:53:20.959921    3088 start.go:80] releasing machines lock for "multinode-20210310194323-6496", held for 10.1761386s
	I0310 19:53:20.967686    3088 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210310194323-6496
	I0310 19:53:21.447462    3088 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0310 19:53:21.457187    3088 ssh_runner.go:149] Run: systemctl --version
	I0310 19:53:21.458347    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	I0310 19:53:21.465494    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	I0310 19:53:21.937385    3088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55054 SSHKeyPath:C:\Users\jenkins\.minikube\machines\multinode-20210310194323-6496\id_rsa Username:docker}
	I0310 19:53:21.944494    3088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55054 SSHKeyPath:C:\Users\jenkins\.minikube\machines\multinode-20210310194323-6496\id_rsa Username:docker}
	I0310 19:53:22.095359    3088 command_runner.go:124] > systemd 245 (245.4-4ubuntu3.4)
	I0310 19:53:22.095899    3088 command_runner.go:124] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0310 19:53:22.108547    3088 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0310 19:53:22.188280    3088 command_runner.go:124] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0310 19:53:22.189272    3088 command_runner.go:124] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0310 19:53:22.189272    3088 command_runner.go:124] > <H1>302 Moved</H1>
	I0310 19:53:22.189272    3088 command_runner.go:124] > The document has moved
	I0310 19:53:22.189272    3088 command_runner.go:124] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0310 19:53:22.189272    3088 command_runner.go:124] > </BODY></HTML>
	I0310 19:53:22.209824    3088 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 19:53:22.246160    3088 command_runner.go:124] > # /lib/systemd/system/docker.service
	I0310 19:53:22.247253    3088 command_runner.go:124] > [Unit]
	I0310 19:53:22.247253    3088 command_runner.go:124] > Description=Docker Application Container Engine
	I0310 19:53:22.247253    3088 command_runner.go:124] > Documentation=https://docs.docker.com
	I0310 19:53:22.247253    3088 command_runner.go:124] > BindsTo=containerd.service
	I0310 19:53:22.247253    3088 command_runner.go:124] > After=network-online.target firewalld.service containerd.service
	I0310 19:53:22.247253    3088 command_runner.go:124] > Wants=network-online.target
	I0310 19:53:22.247253    3088 command_runner.go:124] > Requires=docker.socket
	I0310 19:53:22.247253    3088 command_runner.go:124] > StartLimitBurst=3
	I0310 19:53:22.247253    3088 command_runner.go:124] > StartLimitIntervalSec=60
	I0310 19:53:22.247253    3088 command_runner.go:124] > [Service]
	I0310 19:53:22.247253    3088 command_runner.go:124] > Type=notify
	I0310 19:53:22.247253    3088 command_runner.go:124] > Restart=on-failure
	I0310 19:53:22.247390    3088 command_runner.go:124] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0310 19:53:22.247390    3088 command_runner.go:124] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0310 19:53:22.247390    3088 command_runner.go:124] > # here is to clear out that command inherited from the base configuration. Without this,
	I0310 19:53:22.247390    3088 command_runner.go:124] > # the command from the base configuration and the command specified here are treated as
	I0310 19:53:22.247390    3088 command_runner.go:124] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0310 19:53:22.247390    3088 command_runner.go:124] > # will catch this invalid input and refuse to start the service with an error like:
	I0310 19:53:22.247390    3088 command_runner.go:124] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0310 19:53:22.247390    3088 command_runner.go:124] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0310 19:53:22.247390    3088 command_runner.go:124] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0310 19:53:22.247390    3088 command_runner.go:124] > ExecStart=
	I0310 19:53:22.247717    3088 command_runner.go:124] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0310 19:53:22.247717    3088 command_runner.go:124] > ExecReload=/bin/kill -s HUP $MAINPID
	I0310 19:53:22.247717    3088 command_runner.go:124] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0310 19:53:22.247717    3088 command_runner.go:124] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0310 19:53:22.247717    3088 command_runner.go:124] > LimitNOFILE=infinity
	I0310 19:53:22.247717    3088 command_runner.go:124] > LimitNPROC=infinity
	I0310 19:53:22.247717    3088 command_runner.go:124] > LimitCORE=infinity
	I0310 19:53:22.247717    3088 command_runner.go:124] > # Uncomment TasksMax if your systemd version supports it.
	I0310 19:53:22.247717    3088 command_runner.go:124] > # Only systemd 226 and above support this version.
	I0310 19:53:22.247717    3088 command_runner.go:124] > TasksMax=infinity
	I0310 19:53:22.247717    3088 command_runner.go:124] > TimeoutStartSec=0
	I0310 19:53:22.247717    3088 command_runner.go:124] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0310 19:53:22.247717    3088 command_runner.go:124] > Delegate=yes
	I0310 19:53:22.247717    3088 command_runner.go:124] > # kill only the docker process, not all processes in the cgroup
	I0310 19:53:22.247717    3088 command_runner.go:124] > KillMode=process
	I0310 19:53:22.247717    3088 command_runner.go:124] > [Install]
	I0310 19:53:22.247717    3088 command_runner.go:124] > WantedBy=multi-user.target
	I0310 19:53:22.248052    3088 cruntime.go:206] skipping containerd shutdown because we are bound to it
	I0310 19:53:22.258706    3088 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0310 19:53:22.298037    3088 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0310 19:53:22.339651    3088 command_runner.go:124] > runtime-endpoint: unix:///var/run/dockershim.sock
	I0310 19:53:22.339651    3088 command_runner.go:124] > image-endpoint: unix:///var/run/dockershim.sock
	I0310 19:53:22.355029    3088 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 19:53:22.393868    3088 command_runner.go:124] > # /lib/systemd/system/docker.service
	I0310 19:53:22.394213    3088 command_runner.go:124] > [Unit]
	I0310 19:53:22.394213    3088 command_runner.go:124] > Description=Docker Application Container Engine
	I0310 19:53:22.394213    3088 command_runner.go:124] > Documentation=https://docs.docker.com
	I0310 19:53:22.394213    3088 command_runner.go:124] > BindsTo=containerd.service
	I0310 19:53:22.394213    3088 command_runner.go:124] > After=network-online.target firewalld.service containerd.service
	I0310 19:53:22.394213    3088 command_runner.go:124] > Wants=network-online.target
	I0310 19:53:22.394213    3088 command_runner.go:124] > Requires=docker.socket
	I0310 19:53:22.394213    3088 command_runner.go:124] > StartLimitBurst=3
	I0310 19:53:22.394213    3088 command_runner.go:124] > StartLimitIntervalSec=60
	I0310 19:53:22.394213    3088 command_runner.go:124] > [Service]
	I0310 19:53:22.394213    3088 command_runner.go:124] > Type=notify
	I0310 19:53:22.394213    3088 command_runner.go:124] > Restart=on-failure
	I0310 19:53:22.394414    3088 command_runner.go:124] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0310 19:53:22.394414    3088 command_runner.go:124] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0310 19:53:22.394414    3088 command_runner.go:124] > # here is to clear out that command inherited from the base configuration. Without this,
	I0310 19:53:22.394414    3088 command_runner.go:124] > # the command from the base configuration and the command specified here are treated as
	I0310 19:53:22.394657    3088 command_runner.go:124] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0310 19:53:22.394657    3088 command_runner.go:124] > # will catch this invalid input and refuse to start the service with an error like:
	I0310 19:53:22.394657    3088 command_runner.go:124] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0310 19:53:22.394657    3088 command_runner.go:124] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0310 19:53:22.394657    3088 command_runner.go:124] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0310 19:53:22.394657    3088 command_runner.go:124] > ExecStart=
	I0310 19:53:22.394657    3088 command_runner.go:124] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0310 19:53:22.394657    3088 command_runner.go:124] > ExecReload=/bin/kill -s HUP $MAINPID
	I0310 19:53:22.395254    3088 command_runner.go:124] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0310 19:53:22.395254    3088 command_runner.go:124] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0310 19:53:22.395254    3088 command_runner.go:124] > LimitNOFILE=infinity
	I0310 19:53:22.395254    3088 command_runner.go:124] > LimitNPROC=infinity
	I0310 19:53:22.395254    3088 command_runner.go:124] > LimitCORE=infinity
	I0310 19:53:22.395254    3088 command_runner.go:124] > # Uncomment TasksMax if your systemd version supports it.
	I0310 19:53:22.395529    3088 command_runner.go:124] > # Only systemd 226 and above support this version.
	I0310 19:53:22.395529    3088 command_runner.go:124] > TasksMax=infinity
	I0310 19:53:22.395529    3088 command_runner.go:124] > TimeoutStartSec=0
	I0310 19:53:22.395529    3088 command_runner.go:124] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0310 19:53:22.395529    3088 command_runner.go:124] > Delegate=yes
	I0310 19:53:22.395529    3088 command_runner.go:124] > # kill only the docker process, not all processes in the cgroup
	I0310 19:53:22.395529    3088 command_runner.go:124] > KillMode=process
	I0310 19:53:22.395529    3088 command_runner.go:124] > [Install]
	I0310 19:53:22.395529    3088 command_runner.go:124] > WantedBy=multi-user.target
	I0310 19:53:22.411160    3088 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0310 19:53:22.609386    3088 ssh_runner.go:149] Run: sudo systemctl start docker
	I0310 19:53:22.651578    3088 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0310 19:53:22.800206    3088 command_runner.go:124] > 20.10.3
	I0310 19:53:22.804968    3088 out.go:150] * Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	I0310 19:53:22.813122    3088 cli_runner.go:115] Run: docker exec -t multinode-20210310194323-6496 dig +short host.docker.internal
	I0310 19:53:23.523919    3088 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	I0310 19:53:23.536231    3088 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0310 19:53:23.556431    3088 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0310 19:53:23.600727    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	I0310 19:53:24.076480    3088 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	I0310 19:53:24.077030    3088 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	I0310 19:53:24.086261    3088 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 19:53:24.196012    3088 command_runner.go:124] > kindest/kindnetd:v20210220-5b7e6d01
	I0310 19:53:24.196012    3088 command_runner.go:124] > k8s.gcr.io/kube-proxy:v1.20.2
	I0310 19:53:24.196012    3088 command_runner.go:124] > k8s.gcr.io/kube-controller-manager:v1.20.2
	I0310 19:53:24.196012    3088 command_runner.go:124] > k8s.gcr.io/kube-apiserver:v1.20.2
	I0310 19:53:24.196012    3088 command_runner.go:124] > k8s.gcr.io/kube-scheduler:v1.20.2
	I0310 19:53:24.196012    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210105233232-2512
	I0310 19:53:24.196012    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210106002159-6856
	I0310 19:53:24.196012    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210106011107-6492
	I0310 19:53:24.196012    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210106215525-1984
	I0310 19:53:24.196012    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210107002220-9088
	I0310 19:53:24.196012    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210107190945-8748
	I0310 19:53:24.196012    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210112045103-7160
	I0310 19:53:24.196012    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210114204234-6692
	I0310 19:53:24.196012    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210115023213-8464
	I0310 19:53:24.196395    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210115191024-3516
	I0310 19:53:24.196395    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210119220838-6552
	I0310 19:53:24.196395    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210120022529-1140
	I0310 19:53:24.196395    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210120175851-7432
	I0310 19:53:24.196395    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210120214442-10992
	I0310 19:53:24.196395    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210120231122-7024
	I0310 19:53:24.196395    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210123004019-5372
	I0310 19:53:24.196395    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210126212539-5172
	I0310 19:53:24.196395    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210128021318-232
	I0310 19:53:24.196395    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210212145109-352
	I0310 19:53:24.196395    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210213143925-7440
	I0310 19:53:24.196395    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210219145454-9520
	I0310 19:53:24.196395    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210219220622-3920
	I0310 19:53:24.196395    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210220004129-7452
	I0310 19:53:24.196395    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210224014800-800
	I0310 19:53:24.196761    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210225231842-5736
	I0310 19:53:24.196761    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210301195830-5700
	I0310 19:53:24.196761    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210303214129-4588
	I0310 19:53:24.196761    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210304002630-1156
	I0310 19:53:24.196761    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210304184021-4052
	I0310 19:53:24.196761    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210306072141-12056
	I0310 19:53:24.196761    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210308233820-5396
	I0310 19:53:24.196761    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210309234032-4944
	I0310 19:53:24.196761    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210310083645-5040
	I0310 19:53:24.196761    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210310191609-6496
	I0310 19:53:24.196761    3088 command_runner.go:124] > kubernetesui/dashboard:v2.1.0
	I0310 19:53:24.196761    3088 command_runner.go:124] > gcr.io/k8s-minikube/storage-provisioner:v4
	I0310 19:53:24.196761    3088 command_runner.go:124] > k8s.gcr.io/etcd:3.4.13-0
	I0310 19:53:24.196761    3088 command_runner.go:124] > k8s.gcr.io/coredns:1.7.0
	I0310 19:53:24.196761    3088 command_runner.go:124] > kubernetesui/metrics-scraper:v1.0.4
	I0310 19:53:24.196761    3088 command_runner.go:124] > k8s.gcr.io/pause:3.2
	I0310 19:53:24.197031    3088 docker.go:423] Got preloaded images: -- stdout --
	kindest/kindnetd:v20210220-5b7e6d01
	k8s.gcr.io/kube-proxy:v1.20.2
	k8s.gcr.io/kube-controller-manager:v1.20.2
	k8s.gcr.io/kube-apiserver:v1.20.2
	k8s.gcr.io/kube-scheduler:v1.20.2
	minikube-local-cache-test:functional-20210105233232-2512
	minikube-local-cache-test:functional-20210106002159-6856
	minikube-local-cache-test:functional-20210106011107-6492
	minikube-local-cache-test:functional-20210106215525-1984
	minikube-local-cache-test:functional-20210107002220-9088
	minikube-local-cache-test:functional-20210107190945-8748
	minikube-local-cache-test:functional-20210112045103-7160
	minikube-local-cache-test:functional-20210114204234-6692
	minikube-local-cache-test:functional-20210115023213-8464
	minikube-local-cache-test:functional-20210115191024-3516
	minikube-local-cache-test:functional-20210119220838-6552
	minikube-local-cache-test:functional-20210120022529-1140
	minikube-local-cache-test:functional-20210120175851-7432
	minikube-local-cache-test:functional-20210120214442-10992
	minikube-local-cache-test:functional-20210120231122-7024
	minikube-local-cache-test:functional-20210123004019-5372
	minikube-local-cache-test:functional-20210126212539-5172
	minikube-local-cache-test:functional-20210128021318-232
	minikube-local-cache-test:functional-20210212145109-352
	minikube-local-cache-test:functional-20210213143925-7440
	minikube-local-cache-test:functional-20210219145454-9520
	minikube-local-cache-test:functional-20210219220622-3920
	minikube-local-cache-test:functional-20210220004129-7452
	minikube-local-cache-test:functional-20210224014800-800
	minikube-local-cache-test:functional-20210225231842-5736
	minikube-local-cache-test:functional-20210301195830-5700
	minikube-local-cache-test:functional-20210303214129-4588
	minikube-local-cache-test:functional-20210304002630-1156
	minikube-local-cache-test:functional-20210304184021-4052
	minikube-local-cache-test:functional-20210306072141-12056
	minikube-local-cache-test:functional-20210308233820-5396
	minikube-local-cache-test:functional-20210309234032-4944
	minikube-local-cache-test:functional-20210310083645-5040
	minikube-local-cache-test:functional-20210310191609-6496
	kubernetesui/dashboard:v2.1.0
	gcr.io/k8s-minikube/storage-provisioner:v4
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0310 19:53:24.197031    3088 docker.go:360] Images already preloaded, skipping extraction
	I0310 19:53:24.206905    3088 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 19:53:24.306116    3088 command_runner.go:124] > kindest/kindnetd:v20210220-5b7e6d01
	I0310 19:53:24.306116    3088 command_runner.go:124] > k8s.gcr.io/kube-proxy:v1.20.2
	I0310 19:53:24.306116    3088 command_runner.go:124] > k8s.gcr.io/kube-controller-manager:v1.20.2
	I0310 19:53:24.306116    3088 command_runner.go:124] > k8s.gcr.io/kube-apiserver:v1.20.2
	I0310 19:53:24.306116    3088 command_runner.go:124] > k8s.gcr.io/kube-scheduler:v1.20.2
	I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210105233232-2512
	I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210106002159-6856
	I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210106011107-6492
	I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210106215525-1984
	I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210107002220-9088
	I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210107190945-8748
	I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210112045103-7160
	I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210114204234-6692
	I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210115023213-8464
	I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210115191024-3516
	I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210119220838-6552
	I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210120022529-1140
	I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210120175851-7432
	I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210120214442-10992
	I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210120231122-7024
	I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210123004019-5372
	I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210126212539-5172
	I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210128021318-232
	I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210212145109-352
	I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210213143925-7440
	I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210219145454-9520
	I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210219220622-3920
	I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210220004129-7452
	I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210224014800-800
	I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210225231842-5736
	I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210301195830-5700
	I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210303214129-4588
	I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210304002630-1156
	I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210304184021-4052
	I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210306072141-12056
	I0310 19:53:24.307180    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210308233820-5396
	I0310 19:53:24.307180    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210309234032-4944
	I0310 19:53:24.307180    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210310083645-5040
	I0310 19:53:24.307180    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210310191609-6496
	I0310 19:53:24.307180    3088 command_runner.go:124] > kubernetesui/dashboard:v2.1.0
	I0310 19:53:24.307180    3088 command_runner.go:124] > gcr.io/k8s-minikube/storage-provisioner:v4
	I0310 19:53:24.307180    3088 command_runner.go:124] > k8s.gcr.io/etcd:3.4.13-0
	I0310 19:53:24.307180    3088 command_runner.go:124] > k8s.gcr.io/coredns:1.7.0
	I0310 19:53:24.307180    3088 command_runner.go:124] > kubernetesui/metrics-scraper:v1.0.4
	I0310 19:53:24.307180    3088 command_runner.go:124] > k8s.gcr.io/pause:3.2
	I0310 19:53:24.320670    3088 docker.go:423] Got preloaded images: -- stdout --
	kindest/kindnetd:v20210220-5b7e6d01
	k8s.gcr.io/kube-proxy:v1.20.2
	k8s.gcr.io/kube-controller-manager:v1.20.2
	k8s.gcr.io/kube-apiserver:v1.20.2
	k8s.gcr.io/kube-scheduler:v1.20.2
	minikube-local-cache-test:functional-20210105233232-2512
	minikube-local-cache-test:functional-20210106002159-6856
	minikube-local-cache-test:functional-20210106011107-6492
	minikube-local-cache-test:functional-20210106215525-1984
	minikube-local-cache-test:functional-20210107002220-9088
	minikube-local-cache-test:functional-20210107190945-8748
	minikube-local-cache-test:functional-20210112045103-7160
	minikube-local-cache-test:functional-20210114204234-6692
	minikube-local-cache-test:functional-20210115023213-8464
	minikube-local-cache-test:functional-20210115191024-3516
	minikube-local-cache-test:functional-20210119220838-6552
	minikube-local-cache-test:functional-20210120022529-1140
	minikube-local-cache-test:functional-20210120175851-7432
	minikube-local-cache-test:functional-20210120214442-10992
	minikube-local-cache-test:functional-20210120231122-7024
	minikube-local-cache-test:functional-20210123004019-5372
	minikube-local-cache-test:functional-20210126212539-5172
	minikube-local-cache-test:functional-20210128021318-232
	minikube-local-cache-test:functional-20210212145109-352
	minikube-local-cache-test:functional-20210213143925-7440
	minikube-local-cache-test:functional-20210219145454-9520
	minikube-local-cache-test:functional-20210219220622-3920
	minikube-local-cache-test:functional-20210220004129-7452
	minikube-local-cache-test:functional-20210224014800-800
	minikube-local-cache-test:functional-20210225231842-5736
	minikube-local-cache-test:functional-20210301195830-5700
	minikube-local-cache-test:functional-20210303214129-4588
	minikube-local-cache-test:functional-20210304002630-1156
	minikube-local-cache-test:functional-20210304184021-4052
	minikube-local-cache-test:functional-20210306072141-12056
	minikube-local-cache-test:functional-20210308233820-5396
	minikube-local-cache-test:functional-20210309234032-4944
	minikube-local-cache-test:functional-20210310083645-5040
	minikube-local-cache-test:functional-20210310191609-6496
	kubernetesui/dashboard:v2.1.0
	gcr.io/k8s-minikube/storage-provisioner:v4
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0310 19:53:24.321000    3088 cache_images.go:73] Images are preloaded, skipping loading
	I0310 19:53:24.328851    3088 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0310 19:53:24.592335    3088 command_runner.go:124] > cgroupfs
	I0310 19:53:24.592709    3088 cni.go:74] Creating CNI manager for ""
	I0310 19:53:24.592709    3088 cni.go:136] 2 nodes found, recommending kindnet
	I0310 19:53:24.592709    3088 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0310 19:53:24.592709    3088 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.97 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20210310194323-6496 NodeName:multinode-20210310194323-6496 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0310 19:53:24.593481    3088 kubeadm.go:154] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.97
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "multinode-20210310194323-6496"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	
	I0310 19:53:24.593481    3088 kubeadm.go:919] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=multinode-20210310194323-6496 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.2 ClusterName:multinode-20210310194323-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0310 19:53:24.607081    3088 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
	I0310 19:53:24.635205    3088 command_runner.go:124] > kubeadm
	I0310 19:53:24.635205    3088 command_runner.go:124] > kubectl
	I0310 19:53:24.635205    3088 command_runner.go:124] > kubelet
	I0310 19:53:24.635205    3088 binaries.go:44] Found k8s binaries, skipping transfer
	I0310 19:53:24.644674    3088 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0310 19:53:24.675833    3088 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0310 19:53:24.719667    3088 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0310 19:53:24.763789    3088 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1864 bytes)
	I0310 19:53:24.828208    3088 ssh_runner.go:149] Run: grep 192.168.49.97	control-plane.minikube.internal$ /etc/hosts
	I0310 19:53:24.845590    3088 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "192.168.49.97	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0310 19:53:24.884784    3088 certs.go:52] Setting up C:\Users\jenkins\.minikube\profiles\multinode-20210310194323-6496 for IP: 192.168.49.97
	I0310 19:53:24.885244    3088 certs.go:171] skipping minikubeCA CA generation: C:\Users\jenkins\.minikube\ca.key
	I0310 19:53:24.885617    3088 certs.go:171] skipping proxyClientCA CA generation: C:\Users\jenkins\.minikube\proxy-client-ca.key
	I0310 19:53:24.886458    3088 certs.go:275] skipping minikube-user signed cert generation: C:\Users\jenkins\.minikube\profiles\multinode-20210310194323-6496\client.key
	I0310 19:53:24.886630    3088 certs.go:275] skipping minikube signed cert generation: C:\Users\jenkins\.minikube\profiles\multinode-20210310194323-6496\apiserver.key.b6188fac
	I0310 19:53:24.887102    3088 certs.go:275] skipping aggregator signed cert generation: C:\Users\jenkins\.minikube\profiles\multinode-20210310194323-6496\proxy-client.key
	I0310 19:53:24.887102    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\profiles\multinode-20210310194323-6496\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0310 19:53:24.887323    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\profiles\multinode-20210310194323-6496\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0310 19:53:24.887323    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\profiles\multinode-20210310194323-6496\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0310 19:53:24.887622    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\profiles\multinode-20210310194323-6496\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0310 19:53:24.887802    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0310 19:53:24.887802    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0310 19:53:24.888084    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0310 19:53:24.888277    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0310 19:53:24.889211    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992.pem (1338 bytes)
	W0310 19:53:24.889683    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.889683    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140.pem (1338 bytes)
	W0310 19:53:24.890215    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.890215    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156.pem (1338 bytes)
	W0310 19:53:24.890673    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.890880    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056.pem (1338 bytes)
	W0310 19:53:24.891590    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.891590    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476.pem (1338 bytes)
	W0310 19:53:24.892010    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.892010    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728.pem (1338 bytes)
	W0310 19:53:24.892320    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.892320    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984.pem (1338 bytes)
	W0310 19:53:24.892900    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.893182    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232.pem (1338 bytes)
	W0310 19:53:24.893438    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.893728    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512.pem (1338 bytes)
	W0310 19:53:24.893972    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.894276    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056.pem (1338 bytes)
	W0310 19:53:24.895205    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.895205    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516.pem (1338 bytes)
	W0310 19:53:24.895564    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.895896    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352.pem (1338 bytes)
	W0310 19:53:24.896210    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.896210    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920.pem (1338 bytes)
	W0310 19:53:24.896749    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.896749    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052.pem (1338 bytes)
	W0310 19:53:24.897306    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.897306    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452.pem (1338 bytes)
	W0310 19:53:24.897810    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.897810    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588.pem (1338 bytes)
	W0310 19:53:24.898278    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.898520    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944.pem (1338 bytes)
	W0310 19:53:24.898759    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.898995    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040.pem (1338 bytes)
	W0310 19:53:24.898995    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.899652    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172.pem (1338 bytes)
	W0310 19:53:24.899955    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.899955    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372.pem (1338 bytes)
	W0310 19:53:24.900592    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.900770    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396.pem (1338 bytes)
	W0310 19:53:24.900770    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.900770    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700.pem (1338 bytes)
	W0310 19:53:24.900770    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.901741    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736.pem (1338 bytes)
	W0310 19:53:24.901741    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.901741    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368.pem (1338 bytes)
	W0310 19:53:24.901741    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.901741    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492.pem (1338 bytes)
	W0310 19:53:24.902854    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.902854    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496.pem (1338 bytes)
	W0310 19:53:24.903476    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.903713    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552.pem (1338 bytes)
	W0310 19:53:24.903713    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.903713    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692.pem (1338 bytes)
	W0310 19:53:24.903713    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.904652    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856.pem (1338 bytes)
	W0310 19:53:24.904652    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.904652    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024.pem (1338 bytes)
	W0310 19:53:24.904652    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.905626    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160.pem (1338 bytes)
	W0310 19:53:24.905626    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.905626    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432.pem (1338 bytes)
	W0310 19:53:24.905626    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.906633    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440.pem (1338 bytes)
	W0310 19:53:24.906633    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.906633    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452.pem (1338 bytes)
	W0310 19:53:24.906633    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.907627    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800.pem (1338 bytes)
	W0310 19:53:24.907627    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.907627    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464.pem (1338 bytes)
	W0310 19:53:24.907627    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.907627    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748.pem (1338 bytes)
	W0310 19:53:24.908632    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.908632    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088.pem (1338 bytes)
	W0310 19:53:24.908632    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.909629    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520.pem (1338 bytes)
	W0310 19:53:24.909629    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520_empty.pem, impossibly tiny 0 bytes
	I0310 19:53:24.909629    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca-key.pem (1679 bytes)
	I0310 19:53:24.909629    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca.pem (1078 bytes)
	I0310 19:53:24.910628    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\cert.pem (1123 bytes)
	I0310 19:53:24.910628    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\key.pem (1679 bytes)
	I0310 19:53:24.910628    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\800.pem -> /usr/share/ca-certificates/800.pem
	I0310 19:53:24.910628    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\1728.pem -> /usr/share/ca-certificates/1728.pem
	I0310 19:53:24.910628    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\5396.pem -> /usr/share/ca-certificates/5396.pem
	I0310 19:53:24.911636    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\5700.pem -> /usr/share/ca-certificates/5700.pem
	I0310 19:53:24.911636    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\1156.pem -> /usr/share/ca-certificates/1156.pem
	I0310 19:53:24.911636    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\1984.pem -> /usr/share/ca-certificates/1984.pem
	I0310 19:53:24.911636    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\6496.pem -> /usr/share/ca-certificates/6496.pem
	I0310 19:53:24.911636    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\3056.pem -> /usr/share/ca-certificates/3056.pem
	I0310 19:53:24.912636    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\3920.pem -> /usr/share/ca-certificates/3920.pem
	I0310 19:53:24.912636    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\7432.pem -> /usr/share/ca-certificates/7432.pem
	I0310 19:53:24.912636    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\1476.pem -> /usr/share/ca-certificates/1476.pem
	I0310 19:53:24.912636    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\4588.pem -> /usr/share/ca-certificates/4588.pem
	I0310 19:53:24.912636    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\8748.pem -> /usr/share/ca-certificates/8748.pem
	I0310 19:53:24.912636    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\7024.pem -> /usr/share/ca-certificates/7024.pem
	I0310 19:53:24.913646    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\9520.pem -> /usr/share/ca-certificates/9520.pem
	I0310 19:53:24.913646    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\232.pem -> /usr/share/ca-certificates/232.pem
	I0310 19:53:24.913646    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\5736.pem -> /usr/share/ca-certificates/5736.pem
	I0310 19:53:24.913646    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\6368.pem -> /usr/share/ca-certificates/6368.pem
	I0310 19:53:24.914672    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\6492.pem -> /usr/share/ca-certificates/6492.pem
	I0310 19:53:24.914672    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\4944.pem -> /usr/share/ca-certificates/4944.pem
	I0310 19:53:24.914672    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\10992.pem -> /usr/share/ca-certificates/10992.pem
	I0310 19:53:24.914672    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\5040.pem -> /usr/share/ca-certificates/5040.pem
	I0310 19:53:24.914672    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\6692.pem -> /usr/share/ca-certificates/6692.pem
	I0310 19:53:24.915628    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0310 19:53:24.915628    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\6856.pem -> /usr/share/ca-certificates/6856.pem
	I0310 19:53:24.915628    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\7452.pem -> /usr/share/ca-certificates/7452.pem
	I0310 19:53:24.915628    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\9088.pem -> /usr/share/ca-certificates/9088.pem
	I0310 19:53:24.915628    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\3516.pem -> /usr/share/ca-certificates/3516.pem
	I0310 19:53:24.916660    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\5172.pem -> /usr/share/ca-certificates/5172.pem
	I0310 19:53:24.916660    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\5372.pem -> /usr/share/ca-certificates/5372.pem
	I0310 19:53:24.916660    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\4452.pem -> /usr/share/ca-certificates/4452.pem
	I0310 19:53:24.916660    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\7160.pem -> /usr/share/ca-certificates/7160.pem
	I0310 19:53:24.916660    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\1140.pem -> /usr/share/ca-certificates/1140.pem
	I0310 19:53:24.917627    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\6552.pem -> /usr/share/ca-certificates/6552.pem
	I0310 19:53:24.917627    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\2512.pem -> /usr/share/ca-certificates/2512.pem
	I0310 19:53:24.917627    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\4052.pem -> /usr/share/ca-certificates/4052.pem
	I0310 19:53:24.917627    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\8464.pem -> /usr/share/ca-certificates/8464.pem
	I0310 19:53:24.917627    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\12056.pem -> /usr/share/ca-certificates/12056.pem
	I0310 19:53:24.918675    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\352.pem -> /usr/share/ca-certificates/352.pem
	I0310 19:53:24.918675    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\7440.pem -> /usr/share/ca-certificates/7440.pem
	I0310 19:53:24.920679    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\multinode-20210310194323-6496\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0310 19:53:24.976252    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\multinode-20210310194323-6496\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0310 19:53:25.030714    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\multinode-20210310194323-6496\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0310 19:53:25.090889    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\multinode-20210310194323-6496\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0310 19:53:25.143662    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0310 19:53:25.195719    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0310 19:53:25.249996    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0310 19:53:25.306139    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0310 19:53:25.366256    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\800.pem --> /usr/share/ca-certificates/800.pem (1338 bytes)
	I0310 19:53:25.417091    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1728.pem --> /usr/share/ca-certificates/1728.pem (1338 bytes)
	I0310 19:53:25.472798    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5396.pem --> /usr/share/ca-certificates/5396.pem (1338 bytes)
	I0310 19:53:25.527037    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5700.pem --> /usr/share/ca-certificates/5700.pem (1338 bytes)
	I0310 19:53:25.586023    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1156.pem --> /usr/share/ca-certificates/1156.pem (1338 bytes)
	I0310 19:53:25.642497    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1984.pem --> /usr/share/ca-certificates/1984.pem (1338 bytes)
	I0310 19:53:25.700669    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6496.pem --> /usr/share/ca-certificates/6496.pem (1338 bytes)
	I0310 19:53:25.756239    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3056.pem --> /usr/share/ca-certificates/3056.pem (1338 bytes)
	I0310 19:53:25.812767    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3920.pem --> /usr/share/ca-certificates/3920.pem (1338 bytes)
	I0310 19:53:25.866100    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7432.pem --> /usr/share/ca-certificates/7432.pem (1338 bytes)
	I0310 19:53:25.927018    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1476.pem --> /usr/share/ca-certificates/1476.pem (1338 bytes)
	I0310 19:53:25.987340    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4588.pem --> /usr/share/ca-certificates/4588.pem (1338 bytes)
	I0310 19:53:26.044823    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8748.pem --> /usr/share/ca-certificates/8748.pem (1338 bytes)
	I0310 19:53:26.103079    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7024.pem --> /usr/share/ca-certificates/7024.pem (1338 bytes)
	I0310 19:53:26.155668    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9520.pem --> /usr/share/ca-certificates/9520.pem (1338 bytes)
	I0310 19:53:26.205110    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\232.pem --> /usr/share/ca-certificates/232.pem (1338 bytes)
	I0310 19:53:26.259414    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5736.pem --> /usr/share/ca-certificates/5736.pem (1338 bytes)
	I0310 19:53:26.315043    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6368.pem --> /usr/share/ca-certificates/6368.pem (1338 bytes)
	I0310 19:53:26.371061    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6492.pem --> /usr/share/ca-certificates/6492.pem (1338 bytes)
	I0310 19:53:26.427542    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4944.pem --> /usr/share/ca-certificates/4944.pem (1338 bytes)
	I0310 19:53:26.483918    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\10992.pem --> /usr/share/ca-certificates/10992.pem (1338 bytes)
	I0310 19:53:26.537341    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5040.pem --> /usr/share/ca-certificates/5040.pem (1338 bytes)
	I0310 19:53:26.592746    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6692.pem --> /usr/share/ca-certificates/6692.pem (1338 bytes)
	I0310 19:53:26.649226    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0310 19:53:26.709111    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6856.pem --> /usr/share/ca-certificates/6856.pem (1338 bytes)
	I0310 19:53:26.764856    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	I0310 19:53:26.816753    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9088.pem --> /usr/share/ca-certificates/9088.pem (1338 bytes)
	I0310 19:53:26.870001    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3516.pem --> /usr/share/ca-certificates/3516.pem (1338 bytes)
	I0310 19:53:26.926996    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5172.pem --> /usr/share/ca-certificates/5172.pem (1338 bytes)
	I0310 19:53:26.985545    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5372.pem --> /usr/share/ca-certificates/5372.pem (1338 bytes)
	I0310 19:53:27.042516    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4452.pem --> /usr/share/ca-certificates/4452.pem (1338 bytes)
	I0310 19:53:27.108500    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7160.pem --> /usr/share/ca-certificates/7160.pem (1338 bytes)
	I0310 19:53:27.172842    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1140.pem --> /usr/share/ca-certificates/1140.pem (1338 bytes)
	I0310 19:53:27.232563    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6552.pem --> /usr/share/ca-certificates/6552.pem (1338 bytes)
	I0310 19:53:27.290692    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\2512.pem --> /usr/share/ca-certificates/2512.pem (1338 bytes)
	I0310 19:53:27.351846    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4052.pem --> /usr/share/ca-certificates/4052.pem (1338 bytes)
	I0310 19:53:27.413092    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8464.pem --> /usr/share/ca-certificates/8464.pem (1338 bytes)
	I0310 19:53:27.474329    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\12056.pem --> /usr/share/ca-certificates/12056.pem (1338 bytes)
	I0310 19:53:27.534146    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\352.pem --> /usr/share/ca-certificates/352.pem (1338 bytes)
	I0310 19:53:27.592076    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7440.pem --> /usr/share/ca-certificates/7440.pem (1338 bytes)
	I0310 19:53:27.647978    3088 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0310 19:53:27.702687    3088 ssh_runner.go:149] Run: openssl version
	I0310 19:53:27.724265    3088 command_runner.go:124] > OpenSSL 1.1.1f  31 Mar 2020
	I0310 19:53:27.737037    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2512.pem && ln -fs /usr/share/ca-certificates/2512.pem /etc/ssl/certs/2512.pem"
	I0310 19:53:27.775300    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2512.pem
	I0310 19:53:27.792722    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan  5 23:32 /usr/share/ca-certificates/2512.pem
	I0310 19:53:27.793198    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:32 /usr/share/ca-certificates/2512.pem
	I0310 19:53:27.803046    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2512.pem
	I0310 19:53:27.826379    3088 command_runner.go:124] > 51391683
	I0310 19:53:27.839006    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2512.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:27.879137    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7160.pem && ln -fs /usr/share/ca-certificates/7160.pem /etc/ssl/certs/7160.pem"
	I0310 19:53:27.920299    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7160.pem
	I0310 19:53:27.938016    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 12 04:51 /usr/share/ca-certificates/7160.pem
	I0310 19:53:27.939294    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 12 04:51 /usr/share/ca-certificates/7160.pem
	I0310 19:53:27.954163    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7160.pem
	I0310 19:53:27.978079    3088 command_runner.go:124] > 51391683
	I0310 19:53:27.995374    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7160.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:28.038398    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1140.pem && ln -fs /usr/share/ca-certificates/1140.pem /etc/ssl/certs/1140.pem"
	I0310 19:53:28.080310    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1140.pem
	I0310 19:53:28.100265    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 20 02:25 /usr/share/ca-certificates/1140.pem
	I0310 19:53:28.100745    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 02:25 /usr/share/ca-certificates/1140.pem
	I0310 19:53:28.110958    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1140.pem
	I0310 19:53:28.133399    3088 command_runner.go:124] > 51391683
	I0310 19:53:28.145214    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1140.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:28.188647    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6552.pem && ln -fs /usr/share/ca-certificates/6552.pem /etc/ssl/certs/6552.pem"
	I0310 19:53:28.230413    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6552.pem
	I0310 19:53:28.249994    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 19 22:08 /usr/share/ca-certificates/6552.pem
	I0310 19:53:28.250331    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 19 22:08 /usr/share/ca-certificates/6552.pem
	I0310 19:53:28.265754    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6552.pem
	I0310 19:53:28.285975    3088 command_runner.go:124] > 51391683
	I0310 19:53:28.299397    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6552.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:28.337293    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7440.pem && ln -fs /usr/share/ca-certificates/7440.pem /etc/ssl/certs/7440.pem"
	I0310 19:53:28.377081    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7440.pem
	I0310 19:53:28.394515    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Feb 13 14:39 /usr/share/ca-certificates/7440.pem
	I0310 19:53:28.395460    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 13 14:39 /usr/share/ca-certificates/7440.pem
	I0310 19:53:28.405024    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7440.pem
	I0310 19:53:28.429848    3088 command_runner.go:124] > 51391683
	I0310 19:53:28.448803    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7440.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:28.492300    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4052.pem && ln -fs /usr/share/ca-certificates/4052.pem /etc/ssl/certs/4052.pem"
	I0310 19:53:28.533891    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4052.pem
	I0310 19:53:28.551985    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Mar  4 18:40 /usr/share/ca-certificates/4052.pem
	I0310 19:53:28.552260    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 18:40 /usr/share/ca-certificates/4052.pem
	I0310 19:53:28.563528    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4052.pem
	I0310 19:53:28.587322    3088 command_runner.go:124] > 51391683
	I0310 19:53:28.599691    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4052.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:28.642472    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8464.pem && ln -fs /usr/share/ca-certificates/8464.pem /etc/ssl/certs/8464.pem"
	I0310 19:53:28.681519    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8464.pem
	I0310 19:53:28.701854    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 15 02:32 /usr/share/ca-certificates/8464.pem
	I0310 19:53:28.701954    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:32 /usr/share/ca-certificates/8464.pem
	I0310 19:53:28.716624    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8464.pem
	I0310 19:53:28.736743    3088 command_runner.go:124] > 51391683
	I0310 19:53:28.752646    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8464.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:28.792525    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12056.pem && ln -fs /usr/share/ca-certificates/12056.pem /etc/ssl/certs/12056.pem"
	I0310 19:53:28.832737    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/12056.pem
	I0310 19:53:28.849614    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Mar  6 07:21 /usr/share/ca-certificates/12056.pem
	I0310 19:53:28.850243    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  6 07:21 /usr/share/ca-certificates/12056.pem
	I0310 19:53:28.859938    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12056.pem
	I0310 19:53:28.879953    3088 command_runner.go:124] > 51391683
	I0310 19:53:28.891584    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12056.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:28.931186    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/352.pem && ln -fs /usr/share/ca-certificates/352.pem /etc/ssl/certs/352.pem"
	I0310 19:53:28.970613    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/352.pem
	I0310 19:53:28.988486    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Feb 12 14:51 /usr/share/ca-certificates/352.pem
	I0310 19:53:28.988486    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 12 14:51 /usr/share/ca-certificates/352.pem
	I0310 19:53:28.999135    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/352.pem
	I0310 19:53:29.021219    3088 command_runner.go:124] > 51391683
	I0310 19:53:29.034127    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/352.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:29.073296    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6496.pem && ln -fs /usr/share/ca-certificates/6496.pem /etc/ssl/certs/6496.pem"
	I0310 19:53:29.117489    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6496.pem
	I0310 19:53:29.134913    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Mar 10 19:16 /usr/share/ca-certificates/6496.pem
	I0310 19:53:29.134913    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 19:16 /usr/share/ca-certificates/6496.pem
	I0310 19:53:29.146465    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6496.pem
	I0310 19:53:29.166753    3088 command_runner.go:124] > 51391683
	I0310 19:53:29.182352    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6496.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:29.221550    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/800.pem && ln -fs /usr/share/ca-certificates/800.pem /etc/ssl/certs/800.pem"
	I0310 19:53:29.262496    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/800.pem
	I0310 19:53:29.283101    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Feb 24 01:48 /usr/share/ca-certificates/800.pem
	I0310 19:53:29.283253    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 24 01:48 /usr/share/ca-certificates/800.pem
	I0310 19:53:29.297286    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/800.pem
	I0310 19:53:29.321290    3088 command_runner.go:124] > 51391683
	I0310 19:53:29.332060    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/800.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:29.372815    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1728.pem && ln -fs /usr/share/ca-certificates/1728.pem /etc/ssl/certs/1728.pem"
	I0310 19:53:29.409729    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1728.pem
	I0310 19:53:29.426375    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan  6 20:57 /usr/share/ca-certificates/1728.pem
	I0310 19:53:29.426561    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 20:57 /usr/share/ca-certificates/1728.pem
	I0310 19:53:29.440369    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1728.pem
	I0310 19:53:29.463034    3088 command_runner.go:124] > 51391683
	I0310 19:53:29.477954    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:29.521995    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5396.pem && ln -fs /usr/share/ca-certificates/5396.pem /etc/ssl/certs/5396.pem"
	I0310 19:53:29.563528    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5396.pem
	I0310 19:53:29.581550    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Mar  8 23:38 /usr/share/ca-certificates/5396.pem
	I0310 19:53:29.581981    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  8 23:38 /usr/share/ca-certificates/5396.pem
	I0310 19:53:29.593345    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5396.pem
	I0310 19:53:29.614782    3088 command_runner.go:124] > 51391683
	I0310 19:53:29.626663    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5396.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:29.672964    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5700.pem && ln -fs /usr/share/ca-certificates/5700.pem /etc/ssl/certs/5700.pem"
	I0310 19:53:29.712169    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5700.pem
	I0310 19:53:29.729107    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Mar  1 19:58 /usr/share/ca-certificates/5700.pem
	I0310 19:53:29.729413    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  1 19:58 /usr/share/ca-certificates/5700.pem
	I0310 19:53:29.742828    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5700.pem
	I0310 19:53:29.764390    3088 command_runner.go:124] > 51391683
	I0310 19:53:29.775816    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5700.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:29.818256    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1156.pem && ln -fs /usr/share/ca-certificates/1156.pem /etc/ssl/certs/1156.pem"
	I0310 19:53:29.860546    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1156.pem
	I0310 19:53:29.878685    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Mar  4 00:26 /usr/share/ca-certificates/1156.pem
	I0310 19:53:29.879128    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 00:26 /usr/share/ca-certificates/1156.pem
	I0310 19:53:29.900015    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1156.pem
	I0310 19:53:29.921583    3088 command_runner.go:124] > 51391683
	I0310 19:53:29.935021    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1156.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:29.977981    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1984.pem && ln -fs /usr/share/ca-certificates/1984.pem /etc/ssl/certs/1984.pem"
	I0310 19:53:30.022289    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1984.pem
	I0310 19:53:30.039893    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan  6 21:55 /usr/share/ca-certificates/1984.pem
	I0310 19:53:30.039893    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 21:55 /usr/share/ca-certificates/1984.pem
	I0310 19:53:30.051911    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1984.pem
	I0310 19:53:30.078704    3088 command_runner.go:124] > 51391683
	I0310 19:53:30.090715    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1984.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:30.132886    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8748.pem && ln -fs /usr/share/ca-certificates/8748.pem /etc/ssl/certs/8748.pem"
	I0310 19:53:30.171601    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8748.pem
	I0310 19:53:30.192359    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan  7 19:09 /usr/share/ca-certificates/8748.pem
	I0310 19:53:30.192558    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 19:09 /usr/share/ca-certificates/8748.pem
	I0310 19:53:30.203565    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8748.pem
	I0310 19:53:30.225881    3088 command_runner.go:124] > 51391683
	I0310 19:53:30.237966    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8748.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:30.278673    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3056.pem && ln -fs /usr/share/ca-certificates/3056.pem /etc/ssl/certs/3056.pem"
	I0310 19:53:30.330328    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3056.pem
	I0310 19:53:30.349395    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan  5 23:11 /usr/share/ca-certificates/3056.pem
	I0310 19:53:30.349597    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:11 /usr/share/ca-certificates/3056.pem
	I0310 19:53:30.361108    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3056.pem
	I0310 19:53:30.386701    3088 command_runner.go:124] > 51391683
	I0310 19:53:30.400909    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3056.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:30.436123    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3920.pem && ln -fs /usr/share/ca-certificates/3920.pem /etc/ssl/certs/3920.pem"
	I0310 19:53:30.481181    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3920.pem
	I0310 19:53:30.500027    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Feb 19 22:06 /usr/share/ca-certificates/3920.pem
	I0310 19:53:30.500027    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 22:06 /usr/share/ca-certificates/3920.pem
	I0310 19:53:30.512282    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3920.pem
	I0310 19:53:30.537141    3088 command_runner.go:124] > 51391683
	I0310 19:53:30.550185    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3920.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:30.590843    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7432.pem && ln -fs /usr/share/ca-certificates/7432.pem /etc/ssl/certs/7432.pem"
	I0310 19:53:30.632637    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7432.pem
	I0310 19:53:30.652767    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 20 17:58 /usr/share/ca-certificates/7432.pem
	I0310 19:53:30.652986    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 17:58 /usr/share/ca-certificates/7432.pem
	I0310 19:53:30.664846    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7432.pem
	I0310 19:53:30.685945    3088 command_runner.go:124] > 51391683
	I0310 19:53:30.700110    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7432.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:30.739099    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1476.pem && ln -fs /usr/share/ca-certificates/1476.pem /etc/ssl/certs/1476.pem"
	I0310 19:53:30.776911    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1476.pem
	I0310 19:53:30.794710    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan  5 23:50 /usr/share/ca-certificates/1476.pem
	I0310 19:53:30.798295    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:50 /usr/share/ca-certificates/1476.pem
	I0310 19:53:30.808193    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1476.pem
	I0310 19:53:30.832099    3088 command_runner.go:124] > 51391683
	I0310 19:53:30.842603    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1476.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:30.884503    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4588.pem && ln -fs /usr/share/ca-certificates/4588.pem /etc/ssl/certs/4588.pem"
	I0310 19:53:30.929077    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4588.pem
	I0310 19:53:30.947901    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Mar  3 21:41 /usr/share/ca-certificates/4588.pem
	I0310 19:53:30.948806    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  3 21:41 /usr/share/ca-certificates/4588.pem
	I0310 19:53:30.959057    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4588.pem
	I0310 19:53:30.980853    3088 command_runner.go:124] > 51391683
	I0310 19:53:30.990965    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4588.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:31.039826    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5736.pem && ln -fs /usr/share/ca-certificates/5736.pem /etc/ssl/certs/5736.pem"
	I0310 19:53:31.083153    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5736.pem
	I0310 19:53:31.098633    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Feb 25 23:18 /usr/share/ca-certificates/5736.pem
	I0310 19:53:31.098633    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 25 23:18 /usr/share/ca-certificates/5736.pem
	I0310 19:53:31.116102    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5736.pem
	I0310 19:53:31.138147    3088 command_runner.go:124] > 51391683
	I0310 19:53:31.148057    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5736.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:31.189149    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7024.pem && ln -fs /usr/share/ca-certificates/7024.pem /etc/ssl/certs/7024.pem"
	I0310 19:53:31.230444    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7024.pem
	I0310 19:53:31.245000    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 20 23:11 /usr/share/ca-certificates/7024.pem
	I0310 19:53:31.245147    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 23:11 /usr/share/ca-certificates/7024.pem
	I0310 19:53:31.255624    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7024.pem
	I0310 19:53:31.277206    3088 command_runner.go:124] > 51391683
	I0310 19:53:31.291386    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7024.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:31.332692    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9520.pem && ln -fs /usr/share/ca-certificates/9520.pem /etc/ssl/certs/9520.pem"
	I0310 19:53:31.380496    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9520.pem
	I0310 19:53:31.397597    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Feb 19 14:54 /usr/share/ca-certificates/9520.pem
	I0310 19:53:31.398593    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 14:54 /usr/share/ca-certificates/9520.pem
	I0310 19:53:31.418085    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9520.pem
	I0310 19:53:31.440027    3088 command_runner.go:124] > 51391683
	I0310 19:53:31.450352    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9520.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:31.490940    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/232.pem && ln -fs /usr/share/ca-certificates/232.pem /etc/ssl/certs/232.pem"
	I0310 19:53:31.528872    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/232.pem
	I0310 19:53:31.547184    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 28 02:13 /usr/share/ca-certificates/232.pem
	I0310 19:53:31.547184    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 28 02:13 /usr/share/ca-certificates/232.pem
	I0310 19:53:31.564382    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/232.pem
	I0310 19:53:31.588650    3088 command_runner.go:124] > 51391683
	I0310 19:53:31.598825    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/232.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:31.648424    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0310 19:53:31.688369    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0310 19:53:31.705500    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1111 Jan  5 23:33 /usr/share/ca-certificates/minikubeCA.pem
	I0310 19:53:31.706605    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Jan  5 23:33 /usr/share/ca-certificates/minikubeCA.pem
	I0310 19:53:31.716496    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0310 19:53:31.741813    3088 command_runner.go:124] > b5213941
	I0310 19:53:31.762700    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0310 19:53:31.800019    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6856.pem && ln -fs /usr/share/ca-certificates/6856.pem /etc/ssl/certs/6856.pem"
	I0310 19:53:31.840104    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6856.pem
	I0310 19:53:31.859057    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan  6 00:21 /usr/share/ca-certificates/6856.pem
	I0310 19:53:31.859380    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 00:21 /usr/share/ca-certificates/6856.pem
	I0310 19:53:31.871926    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6856.pem
	I0310 19:53:31.896319    3088 command_runner.go:124] > 51391683
	I0310 19:53:31.910897    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6856.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:31.954231    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6368.pem && ln -fs /usr/share/ca-certificates/6368.pem /etc/ssl/certs/6368.pem"
	I0310 19:53:31.995551    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6368.pem
	I0310 19:53:32.016526    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 14 19:11 /usr/share/ca-certificates/6368.pem
	I0310 19:53:32.016735    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 19:11 /usr/share/ca-certificates/6368.pem
	I0310 19:53:32.029057    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6368.pem
	I0310 19:53:32.054770    3088 command_runner.go:124] > 51391683
	I0310 19:53:32.067551    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6368.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:32.108098    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6492.pem && ln -fs /usr/share/ca-certificates/6492.pem /etc/ssl/certs/6492.pem"
	I0310 19:53:32.148537    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6492.pem
	I0310 19:53:32.166565    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan  6 01:11 /usr/share/ca-certificates/6492.pem
	I0310 19:53:32.166783    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 01:11 /usr/share/ca-certificates/6492.pem
	I0310 19:53:32.180000    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6492.pem
	I0310 19:53:32.202821    3088 command_runner.go:124] > 51391683
	I0310 19:53:32.216716    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6492.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:32.257283    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4944.pem && ln -fs /usr/share/ca-certificates/4944.pem /etc/ssl/certs/4944.pem"
	I0310 19:53:32.309864    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4944.pem
	I0310 19:53:32.328224    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Mar  9 23:40 /usr/share/ca-certificates/4944.pem
	I0310 19:53:32.328673    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  9 23:40 /usr/share/ca-certificates/4944.pem
	I0310 19:53:32.340517    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4944.pem
	I0310 19:53:32.362812    3088 command_runner.go:124] > 51391683
	I0310 19:53:32.380796    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4944.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:32.429333    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10992.pem && ln -fs /usr/share/ca-certificates/10992.pem /etc/ssl/certs/10992.pem"
	I0310 19:53:32.471202    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/10992.pem
	I0310 19:53:32.491168    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 20 21:44 /usr/share/ca-certificates/10992.pem
	I0310 19:53:32.491393    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 21:44 /usr/share/ca-certificates/10992.pem
	I0310 19:53:32.502899    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10992.pem
	I0310 19:53:32.525993    3088 command_runner.go:124] > 51391683
	I0310 19:53:32.538056    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10992.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:32.583216    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5040.pem && ln -fs /usr/share/ca-certificates/5040.pem /etc/ssl/certs/5040.pem"
	I0310 19:53:32.626873    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5040.pem
	I0310 19:53:32.647539    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Mar 10 08:36 /usr/share/ca-certificates/5040.pem
	I0310 19:53:32.647694    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 08:36 /usr/share/ca-certificates/5040.pem
	I0310 19:53:32.659324    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5040.pem
	I0310 19:53:32.685902    3088 command_runner.go:124] > 51391683
	I0310 19:53:32.697690    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5040.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:32.738938    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6692.pem && ln -fs /usr/share/ca-certificates/6692.pem /etc/ssl/certs/6692.pem"
	I0310 19:53:32.782895    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6692.pem
	I0310 19:53:32.804782    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 14 20:42 /usr/share/ca-certificates/6692.pem
	I0310 19:53:32.804782    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 20:42 /usr/share/ca-certificates/6692.pem
	I0310 19:53:32.822137    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6692.pem
	I0310 19:53:32.843286    3088 command_runner.go:124] > 51391683
	I0310 19:53:32.864680    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6692.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:32.909814    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7452.pem && ln -fs /usr/share/ca-certificates/7452.pem /etc/ssl/certs/7452.pem"
	I0310 19:53:32.952428    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7452.pem
	I0310 19:53:32.973526    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Feb 20 00:41 /usr/share/ca-certificates/7452.pem
	I0310 19:53:32.976658    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 20 00:41 /usr/share/ca-certificates/7452.pem
	I0310 19:53:32.989054    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7452.pem
	I0310 19:53:33.013611    3088 command_runner.go:124] > 51391683
	I0310 19:53:33.029170    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7452.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:33.072306    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5372.pem && ln -fs /usr/share/ca-certificates/5372.pem /etc/ssl/certs/5372.pem"
	I0310 19:53:33.117752    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5372.pem
	I0310 19:53:33.135066    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 23 00:40 /usr/share/ca-certificates/5372.pem
	I0310 19:53:33.135234    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 23 00:40 /usr/share/ca-certificates/5372.pem
	I0310 19:53:33.147118    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5372.pem
	I0310 19:53:33.166299    3088 command_runner.go:124] > 51391683
	I0310 19:53:33.180107    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5372.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:33.221510    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9088.pem && ln -fs /usr/share/ca-certificates/9088.pem /etc/ssl/certs/9088.pem"
	I0310 19:53:33.258889    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9088.pem
	I0310 19:53:33.277265    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan  7 00:22 /usr/share/ca-certificates/9088.pem
	I0310 19:53:33.278091    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 00:22 /usr/share/ca-certificates/9088.pem
	I0310 19:53:33.288024    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9088.pem
	I0310 19:53:33.307036    3088 command_runner.go:124] > 51391683
	I0310 19:53:33.318675    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9088.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:33.359655    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3516.pem && ln -fs /usr/share/ca-certificates/3516.pem /etc/ssl/certs/3516.pem"
	I0310 19:53:33.401496    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3516.pem
	I0310 19:53:33.419050    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 15 19:10 /usr/share/ca-certificates/3516.pem
	I0310 19:53:33.419485    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 19:10 /usr/share/ca-certificates/3516.pem
	I0310 19:53:33.429816    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3516.pem
	I0310 19:53:33.454397    3088 command_runner.go:124] > 51391683
	I0310 19:53:33.465565    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3516.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:33.500264    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5172.pem && ln -fs /usr/share/ca-certificates/5172.pem /etc/ssl/certs/5172.pem"
	I0310 19:53:33.550915    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5172.pem
	I0310 19:53:33.567718    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 26 21:25 /usr/share/ca-certificates/5172.pem
	I0310 19:53:33.567718    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 26 21:25 /usr/share/ca-certificates/5172.pem
	I0310 19:53:33.580121    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5172.pem
	I0310 19:53:33.604521    3088 command_runner.go:124] > 51391683
	I0310 19:53:33.615589    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5172.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:33.654330    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4452.pem && ln -fs /usr/share/ca-certificates/4452.pem /etc/ssl/certs/4452.pem"
	I0310 19:53:33.692439    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4452.pem
	I0310 19:53:33.713393    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 22 23:48 /usr/share/ca-certificates/4452.pem
	I0310 19:53:33.714279    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 22 23:48 /usr/share/ca-certificates/4452.pem
	I0310 19:53:33.727104    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4452.pem
	I0310 19:53:33.747826    3088 command_runner.go:124] > 51391683
	I0310 19:53:33.759389    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4452.pem /etc/ssl/certs/51391683.0"
	I0310 19:53:33.786157    3088 kubeadm.go:385] StartCluster: {Name:multinode-20210310194323-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:multinode-20210310194323-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:
[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.97 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.20.2 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true}
	I0310 19:53:33.795461    3088 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0310 19:53:33.916648    3088 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0310 19:53:33.943830    3088 command_runner.go:124] > /var/lib/kubelet/config.yaml
	I0310 19:53:33.944467    3088 command_runner.go:124] > /var/lib/kubelet/kubeadm-flags.env
	I0310 19:53:33.944467    3088 command_runner.go:124] > /var/lib/minikube/etcd:
	I0310 19:53:33.944467    3088 command_runner.go:124] > member
	I0310 19:53:33.945801    3088 kubeadm.go:396] found existing configuration files, will attempt cluster restart
	I0310 19:53:33.945801    3088 kubeadm.go:594] restartCluster start
	I0310 19:53:33.956289    3088 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0310 19:53:33.981176    3088 kubeadm.go:125] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0310 19:53:33.996052    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	I0310 19:53:34.494307    3088 kubeconfig.go:117] verify returned: extract IP: "multinode-20210310194323-6496" does not appear in C:\Users\jenkins/.kube/config
	I0310 19:53:34.494999    3088 kubeconfig.go:128] "multinode-20210310194323-6496" context is missing from C:\Users\jenkins/.kube/config - will repair!
	I0310 19:53:34.495869    3088 lock.go:36] WriteFile acquiring C:\Users\jenkins/.kube/config: {Name:mk815881434fcb75cb1a8bc7f2c24cf067660bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 19:53:34.511090    3088 kapi.go:59] client config for multinode-20210310194323-6496: &rest.Config{Host:"https://127.0.0.1:55051", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins\\.minikube\\profiles\\multinode-20210310194323-6496/client.crt", KeyFile:"C:\\Users\\jenkins\\.minikube\\profiles\\multinode-20210310194323-6496/client.key", CAFile:"C:\\Users\\jenkins\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompr
ession:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2611020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)}
	I0310 19:53:34.537023    3088 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0310 19:53:34.565220    3088 api_server.go:146] Checking apiserver status ...
	I0310 19:53:34.581022    3088 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0310 19:53:34.620120    3088 api_server.go:150] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0310 19:53:34.620120    3088 kubeadm.go:573] needs reconfigure: apiserver in state Stopped
	I0310 19:53:34.620120    3088 kubeadm.go:1042] stopping kube-system containers ...
	I0310 19:53:34.629216    3088 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0310 19:53:34.739369    3088 command_runner.go:124] > 7ce422d22c30
	I0310 19:53:34.739369    3088 command_runner.go:124] > cf643b2bb13a
	I0310 19:53:34.739369    3088 command_runner.go:124] > 8e25ad7ac878
	I0310 19:53:34.739369    3088 command_runner.go:124] > b6d8885bf62c
	I0310 19:53:34.739369    3088 command_runner.go:124] > 5dd905038955
	I0310 19:53:34.739369    3088 command_runner.go:124] > b285d1fca513
	I0310 19:53:34.739369    3088 command_runner.go:124] > af5652213b63
	I0310 19:53:34.739369    3088 command_runner.go:124] > 696c45777987
	I0310 19:53:34.739369    3088 command_runner.go:124] > c9e9e409c8d1
	I0310 19:53:34.739552    3088 command_runner.go:124] > 90d50f811eb4
	I0310 19:53:34.739552    3088 command_runner.go:124] > 11af52e50d91
	I0310 19:53:34.739552    3088 command_runner.go:124] > 5e3898b62288
	I0310 19:53:34.739552    3088 command_runner.go:124] > fcdd27401671
	I0310 19:53:34.739552    3088 command_runner.go:124] > b19167915dae
	I0310 19:53:34.739552    3088 command_runner.go:124] > 904c19c6b486
	I0310 19:53:34.739552    3088 command_runner.go:124] > 62c13f20a591
	I0310 19:53:34.739552    3088 docker.go:261] Stopping containers: [7ce422d22c30 cf643b2bb13a 8e25ad7ac878 b6d8885bf62c 5dd905038955 b285d1fca513 af5652213b63 696c45777987 c9e9e409c8d1 90d50f811eb4 11af52e50d91 5e3898b62288 fcdd27401671 b19167915dae 904c19c6b486 62c13f20a591]
	I0310 19:53:34.749078    3088 ssh_runner.go:149] Run: docker stop 7ce422d22c30 cf643b2bb13a 8e25ad7ac878 b6d8885bf62c 5dd905038955 b285d1fca513 af5652213b63 696c45777987 c9e9e409c8d1 90d50f811eb4 11af52e50d91 5e3898b62288 fcdd27401671 b19167915dae 904c19c6b486 62c13f20a591
	I0310 19:53:34.854281    3088 command_runner.go:124] > 7ce422d22c30
	I0310 19:53:34.855189    3088 command_runner.go:124] > cf643b2bb13a
	I0310 19:53:34.855189    3088 command_runner.go:124] > 8e25ad7ac878
	I0310 19:53:34.858489    3088 command_runner.go:124] > b6d8885bf62c
	I0310 19:53:34.858489    3088 command_runner.go:124] > 5dd905038955
	I0310 19:53:34.858489    3088 command_runner.go:124] > b285d1fca513
	I0310 19:53:34.858489    3088 command_runner.go:124] > af5652213b63
	I0310 19:53:34.858489    3088 command_runner.go:124] > 696c45777987
	I0310 19:53:34.858489    3088 command_runner.go:124] > c9e9e409c8d1
	I0310 19:53:34.858489    3088 command_runner.go:124] > 90d50f811eb4
	I0310 19:53:34.858489    3088 command_runner.go:124] > 11af52e50d91
	I0310 19:53:34.858489    3088 command_runner.go:124] > 5e3898b62288
	I0310 19:53:34.858489    3088 command_runner.go:124] > fcdd27401671
	I0310 19:53:34.858489    3088 command_runner.go:124] > b19167915dae
	I0310 19:53:34.859180    3088 command_runner.go:124] > 904c19c6b486
	I0310 19:53:34.859180    3088 command_runner.go:124] > 62c13f20a591
	I0310 19:53:34.881991    3088 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0310 19:53:34.929292    3088 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0310 19:53:34.957088    3088 command_runner.go:124] > -rw------- 1 root root 5611 Mar 10 19:45 /etc/kubernetes/admin.conf
	I0310 19:53:34.957088    3088 command_runner.go:124] > -rw------- 1 root root 5629 Mar 10 19:45 /etc/kubernetes/controller-manager.conf
	I0310 19:53:34.957088    3088 command_runner.go:124] > -rw------- 1 root root 2055 Mar 10 19:45 /etc/kubernetes/kubelet.conf
	I0310 19:53:34.957088    3088 command_runner.go:124] > -rw------- 1 root root 5581 Mar 10 19:45 /etc/kubernetes/scheduler.conf
	I0310 19:53:34.957088    3088 kubeadm.go:153] found existing configuration files:
	-rw------- 1 root root 5611 Mar 10 19:45 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5629 Mar 10 19:45 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2055 Mar 10 19:45 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5581 Mar 10 19:45 /etc/kubernetes/scheduler.conf
	
	I0310 19:53:34.970647    3088 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0310 19:53:34.995392    3088 command_runner.go:124] >     server: https://control-plane.minikube.internal:8443
	I0310 19:53:35.007180    3088 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0310 19:53:35.036298    3088 command_runner.go:124] >     server: https://control-plane.minikube.internal:8443
	I0310 19:53:35.046407    3088 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0310 19:53:35.074094    3088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0310 19:53:35.091021    3088 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0310 19:53:35.133408    3088 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0310 19:53:35.162089    3088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0310 19:53:35.179390    3088 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0310 19:53:35.222728    3088 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0310 19:53:35.245097    3088 kubeadm.go:670] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0310 19:53:35.246321    3088 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0310 19:53:35.710539    3088 command_runner.go:124] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0310 19:53:35.710539    3088 command_runner.go:124] > [certs] Using existing ca certificate authority
	I0310 19:53:35.710539    3088 command_runner.go:124] > [certs] Using existing apiserver certificate and key on disk
	I0310 19:53:35.710539    3088 command_runner.go:124] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0310 19:53:35.710539    3088 command_runner.go:124] > [certs] Using existing front-proxy-ca certificate authority
	I0310 19:53:35.711067    3088 command_runner.go:124] > [certs] Using existing front-proxy-client certificate and key on disk
	I0310 19:53:35.711067    3088 command_runner.go:124] > [certs] Using existing etcd/ca certificate authority
	I0310 19:53:35.711067    3088 command_runner.go:124] > [certs] Using existing etcd/server certificate and key on disk
	I0310 19:53:35.711067    3088 command_runner.go:124] > [certs] Using existing etcd/peer certificate and key on disk
	I0310 19:53:35.711067    3088 command_runner.go:124] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0310 19:53:35.711067    3088 command_runner.go:124] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0310 19:53:35.711067    3088 command_runner.go:124] > [certs] Using the existing "sa" key
	I0310 19:53:35.711067    3088 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0310 19:53:36.073538    3088 command_runner.go:124] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0310 19:53:36.687590    3088 command_runner.go:124] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0310 19:53:37.077224    3088 command_runner.go:124] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0310 19:53:37.602733    3088 command_runner.go:124] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0310 19:53:37.856428    3088 command_runner.go:124] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0310 19:53:37.867781    3088 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.1567171s)
	I0310 19:53:37.868038    3088 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0310 19:53:38.434467    3088 command_runner.go:124] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0310 19:53:38.435215    3088 command_runner.go:124] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0310 19:53:38.435215    3088 command_runner.go:124] > [kubelet-start] Starting the kubelet
	I0310 19:53:38.435215    3088 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0310 19:53:39.073688    3088 command_runner.go:124] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0310 19:53:39.074415    3088 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0310 19:53:39.134602    3088 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0310 19:53:39.140110    3088 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0310 19:53:39.155561    3088 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0310 19:53:39.846464    3088 command_runner.go:124] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0310 19:53:39.871172    3088 kubeadm.go:687] waiting for restarted kubelet to initialise ...
	I0310 19:53:39.881279    3088 retry.go:31] will retry after 276.165072ms: Get "https://127.0.0.1:55051/api/v1/namespaces/kube-system/pods": EOF
	I0310 19:53:40.164207    3088 retry.go:31] will retry after 540.190908ms: Get "https://127.0.0.1:55051/api/v1/namespaces/kube-system/pods": EOF
	I0310 19:53:40.712965    3088 retry.go:31] will retry after 655.06503ms: Get "https://127.0.0.1:55051/api/v1/namespaces/kube-system/pods": EOF
	I0310 19:53:41.375995    3088 retry.go:31] will retry after 791.196345ms: Get "https://127.0.0.1:55051/api/v1/namespaces/kube-system/pods": EOF
	I0310 19:53:42.178996    3088 retry.go:31] will retry after 1.170244332s: Get "https://127.0.0.1:55051/api/v1/namespaces/kube-system/pods": EOF
	I0310 19:53:43.365071    3088 retry.go:31] will retry after 2.253109428s: Get "https://127.0.0.1:55051/api/v1/namespaces/kube-system/pods": EOF
	I0310 19:53:45.627121    3088 retry.go:31] will retry after 1.610739793s: Get "https://127.0.0.1:55051/api/v1/namespaces/kube-system/pods": EOF
	I0310 19:53:57.369700    3088 retry.go:31] will retry after 2.804311738s: kubelet not initialised
	I0310 19:54:00.199422    3088 retry.go:31] will retry after 3.824918958s: kubelet not initialised
	I0310 19:54:04.060727    3088 retry.go:31] will retry after 7.69743562s: kubelet not initialised
	I0310 19:54:11.798821    3088 retry.go:31] will retry after 14.635568968s: kubelet not initialised
	I0310 19:54:26.478511    3088 retry.go:31] will retry after 28.406662371s: kubelet not initialised
	I0310 19:54:54.917365    3088 kubeadm.go:704] kubelet initialised
	I0310 19:54:54.917658    3088 kubeadm.go:705] duration metric: took 1m15.046304s waiting for restarted kubelet to initialise ...
	I0310 19:54:54.917658    3088 pod_ready.go:36] extra waiting for kube-system core pods [kube-dns etcd kube-apiserver kube-controller-manager kube-proxy kube-scheduler] to be Ready ...
	I0310 19:54:54.917658    3088 pod_ready.go:59] waiting 4m0s for pod with "kube-dns" label in "kube-system" namespace to be Ready ...
	I0310 19:54:54.949377    3088 pod_ready.go:97] pod "coredns-74ff55c5b-jq4n9" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 19:54:13 +0000 GMT Reason: Message:}
	I0310 19:54:54.949377    3088 pod_ready.go:62] duration metric: took 31.7186ms to run WaitForPodReadyByLabel for pod with "kube-dns" label in "kube-system" namespace ...
	I0310 19:54:54.949377    3088 pod_ready.go:59] waiting 4m0s for pod with "etcd" label in "kube-system" namespace to be Ready ...
	I0310 19:54:54.986628    3088 pod_ready.go:97] pod "etcd-multinode-20210310194323-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 19:54:07 +0000 GMT Reason: Message:}
	I0310 19:54:54.986628    3088 pod_ready.go:62] duration metric: took 37.2508ms to run WaitForPodReadyByLabel for pod with "etcd" label in "kube-system" namespace ...
	I0310 19:54:54.986628    3088 pod_ready.go:59] waiting 4m0s for pod with "kube-apiserver" label in "kube-system" namespace to be Ready ...
	I0310 19:54:55.015085    3088 pod_ready.go:97] pod "kube-apiserver-multinode-20210310194323-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 19:54:08 +0000 GMT Reason: Message:}
	I0310 19:54:55.015085    3088 pod_ready.go:62] duration metric: took 28.4576ms to run WaitForPodReadyByLabel for pod with "kube-apiserver" label in "kube-system" namespace ...
	I0310 19:54:55.015085    3088 pod_ready.go:59] waiting 4m0s for pod with "kube-controller-manager" label in "kube-system" namespace to be Ready ...
	I0310 19:54:55.046031    3088 pod_ready.go:97] pod "kube-controller-manager-multinode-20210310194323-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 19:54:07 +0000 GMT Reason: Message:}
	I0310 19:54:55.046031    3088 pod_ready.go:62] duration metric: took 30.9458ms to run WaitForPodReadyByLabel for pod with "kube-controller-manager" label in "kube-system" namespace ...
	I0310 19:54:55.046031    3088 pod_ready.go:59] waiting 4m0s for pod with "kube-proxy" label in "kube-system" namespace to be Ready ...
	I0310 19:54:55.074027    3088 pod_ready.go:97] pod "kube-proxy-7rchm" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 19:54:12 +0000 GMT Reason: Message:}
	I0310 19:54:55.074027    3088 pod_ready.go:62] duration metric: took 27.9962ms to run WaitForPodReadyByLabel for pod with "kube-proxy" label in "kube-system" namespace ...
	I0310 19:54:55.074027    3088 pod_ready.go:59] waiting 4m0s for pod with "kube-scheduler" label in "kube-system" namespace to be Ready ...
	I0310 19:54:55.104946    3088 pod_ready.go:97] pod "kube-scheduler-multinode-20210310194323-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 19:47:09 +0000 GMT Reason: Message:}
	I0310 19:54:55.104946    3088 pod_ready.go:62] duration metric: took 30.919ms to run WaitForPodReadyByLabel for pod with "kube-scheduler" label in "kube-system" namespace ...
	I0310 19:54:55.104946    3088 pod_ready.go:39] duration metric: took 187.288ms for extra waiting for kube-system core pods to be Ready ...
	I0310 19:54:55.104946    3088 api_server.go:48] waiting for apiserver process to appear ...
	I0310 19:54:55.120112    3088 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 19:54:55.190014    3088 command_runner.go:124] > 2555
	I0310 19:54:55.190278    3088 api_server.go:68] duration metric: took 85.3324ms to wait for apiserver process to appear ...
	I0310 19:54:55.190278    3088 api_server.go:84] waiting for apiserver healthz status ...
	I0310 19:54:55.190278    3088 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55051/healthz ...
	I0310 19:54:55.218950    3088 api_server.go:241] https://127.0.0.1:55051/healthz returned 200:
	ok
	I0310 19:54:55.225610    3088 api_server.go:137] control plane version: v1.20.2
	I0310 19:54:55.225787    3088 api_server.go:127] duration metric: took 35.5091ms to wait for apiserver health ...
	I0310 19:54:55.226476    3088 cni.go:74] Creating CNI manager for ""
	I0310 19:54:55.226760    3088 cni.go:136] 2 nodes found, recommending kindnet
	I0310 19:54:55.230581    3088 out.go:129] * Configuring CNI (Container Networking Interface) ...
	I0310 19:54:55.242870    3088 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0310 19:54:55.265448    3088 command_runner.go:124] >   File: /opt/cni/bin/portmap
	I0310 19:54:55.265448    3088 command_runner.go:124] >   Size: 2738488   	Blocks: 5352       IO Block: 4096   regular file
	I0310 19:54:55.265448    3088 command_runner.go:124] > Device: 72h/114d	Inode: 527034      Links: 1
	I0310 19:54:55.265448    3088 command_runner.go:124] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0310 19:54:55.265448    3088 command_runner.go:124] > Access: 2021-02-10 15:18:15.000000000 +0000
	I0310 19:54:55.265448    3088 command_runner.go:124] > Modify: 2021-02-10 15:18:15.000000000 +0000
	I0310 19:54:55.265448    3088 command_runner.go:124] > Change: 2021-03-01 19:44:53.130616000 +0000
	I0310 19:54:55.265448    3088 command_runner.go:124] >  Birth: -
	I0310 19:54:55.265448    3088 cni.go:160] applying CNI manifest using /var/lib/minikube/binaries/v1.20.2/kubectl ...
	I0310 19:54:55.265868    3088 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2298 bytes)
	I0310 19:54:55.325572    3088 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0310 19:54:56.005566    3088 command_runner.go:124] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0310 19:54:56.005865    3088 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0310 19:54:56.005865    3088 command_runner.go:124] > serviceaccount/kindnet unchanged
	I0310 19:54:56.005865    3088 command_runner.go:124] > daemonset.apps/kindnet configured
	I0310 19:54:56.006149    3088 system_pods.go:41] waiting for kube-system pods to appear ...
	I0310 19:54:56.035927    3088 system_pods.go:57] 12 kube-system pods found
	I0310 19:54:56.036074    3088 system_pods.go:59] "coredns-74ff55c5b-jq4n9" [59fcc5d5-1d12-409a-88d8-46674adeb0e7] Running
	I0310 19:54:56.036074    3088 system_pods.go:59] "etcd-multinode-20210310194323-6496" [7355a92f-158f-4d8e-888d-9fe97a766922] Running
	I0310 19:54:56.036223    3088 system_pods.go:59] "kindnet-pdlkw" [bdcc23df-7069-4a7a-8cdc-89b12e006bf6] Running
	I0310 19:54:56.036223    3088 system_pods.go:59] "kindnet-vvk6s" [dba33385-2929-47cf-a14a-869967740392] Running
	I0310 19:54:56.036223    3088 system_pods.go:59] "kindnet-xn5hd" [41dfeb11-7af6-449b-999c-04fb65d2ba9d] Running
	I0310 19:54:56.036223    3088 system_pods.go:59] "kube-apiserver-multinode-20210310194323-6496" [9c82174a-7835-4268-832f-b5d33ee4ed77] Running
	I0310 19:54:56.036223    3088 system_pods.go:59] "kube-controller-manager-multinode-20210310194323-6496" [052eef6a-337b-4476-9681-5695f0e3ee90] Running
	I0310 19:54:56.036223    3088 system_pods.go:59] "kube-proxy-7rchm" [6247bab9-80ef-438a-806a-0c19ed9c39a2] Running
	I0310 19:54:56.036223    3088 system_pods.go:59] "kube-proxy-gjbjj" [af273b96-644c-4e71-82d0-b375b373a1df] Running
	I0310 19:54:56.036223    3088 system_pods.go:59] "kube-proxy-tdzlb" [d613357b-ba23-4106-8b5e-a32483597686] Running
	I0310 19:54:56.036223    3088 system_pods.go:59] "kube-scheduler-multinode-20210310194323-6496" [adc66c6d-e5b0-4c6b-b548-febdfb7a55fb] Running
	I0310 19:54:56.036223    3088 system_pods.go:59] "storage-provisioner" [75d9e0a4-c70e-445c-af14-4db9ef305719] Running
	I0310 19:54:56.036223    3088 system_pods.go:72] duration metric: took 30.0741ms to wait for pod list to return data ...
	I0310 19:54:56.036366    3088 node_conditions.go:101] verifying NodePressure condition ...
	I0310 19:54:56.048570    3088 node_conditions.go:121] node storage ephemeral capacity is 65792556Ki
	I0310 19:54:56.048570    3088 node_conditions.go:122] node cpu capacity is 4
	I0310 19:54:56.048570    3088 node_conditions.go:121] node storage ephemeral capacity is 65792556Ki
	I0310 19:54:56.048570    3088 node_conditions.go:122] node cpu capacity is 4
	I0310 19:54:56.048570    3088 node_conditions.go:104] duration metric: took 12.204ms to run NodePressure ...
	I0310 19:54:56.048570    3088 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0310 19:54:57.137293    3088 command_runner.go:124] > [addons] Applied essential addon: CoreDNS
	I0310 19:54:57.137293    3088 command_runner.go:124] > [addons] Applied essential addon: kube-proxy
	I0310 19:54:57.137293    3088 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.0887239s)
	I0310 19:54:57.137293    3088 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0310 19:54:57.193600    3088 command_runner.go:124] > -16
	I0310 19:54:57.193600    3088 ops.go:34] apiserver oom_adj: -16
	I0310 19:54:57.193600    3088 kubeadm.go:598] restartCluster took 1m23.2479221s
	I0310 19:54:57.193600    3088 kubeadm.go:387] StartCluster complete in 1m23.4075661s
	I0310 19:54:57.194410    3088 settings.go:142] acquiring lock: {Name:mk153ab5d002fd4991700e22f3eda9a43ee295f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 19:54:57.194410    3088 settings.go:150] Updating kubeconfig:  C:\Users\jenkins/.kube/config
	I0310 19:54:57.196797    3088 lock.go:36] WriteFile acquiring C:\Users\jenkins/.kube/config: {Name:mk815881434fcb75cb1a8bc7f2c24cf067660bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 19:54:57.223839    3088 kapi.go:59] client config for multinode-20210310194323-6496: &rest.Config{Host:"https://127.0.0.1:55051", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins\\.minikube\\profiles\\multinode-20210310194323-6496\\client.crt", KeyFile:"C:\\Users\\jenkins\\.minikube\\profiles\\multinode-20210310194323-6496\\client.key", CAFile:"C:\\Users\\jenkins\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2611020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)}
	I0310 19:54:57.250899    3088 kapi.go:233] deployment "coredns" in namespace "kube-system" and context "multinode-20210310194323-6496" rescaled to 1
	I0310 19:54:57.251266    3088 start.go:203] Will wait 6m0s for node up to 
	I0310 19:54:57.251266    3088 addons.go:381] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
	I0310 19:54:57.251266    3088 addons.go:58] Setting storage-provisioner=true in profile "multinode-20210310194323-6496"
	I0310 19:54:57.251266    3088 addons.go:134] Setting addon storage-provisioner=true in "multinode-20210310194323-6496"
	I0310 19:54:57.251266    3088 addons.go:58] Setting default-storageclass=true in profile "multinode-20210310194323-6496"
	W0310 19:54:57.251266    3088 addons.go:143] addon storage-provisioner should already be in state true
	I0310 19:54:57.251695    3088 addons.go:284] enableOrDisableStorageClasses default-storageclass=true on "multinode-20210310194323-6496"
	I0310 19:54:57.259893    3088 out.go:129] * Verifying Kubernetes components...
	I0310 19:54:57.254150    3088 host.go:66] Checking if "multinode-20210310194323-6496" exists ...
	I0310 19:54:57.254587    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310191609-6496 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496
	I0310 19:54:57.254587    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310083645-5040 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040
	I0310 19:54:57.254587    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106215525-1984 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984
	I0310 19:54:57.254587    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210123004019-5372 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372
	I0310 19:54:57.254587    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210309234032-4944 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944
	I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210224014800-800 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800
	I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210303214129-4588 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588
	I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210105233232-2512 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512
	I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115023213-8464 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464
	I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107190945-8748 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748
	I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120175851-7432 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432
	I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115191024-3516 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516
	I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210212145109-352 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352
	I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210213143925-7440 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440
	I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106002159-6856 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856
	I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120231122-7024 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024
	I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210126212539-5172 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172
	I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210225231842-5736 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736
	I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210308233820-5396 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396
	I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210119220838-6552 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552
	I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304002630-1156 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156
	I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304184021-4052 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052
	I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210128021318-232 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232
	I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120022529-1140 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140
	I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210220004129-7452 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452
	I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120214442-10992 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992
	I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106011107-6492 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492
	I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219220622-3920 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920
	I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210306072141-12056 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056
	I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219145454-9520 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520
	I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107002220-9088 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088
	I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210301195830-5700 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700
	I0310 19:54:57.254881    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210114204234-6692 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692
	I0310 19:54:57.254881    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210112045103-7160 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160
	I0310 19:54:57.341091    3088 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0310 19:54:57.351534    3088 cli_runner.go:115] Run: docker container inspect multinode-20210310194323-6496 --format={{.State.Status}}
	I0310 19:54:57.353978    3088 cli_runner.go:115] Run: docker container inspect multinode-20210310194323-6496 --format={{.State.Status}}
	I0310 19:54:57.465919    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	I0310 19:54:58.184147    3088 cache.go:93] acquiring lock: {Name:mk413751f23d1919a2f2162501025c6af3a2ad81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 19:54:58.185705    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 exists
	I0310 19:54:58.185892    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210106002159-6856" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106002159-6856" took 897.8442ms
	I0310 19:54:58.185892    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106002159-6856 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 succeeded
	I0310 19:54:58.209804    3088 cache.go:93] acquiring lock: {Name:mk9829358ec5b615719a34ef2b4c8c5314131bbf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 19:54:58.210658    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 exists
	I0310 19:54:58.211539    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210309234032-4944" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210309234032-4944" took 950.7734ms
	I0310 19:54:58.211539    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210309234032-4944 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 succeeded
	I0310 19:54:58.211945    3088 cache.go:93] acquiring lock: {Name:mk5de4935501776b790bd29801e913c817cce9cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 19:54:58.213152    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 exists
	I0310 19:54:58.213613    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210123004019-5372" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210123004019-5372" took 952.847ms
	I0310 19:54:58.214073    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210123004019-5372 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 succeeded
	I0310 19:54:58.263016    3088 cache.go:93] acquiring lock: {Name:mkc9a1c11079e53fedb3439203deb8305be63b2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 19:54:58.266775    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 exists
	I0310 19:54:58.267565    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210303214129-4588" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210303214129-4588" took 1.0059309s
	I0310 19:54:58.267565    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210303214129-4588 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 succeeded
	I0310 19:54:58.274245    3088 cache.go:93] acquiring lock: {Name:mkd8dd26dee4471c50a16459e3e56a843fbe7183 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 19:54:58.275359    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 exists
	I0310 19:54:58.278236    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210120231122-7024" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120231122-7024" took 987.6861ms
	I0310 19:54:58.278236    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120231122-7024 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 succeeded
	I0310 19:54:58.287932    3088 cache.go:93] acquiring lock: {Name:mkad0f7b57f74c6c730129cb06800211b2e1dbab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 19:54:58.288620    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 exists
	I0310 19:54:58.289427    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210120022529-1140" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120022529-1140" took 989.624ms
	I0310 19:54:58.289556    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120022529-1140 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 succeeded
	I0310 19:54:58.316074    3088 cache.go:93] acquiring lock: {Name:mkf96894dc732adcd1c856f98a56d65b2646f03e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 19:54:58.317201    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 exists
	I0310 19:54:58.317657    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210115191024-3516" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210115191024-3516" took 1.046645s
	I0310 19:54:58.317657    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210115191024-3516 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 succeeded
	I0310 19:54:58.321621    3088 cache.go:93] acquiring lock: {Name:mkab31196e3bf71b9c1e6a1e38e57ec6fb030bbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 19:54:58.322919    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 exists
	I0310 19:54:58.323421    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210220004129-7452" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210220004129-7452" took 1.0199259s
	I0310 19:54:58.323421    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210220004129-7452 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 succeeded
	I0310 19:54:58.333568    3088 cache.go:93] acquiring lock: {Name:mk5aaf725ee95074b60d5acdb56999da11d0d967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 19:54:58.334206    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 exists
	I0310 19:54:58.334206    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210213143925-7440" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210213143925-7440" took 1.0510521s
	I0310 19:54:58.334206    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210213143925-7440 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 succeeded
	I0310 19:54:58.378444    3088 cache.go:93] acquiring lock: {Name:mk74beba772a17b6c0792b37e1f3c84b8ae19a48 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 19:54:58.378898    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 exists
	I0310 19:54:58.380483    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210119220838-6552" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210119220838-6552" took 1.0856281s
	I0310 19:54:58.380888    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210119220838-6552 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 succeeded
	I0310 19:54:58.388250    3088 cache.go:93] acquiring lock: {Name:mk6cdb668632330066d74bea74662e26e6c7633f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 19:54:58.389329    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 exists
	I0310 19:54:58.390200    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210106215525-1984" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106215525-1984" took 1.1294341s
	I0310 19:54:58.390200    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106215525-1984 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 succeeded
	I0310 19:54:58.408071    3088 cache.go:93] acquiring lock: {Name:mk6e311fb193a5d30b249afa7255673dd7fc56b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 19:54:58.408334    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 exists
	I0310 19:54:58.408794    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210107002220-9088" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210107002220-9088" took 1.0902696s
	I0310 19:54:58.408995    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210107002220-9088 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 succeeded
	I0310 19:54:58.434499    3088 cache.go:93] acquiring lock: {Name:mk5d79a216b121a22277fa476959e69d0268a006 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 19:54:58.435399    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 exists
	I0310 19:54:58.435600    3088 cache.go:93] acquiring lock: {Name:mkf6f90f079186654799fde8101b48612aa6f339 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 19:54:58.436020    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210224014800-800" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210224014800-800" took 1.1743736s
	I0310 19:54:58.436020    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210224014800-800 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 succeeded
	I0310 19:54:58.436392    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 exists
	I0310 19:54:58.436615    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210212145109-352" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210212145109-352" took 1.1582992s
	I0310 19:54:58.436615    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210212145109-352 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 succeeded
	I0310 19:54:58.444160    3088 cache.go:93] acquiring lock: {Name:mk3f9eb5a6922e3da2b5e642fe1460b5c7a33453 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 19:54:58.444160    3088 cache.go:93] acquiring lock: {Name:mk30e0addf8d941e729fce2e9e6e58f4831fa9bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 19:54:58.444335    3088 cache.go:93] acquiring lock: {Name:mkd8c6f272dd5cb91af2d272705820baa75c5410 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 19:54:58.444706    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 exists
	I0310 19:54:58.444706    3088 cache.go:93] acquiring lock: {Name:mkcc9db267470950a8bd1fd66660e4d7ce7fb11a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 19:54:58.444929    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 exists
	I0310 19:54:58.444706    3088 cache.go:93] acquiring lock: {Name:mkbc5485bf0e792523a58cf470a7622695547966 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 19:54:58.445146    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 exists
	I0310 19:54:58.445423    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210120214442-10992" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120214442-10992" took 1.1412791s
	I0310 19:54:58.445423    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120214442-10992 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 succeeded
	I0310 19:54:58.445423    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 exists
	I0310 19:54:58.445618    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210115023213-8464" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210115023213-8464" took 1.1827136s
	I0310 19:54:58.445618    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210115023213-8464 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 succeeded
	I0310 19:54:58.445928    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 exists
	I0310 19:54:58.445928    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210120175851-7432" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120175851-7432" took 1.179353s
	I0310 19:54:58.445928    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120175851-7432 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 succeeded
	I0310 19:54:58.445928    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210107190945-8748" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210107190945-8748" took 1.1814267s
	I0310 19:54:58.445928    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210107190945-8748 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 succeeded
	I0310 19:54:58.446547    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210304184021-4052" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210304184021-4052" took 1.1485712s
	I0310 19:54:58.446547    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210304184021-4052 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 succeeded
	I0310 19:54:58.459102    3088 cache.go:93] acquiring lock: {Name:mk0c64ba734a0cdbeae55b08bb0b1b6723a680c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 19:54:58.459719    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 exists
	I0310 19:54:58.460203    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210310083645-5040" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210310083645-5040" took 1.199717s
	I0310 19:54:58.460203    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210310083645-5040 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 succeeded
	I0310 19:54:58.460203    3088 cache.go:93] acquiring lock: {Name:mkf74fc1bdd437dc31195924ffc024252ed6282c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 19:54:58.460674    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 exists
	I0310 19:54:58.460947    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210304002630-1156" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210304002630-1156" took 1.1643722s
	I0310 19:54:58.460947    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210304002630-1156 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 succeeded
	I0310 19:54:58.464130    3088 cache.go:93] acquiring lock: {Name:mk67b81c694fa10d152b7bddece57d430edf9ebf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 19:54:58.464717    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 exists
	I0310 19:54:58.464717    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210308233820-5396" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210308233820-5396" took 1.1705676s
	I0310 19:54:58.465186    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210308233820-5396 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 succeeded
	I0310 19:54:58.522095    3088 cache.go:93] acquiring lock: {Name:mk84b2a6095b735cf889c519b5874f080b2e195a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 19:54:58.523533    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 exists
	I0310 19:54:58.524089    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210219220622-3920" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210219220622-3920" took 1.2157861s
	I0310 19:54:58.524089    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210219220622-3920 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 succeeded
	I0310 19:54:58.538440    3088 cache.go:93] acquiring lock: {Name:mka2d29141752ca0c15ce625b99d3e259a454634 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 19:54:58.538440    3088 cache.go:93] acquiring lock: {Name:mkfbc537176e4a7054a8ff78a35c4c45ad4889d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 19:54:58.538959    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 exists
	I0310 19:54:58.539133    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 exists
	I0310 19:54:58.539444    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210105233232-2512" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210105233232-2512" took 1.2778104s
	I0310 19:54:58.539444    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210105233232-2512 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 succeeded
	I0310 19:54:58.539444    3088 cache.go:93] acquiring lock: {Name:mk634154e9c95d6e5b156154f097cbabdedf9f3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 19:54:58.539728    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210310191609-6496" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210310191609-6496" took 1.2792419s
	I0310 19:54:58.539728    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210310191609-6496 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 succeeded
	I0310 19:54:58.539967    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 exists
	I0310 19:54:58.540484    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210301195830-5700" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210301195830-5700" took 1.2215128s
	I0310 19:54:58.540484    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210301195830-5700 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 succeeded
	I0310 19:54:58.562632    3088 cache.go:93] acquiring lock: {Name:mkb0cb73f942a657cd3f168830d30cb3598567a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 19:54:58.563553    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 exists
	I0310 19:54:58.563553    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210306072141-12056" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210306072141-12056" took 1.2508073s
	I0310 19:54:58.563553    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210306072141-12056 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 succeeded
	I0310 19:54:58.564209    3088 cache.go:93] acquiring lock: {Name:mk1b277a131d0149dc1f34c6a5df09591c284c3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 19:54:58.564705    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 exists
	I0310 19:54:58.565767    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210128021318-232" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210128021318-232" took 1.2661886s
	I0310 19:54:58.565767    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210128021318-232 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 succeeded
	I0310 19:54:58.569885    3088 cache.go:93] acquiring lock: {Name:mkfe8ccab311cf6d2666a7508a8e979857b9770b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 19:54:58.570056    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 exists
	I0310 19:54:58.570056    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210219145454-9520" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210219145454-9520" took 1.2542729s
	I0310 19:54:58.570056    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210219145454-9520 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 succeeded
	I0310 19:54:58.574084    3088 cache.go:93] acquiring lock: {Name:mk3b31b5d9c66e58bae5a84d594af5a71c06fef6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 19:54:58.574956    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 exists
	I0310 19:54:58.574956    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210114204234-6692" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210114204234-6692" took 1.2541845s
	I0310 19:54:58.574956    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210114204234-6692 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 succeeded
	I0310 19:54:58.575739    3088 cache.go:93] acquiring lock: {Name:mk5795abf13cc8b7192a417aee0e32dee2b0467c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 19:54:58.575999    3088 cache.go:93] acquiring lock: {Name:mkb552f0ca2d9ea9965feba56885295e4020632a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 19:54:58.576177    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 exists
	I0310 19:54:58.576555    3088 cache.go:93] acquiring lock: {Name:mk6a939d4adc5b1a82c643cd3a34748a52c3e47d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 19:54:58.576802    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 exists
	I0310 19:54:58.576802    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210126212539-5172" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210126212539-5172" took 1.28451s
	I0310 19:54:58.576802    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210126212539-5172 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 succeeded
	I0310 19:54:58.576802    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210106011107-6492" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106011107-6492" took 1.2710939s
	I0310 19:54:58.577031    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106011107-6492 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 succeeded
	I0310 19:54:58.577031    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 exists
	I0310 19:54:58.577893    3088 cache.go:93] acquiring lock: {Name:mk17b3617b8bc7c68f0fe3347037485ee44000e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 19:54:58.577893    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210112045103-7160" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210112045103-7160" took 1.2561315s
	I0310 19:54:58.578110    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210112045103-7160 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 succeeded
	I0310 19:54:58.578353    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 exists
	I0310 19:54:58.578966    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210225231842-5736" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210225231842-5736" took 1.2859624s
	I0310 19:54:58.579182    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210225231842-5736 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 succeeded
	I0310 19:54:58.579182    3088 cache.go:73] Successfully saved all images to host disk.
	I0310 19:54:58.609758    3088 cli_runner.go:115] Run: docker container inspect multinode-20210310194323-6496 --format={{.State.Status}}
	I0310 19:54:58.858754    3088 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20210310194323-6496: (1.392837s)
	I0310 19:54:58.859742    3088 pod_ready.go:36] extra waiting for kube-system core pods [kube-dns etcd kube-apiserver kube-controller-manager kube-proxy kube-scheduler] to be Ready ...
	I0310 19:54:58.860283    3088 pod_ready.go:59] waiting 6m0s for pod with "kube-dns" label in "kube-system" namespace to be Ready ...
	I0310 19:54:58.873728    3088 cli_runner.go:168] Completed: docker container inspect multinode-20210310194323-6496 --format={{.State.Status}}: (1.521417s)
	I0310 19:54:58.877139    3088 kapi.go:59] client config for multinode-20210310194323-6496: &rest.Config{Host:"https://127.0.0.1:55051", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins\\.minikube\\profiles\\multinode-20210310194323-6496\\client.crt", KeyFile:"C:\\Users\\jenkins\\.minikube\\profiles\\multinode-20210310194323-6496\\client.key", CAFile:"C:\\Users\\jenkins\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2611020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)}
	I0310 19:54:58.906110    3088 cli_runner.go:168] Completed: docker container inspect multinode-20210310194323-6496 --format={{.State.Status}}: (1.5517271s)
	I0310 19:54:58.909116    3088 out.go:129]   - Using image gcr.io/k8s-minikube/storage-provisioner:v4
	I0310 19:54:58.909116    3088 addons.go:253] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0310 19:54:58.909116    3088 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0310 19:54:58.917128    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	I0310 19:54:58.930129    3088 pod_ready.go:97] pod "coredns-74ff55c5b-jq4n9" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 19:54:13 +0000 GMT Reason: Message:}
	I0310 19:54:58.930129    3088 pod_ready.go:62] duration metric: took 69.8464ms to run WaitForPodReadyByLabel for pod with "kube-dns" label in "kube-system" namespace ...
	I0310 19:54:58.930129    3088 pod_ready.go:59] waiting 6m0s for pod with "etcd" label in "kube-system" namespace to be Ready ...
	I0310 19:54:58.966343    3088 addons.go:134] Setting addon default-storageclass=true in "multinode-20210310194323-6496"
	W0310 19:54:58.966343    3088 addons.go:143] addon default-storageclass should already be in state true
	I0310 19:54:58.966978    3088 host.go:66] Checking if "multinode-20210310194323-6496" exists ...
	I0310 19:54:58.986781    3088 cli_runner.go:115] Run: docker container inspect multinode-20210310194323-6496 --format={{.State.Status}}
	I0310 19:54:58.995788    3088 pod_ready.go:97] pod "etcd-multinode-20210310194323-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 19:54:07 +0000 GMT Reason: Message:}
	I0310 19:54:58.995788    3088 pod_ready.go:62] duration metric: took 65.6588ms to run WaitForPodReadyByLabel for pod with "etcd" label in "kube-system" namespace ...
	I0310 19:54:58.995788    3088 pod_ready.go:59] waiting 6m0s for pod with "kube-apiserver" label in "kube-system" namespace to be Ready ...
	I0310 19:54:59.058686    3088 pod_ready.go:97] pod "kube-apiserver-multinode-20210310194323-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 19:54:08 +0000 GMT Reason: Message:}
	I0310 19:54:59.059029    3088 pod_ready.go:62] duration metric: took 63.241ms to run WaitForPodReadyByLabel for pod with "kube-apiserver" label in "kube-system" namespace ...
	I0310 19:54:59.059029    3088 pod_ready.go:59] waiting 6m0s for pod with "kube-controller-manager" label in "kube-system" namespace to be Ready ...
	I0310 19:54:59.111541    3088 pod_ready.go:97] pod "kube-controller-manager-multinode-20210310194323-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 19:54:07 +0000 GMT Reason: Message:}
	I0310 19:54:59.111985    3088 pod_ready.go:62] duration metric: took 52.956ms to run WaitForPodReadyByLabel for pod with "kube-controller-manager" label in "kube-system" namespace ...
	I0310 19:54:59.111985    3088 pod_ready.go:59] waiting 6m0s for pod with "kube-proxy" label in "kube-system" namespace to be Ready ...
	I0310 19:54:59.155983    3088 pod_ready.go:97] pod "kube-proxy-7rchm" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 19:54:12 +0000 GMT Reason: Message:}
	I0310 19:54:59.155983    3088 pod_ready.go:62] duration metric: took 43.9981ms to run WaitForPodReadyByLabel for pod with "kube-proxy" label in "kube-system" namespace ...
	I0310 19:54:59.155983    3088 pod_ready.go:59] waiting 6m0s for pod with "kube-scheduler" label in "kube-system" namespace to be Ready ...
	I0310 19:54:59.196758    3088 pod_ready.go:97] pod "kube-scheduler-multinode-20210310194323-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 19:47:09 +0000 GMT Reason: Message:}
	I0310 19:54:59.196758    3088 pod_ready.go:62] duration metric: took 40.7755ms to run WaitForPodReadyByLabel for pod with "kube-scheduler" label in "kube-system" namespace ...
	I0310 19:54:59.196758    3088 pod_ready.go:39] duration metric: took 336.4758ms for extra waiting for kube-system core pods to be Ready ...
	I0310 19:54:59.196758    3088 api_server.go:48] waiting for apiserver process to appear ...
	I0310 19:54:59.207527    3088 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 19:54:59.243873    3088 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 19:54:59.249880    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	I0310 19:54:59.338927    3088 command_runner.go:124] > 2555
	I0310 19:54:59.338927    3088 api_server.go:68] duration metric: took 2.0876643s to wait for apiserver process to appear ...
	I0310 19:54:59.338927    3088 api_server.go:84] waiting for apiserver healthz status ...
	I0310 19:54:59.338927    3088 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55051/healthz ...
	I0310 19:54:59.404031    3088 api_server.go:241] https://127.0.0.1:55051/healthz returned 200:
	ok
	I0310 19:54:59.417272    3088 api_server.go:137] control plane version: v1.20.2
	I0310 19:54:59.417272    3088 api_server.go:127] duration metric: took 78.3455ms to wait for apiserver health ...
	I0310 19:54:59.417272    3088 system_pods.go:41] waiting for kube-system pods to appear ...
	I0310 19:54:59.461955    3088 system_pods.go:57] 12 kube-system pods found
	I0310 19:54:59.466340    3088 system_pods.go:59] "coredns-74ff55c5b-jq4n9" [59fcc5d5-1d12-409a-88d8-46674adeb0e7] Running
	I0310 19:54:59.466469    3088 system_pods.go:59] "etcd-multinode-20210310194323-6496" [7355a92f-158f-4d8e-888d-9fe97a766922] Running
	I0310 19:54:59.466469    3088 system_pods.go:59] "kindnet-pdlkw" [bdcc23df-7069-4a7a-8cdc-89b12e006bf6] Running
	I0310 19:54:59.466469    3088 system_pods.go:59] "kindnet-vvk6s" [dba33385-2929-47cf-a14a-869967740392] Running
	I0310 19:54:59.466469    3088 system_pods.go:59] "kindnet-xn5hd" [41dfeb11-7af6-449b-999c-04fb65d2ba9d] Running
	I0310 19:54:59.466469    3088 system_pods.go:59] "kube-apiserver-multinode-20210310194323-6496" [9c82174a-7835-4268-832f-b5d33ee4ed77] Running
	I0310 19:54:59.466469    3088 system_pods.go:59] "kube-controller-manager-multinode-20210310194323-6496" [052eef6a-337b-4476-9681-5695f0e3ee90] Running
	I0310 19:54:59.466469    3088 system_pods.go:59] "kube-proxy-7rchm" [6247bab9-80ef-438a-806a-0c19ed9c39a2] Running
	I0310 19:54:59.466469    3088 system_pods.go:59] "kube-proxy-gjbjj" [af273b96-644c-4e71-82d0-b375b373a1df] Running
	I0310 19:54:59.466469    3088 system_pods.go:59] "kube-proxy-tdzlb" [d613357b-ba23-4106-8b5e-a32483597686] Running
	I0310 19:54:59.466469    3088 system_pods.go:59] "kube-scheduler-multinode-20210310194323-6496" [adc66c6d-e5b0-4c6b-b548-febdfb7a55fb] Running
	I0310 19:54:59.466469    3088 system_pods.go:59] "storage-provisioner" [75d9e0a4-c70e-445c-af14-4db9ef305719] Running
	I0310 19:54:59.466469    3088 system_pods.go:72] duration metric: took 49.197ms to wait for pod list to return data ...
	I0310 19:54:59.466469    3088 default_sa.go:33] waiting for default service account to be created ...
	I0310 19:54:59.506417    3088 default_sa.go:44] found service account: "default"
	I0310 19:54:59.506417    3088 default_sa.go:54] duration metric: took 39.948ms for default service account to be created ...
	I0310 19:54:59.506704    3088 system_pods.go:114] waiting for k8s-apps to be running ...
	I0310 19:54:59.539551    3088 system_pods.go:84] 12 kube-system pods found
	I0310 19:54:59.539780    3088 system_pods.go:87] "coredns-74ff55c5b-jq4n9" [59fcc5d5-1d12-409a-88d8-46674adeb0e7] Running
	I0310 19:54:59.539780    3088 system_pods.go:87] "etcd-multinode-20210310194323-6496" [7355a92f-158f-4d8e-888d-9fe97a766922] Running
	I0310 19:54:59.539780    3088 system_pods.go:87] "kindnet-pdlkw" [bdcc23df-7069-4a7a-8cdc-89b12e006bf6] Running
	I0310 19:54:59.539780    3088 system_pods.go:87] "kindnet-vvk6s" [dba33385-2929-47cf-a14a-869967740392] Running
	I0310 19:54:59.539780    3088 system_pods.go:87] "kindnet-xn5hd" [41dfeb11-7af6-449b-999c-04fb65d2ba9d] Running
	I0310 19:54:59.539780    3088 system_pods.go:87] "kube-apiserver-multinode-20210310194323-6496" [9c82174a-7835-4268-832f-b5d33ee4ed77] Running
	I0310 19:54:59.539780    3088 system_pods.go:87] "kube-controller-manager-multinode-20210310194323-6496" [052eef6a-337b-4476-9681-5695f0e3ee90] Running
	I0310 19:54:59.539780    3088 system_pods.go:87] "kube-proxy-7rchm" [6247bab9-80ef-438a-806a-0c19ed9c39a2] Running
	I0310 19:54:59.539780    3088 system_pods.go:87] "kube-proxy-gjbjj" [af273b96-644c-4e71-82d0-b375b373a1df] Running
	I0310 19:54:59.539780    3088 system_pods.go:87] "kube-proxy-tdzlb" [d613357b-ba23-4106-8b5e-a32483597686] Running
	I0310 19:54:59.539780    3088 system_pods.go:87] "kube-scheduler-multinode-20210310194323-6496" [adc66c6d-e5b0-4c6b-b548-febdfb7a55fb] Running
	I0310 19:54:59.539780    3088 system_pods.go:87] "storage-provisioner" [75d9e0a4-c70e-445c-af14-4db9ef305719] Running
	I0310 19:54:59.539780    3088 system_pods.go:124] duration metric: took 33.0755ms to wait for k8s-apps to be running ...
	I0310 19:54:59.539780    3088 system_svc.go:44] waiting for kubelet service to be running ....
	I0310 19:54:59.560075    3088 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0310 19:54:59.568653    3088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55054 SSHKeyPath:C:\Users\jenkins\.minikube\machines\multinode-20210310194323-6496\id_rsa Username:docker}
	I0310 19:54:59.608964    3088 addons.go:253] installing /etc/kubernetes/addons/storageclass.yaml
	I0310 19:54:59.608964    3088 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0310 19:54:59.620954    3088 system_svc.go:56] duration metric: took 81.1748ms WaitForService to wait for kubelet.
	I0310 19:54:59.620954    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	I0310 19:54:59.620954    3088 node_ready.go:35] waiting 6m0s for node status to be ready ...
	I0310 19:54:59.642742    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:54:59.854178    3088 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0310 19:54:59.908586    3088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55054 SSHKeyPath:C:\Users\jenkins\.minikube\machines\multinode-20210310194323-6496\id_rsa Username:docker}
	I0310 19:55:00.169046    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:55:00.203404    3088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55054 SSHKeyPath:C:\Users\jenkins\.minikube\machines\multinode-20210310194323-6496\id_rsa Username:docker}
	I0310 19:55:00.650896    3088 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0310 19:55:00.660519    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:55:00.877115    3088 command_runner.go:124] > serviceaccount/storage-provisioner unchanged
	I0310 19:55:00.877299    3088 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0310 19:55:00.877299    3088 command_runner.go:124] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0310 19:55:00.877299    3088 command_runner.go:124] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0310 19:55:00.877299    3088 command_runner.go:124] > endpoints/k8s.io-minikube-hostpath unchanged
	I0310 19:55:00.877299    3088 command_runner.go:124] > pod/storage-provisioner configured
	I0310 19:55:00.877447    3088 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.0232704s)
	I0310 19:55:00.877588    3088 command_runner.go:124] > kindest/kindnetd:v20210220-5b7e6d01
	I0310 19:55:00.878129    3088 command_runner.go:124] > k8s.gcr.io/kube-proxy:v1.20.2
	I0310 19:55:00.878129    3088 command_runner.go:124] > k8s.gcr.io/kube-controller-manager:v1.20.2
	I0310 19:55:00.878129    3088 command_runner.go:124] > k8s.gcr.io/kube-apiserver:v1.20.2
	I0310 19:55:00.878129    3088 command_runner.go:124] > k8s.gcr.io/kube-scheduler:v1.20.2
	I0310 19:55:00.878129    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210105233232-2512
	I0310 19:55:00.878129    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210106002159-6856
	I0310 19:55:00.878129    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210106011107-6492
	I0310 19:55:00.878297    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210106215525-1984
	I0310 19:55:00.878297    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210107002220-9088
	I0310 19:55:00.878407    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210107190945-8748
	I0310 19:55:00.878407    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210112045103-7160
	I0310 19:55:00.878407    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210114204234-6692
	I0310 19:55:00.878407    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210115023213-8464
	I0310 19:55:00.878407    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210115191024-3516
	I0310 19:55:00.878407    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210119220838-6552
	I0310 19:55:00.878407    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210120022529-1140
	I0310 19:55:00.878407    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210120175851-7432
	I0310 19:55:00.878407    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210120214442-10992
	I0310 19:55:00.878563    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210120231122-7024
	I0310 19:55:00.878563    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210123004019-5372
	I0310 19:55:00.878563    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210126212539-5172
	I0310 19:55:00.878694    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210128021318-232
	I0310 19:55:00.878694    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210212145109-352
	I0310 19:55:00.878694    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210213143925-7440
	I0310 19:55:00.878694    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210219145454-9520
	I0310 19:55:00.878831    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210219220622-3920
	I0310 19:55:00.878831    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210220004129-7452
	I0310 19:55:00.878831    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210224014800-800
	I0310 19:55:00.878831    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210225231842-5736
	I0310 19:55:00.878831    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210301195830-5700
	I0310 19:55:00.878831    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210303214129-4588
	I0310 19:55:00.878964    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210304002630-1156
	I0310 19:55:00.878964    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210304184021-4052
	I0310 19:55:00.878964    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210306072141-12056
	I0310 19:55:00.878964    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210308233820-5396
	I0310 19:55:00.878964    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210309234032-4944
	I0310 19:55:00.878964    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210310083645-5040
	I0310 19:55:00.878964    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210310191609-6496
	I0310 19:55:00.878964    3088 command_runner.go:124] > kubernetesui/dashboard:v2.1.0
	I0310 19:55:00.878964    3088 command_runner.go:124] > gcr.io/k8s-minikube/storage-provisioner:v4
	I0310 19:55:00.879095    3088 command_runner.go:124] > k8s.gcr.io/etcd:3.4.13-0
	I0310 19:55:00.879229    3088 command_runner.go:124] > k8s.gcr.io/coredns:1.7.0
	I0310 19:55:00.879229    3088 command_runner.go:124] > kubernetesui/metrics-scraper:v1.0.4
	I0310 19:55:00.879229    3088 command_runner.go:124] > k8s.gcr.io/pause:3.2
	I0310 19:55:00.879364    3088 ssh_runner.go:189] Completed: docker images --format {{.Repository}}:{{.Tag}}: (1.6354929s)
	I0310 19:55:00.879524    3088 docker.go:423] Got preloaded images: -- stdout --
	kindest/kindnetd:v20210220-5b7e6d01
	k8s.gcr.io/kube-proxy:v1.20.2
	k8s.gcr.io/kube-controller-manager:v1.20.2
	k8s.gcr.io/kube-apiserver:v1.20.2
	k8s.gcr.io/kube-scheduler:v1.20.2
	minikube-local-cache-test:functional-20210105233232-2512
	minikube-local-cache-test:functional-20210106002159-6856
	minikube-local-cache-test:functional-20210106011107-6492
	minikube-local-cache-test:functional-20210106215525-1984
	minikube-local-cache-test:functional-20210107002220-9088
	minikube-local-cache-test:functional-20210107190945-8748
	minikube-local-cache-test:functional-20210112045103-7160
	minikube-local-cache-test:functional-20210114204234-6692
	minikube-local-cache-test:functional-20210115023213-8464
	minikube-local-cache-test:functional-20210115191024-3516
	minikube-local-cache-test:functional-20210119220838-6552
	minikube-local-cache-test:functional-20210120022529-1140
	minikube-local-cache-test:functional-20210120175851-7432
	minikube-local-cache-test:functional-20210120214442-10992
	minikube-local-cache-test:functional-20210120231122-7024
	minikube-local-cache-test:functional-20210123004019-5372
	minikube-local-cache-test:functional-20210126212539-5172
	minikube-local-cache-test:functional-20210128021318-232
	minikube-local-cache-test:functional-20210212145109-352
	minikube-local-cache-test:functional-20210213143925-7440
	minikube-local-cache-test:functional-20210219145454-9520
	minikube-local-cache-test:functional-20210219220622-3920
	minikube-local-cache-test:functional-20210220004129-7452
	minikube-local-cache-test:functional-20210224014800-800
	minikube-local-cache-test:functional-20210225231842-5736
	minikube-local-cache-test:functional-20210301195830-5700
	minikube-local-cache-test:functional-20210303214129-4588
	minikube-local-cache-test:functional-20210304002630-1156
	minikube-local-cache-test:functional-20210304184021-4052
	minikube-local-cache-test:functional-20210306072141-12056
	minikube-local-cache-test:functional-20210308233820-5396
	minikube-local-cache-test:functional-20210309234032-4944
	minikube-local-cache-test:functional-20210310083645-5040
	minikube-local-cache-test:functional-20210310191609-6496
	kubernetesui/dashboard:v2.1.0
	gcr.io/k8s-minikube/storage-provisioner:v4
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0310 19:55:00.879656    3088 cache_images.go:73] Images are preloaded, skipping loading
	I0310 19:55:00.895461    3088 cli_runner.go:115] Run: docker container inspect multinode-20210310194323-6496-m02 --format={{.State.Status}}
	I0310 19:55:01.187118    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:55:01.362649    3088 command_runner.go:124] > storageclass.storage.k8s.io/standard unchanged
	I0310 19:55:01.376100    3088 out.go:129] * Enabled addons: storage-provisioner, default-storageclass
	I0310 19:55:01.376796    3088 addons.go:383] enableAddons completed in 4.1255366s
	I0310 19:55:01.446555    3088 cache_images.go:223] succeeded pushing to: multinode-20210310194323-6496
	I0310 19:55:01.446728    3088 cache_images.go:224] failed pushing to: 
	I0310 19:55:01.658956    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:55:02.157211    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	[... same node_ready.go:53 message for node "multinode-20210310194323-6496-m02" ("NodeStatusUnknown" / "Kubelet stopped posting node status.") repeated every ~500ms from 19:55:02 through 19:56:11, ~140 near-identical lines elided ...]
	I0310 19:56:12.156677    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:12.655494    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:13.157888    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:13.656123    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:14.157229    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:14.664088    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:15.154893    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:15.663969    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:16.156964    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:16.659989    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:17.157512    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:17.656274    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:18.158037    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:18.656000    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:19.155825    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:19.662432    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:20.155430    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:20.657460    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:21.158620    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:21.657227    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:22.158287    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:22.653550    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:23.163139    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:23.656589    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:24.158911    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:24.658728    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:25.158117    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:25.657501    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:26.156834    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:26.655162    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:27.155513    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:27.657240    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:28.157125    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:28.675973    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:29.154448    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:29.655673    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:30.155675    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:30.654819    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:31.157685    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:31.654513    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:32.155845    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:32.655364    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:33.154873    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:33.657659    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:34.162701    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:34.655454    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:35.156204    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:35.657330    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:36.155480    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:36.654096    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:37.154324    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:37.653929    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:38.155778    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:38.657155    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:39.155429    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:39.658009    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:40.157958    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:40.658499    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:41.155866    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:41.664423    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:42.157328    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:42.654588    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:43.156868    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:43.654773    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:44.155838    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:44.656910    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:45.156977    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:45.656955    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:46.162519    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:46.657318    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:47.157439    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:47.657193    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:48.157026    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:48.679250    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:49.156701    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:49.656311    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:50.155066    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:50.657238    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:51.158828    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:51.655236    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:52.156817    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:52.656865    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:53.159632    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:53.655440    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:54.155312    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:54.656768    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:55.154611    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:55.660652    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:56.160234    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:56.665781    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:57.161653    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:57.657644    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:58.158743    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:58.653965    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:59.159868    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:56:59.661061    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:00.154404    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:00.656908    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:01.159672    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:01.655863    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:02.155350    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:02.657057    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:03.156961    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:03.655482    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:04.156736    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:04.654757    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:05.157925    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:05.656511    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:06.157967    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:06.653917    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:07.160561    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:07.657101    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:08.157730    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:08.656908    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:09.155858    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:09.660586    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:10.155369    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:10.659914    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:11.157912    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:11.665628    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:12.157681    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:12.656357    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:13.156842    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:13.656124    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:14.156031    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:14.655199    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:15.162245    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:15.658341    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:16.156941    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:16.652987    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:17.153709    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:17.657683    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:18.156432    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:18.656255    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:57:19.155714    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	... [previous message repeated 171 times at ~0.5s intervals, 19:57:19 through 19:58:44] ...
	I0310 19:58:45.157941    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:58:45.657436    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:58:46.155235    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:58:46.656594    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:58:47.157904    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:58:47.654416    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:58:48.156826    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:58:48.658688    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:58:49.156297    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:58:49.664615    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:58:50.160595    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:58:50.654729    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:58:51.160150    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:58:51.656223    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:58:52.160248    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:58:52.655049    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:58:53.159994    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:58:53.657611    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:58:54.159545    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:58:54.659119    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:58:55.156960    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:58:55.656232    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:58:56.155994    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:58:56.657290    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:58:57.156236    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:58:57.653996    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:58:58.154749    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:58:58.655552    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:58:59.157881    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:58:59.660949    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:58:59.674369    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	I0310 19:58:59.674771    3088 node_ready.go:38] duration metric: took 4m0.0541379s to wait for WaitForNodeReady...
	I0310 19:58:59.680515    3088 out.go:129] 
	W0310 19:58:59.680879    3088 out.go:191] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: wait node ready: timed out waiting for the condition
	X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: wait node ready: timed out waiting for the condition
	W0310 19:58:59.681349    3088 out.go:191] * 
	* 
	W0310 19:58:59.681773    3088 out.go:191] * If the above advice does not help, please let us know: 
	* If the above advice does not help, please let us know: 
	W0310 19:58:59.681773    3088 out.go:191]   - https://github.com/kubernetes/minikube/issues/new/choose
	  - https://github.com/kubernetes/minikube/issues/new/choose
	I0310 19:58:59.684535    3088 out.go:129] 

** /stderr **
multinode_test.go:270: failed to start cluster. args "out/minikube-windows-amd64.exe start -p multinode-20210310194323-6496 --wait=true -v=8 --alsologtostderr --driver=docker" : exit status 80
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:226: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:227: (dbg) Run:  docker inspect multinode-20210310194323-6496
helpers_test.go:231: (dbg) docker inspect multinode-20210310194323-6496:

-- stdout --
	[
	    {
	        "Id": "86f284706e15db565cb427f12276c0b374db713559daa226eba17b53d718b32f",
	        "Created": "2021-03-10T19:43:35.2771562Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 84916,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-03-10T19:53:12.8454366Z",
	            "FinishedAt": "2021-03-10T19:52:58.4796654Z"
	        },
	        "Image": "sha256:a776c544501ab7f8d55c0f9d8df39bc284df5e744ef1ab4fa59bbd753c98d5f6",
	        "ResolvConfPath": "/var/lib/docker/containers/86f284706e15db565cb427f12276c0b374db713559daa226eba17b53d718b32f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/86f284706e15db565cb427f12276c0b374db713559daa226eba17b53d718b32f/hostname",
	        "HostsPath": "/var/lib/docker/containers/86f284706e15db565cb427f12276c0b374db713559daa226eba17b53d718b32f/hosts",
	        "LogPath": "/var/lib/docker/containers/86f284706e15db565cb427f12276c0b374db713559daa226eba17b53d718b32f/86f284706e15db565cb427f12276c0b374db713559daa226eba17b53d718b32f-json.log",
	        "Name": "/multinode-20210310194323-6496",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-20210310194323-6496:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-20210310194323-6496",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/17dad76ef94033708c821a4694f69c3eeadfd43c8c615fa325848553a934d906-init/diff:/var/lib/docker/overlay2/e5312d15a8a915e93a04215cc746ab0f3ee777399eec34d39c62e1159ded6475/diff:/var/lib/docker/overlay2/7a0a882052451678339ecb9d7440ec6d5af2189761df37cbde268f01c77df460/diff:/var/lib/docker/overlay2/746bde091f748e661ba6c95c88447941058b3ed556ae8093cbbb4e946b8cc8fc/diff:/var/lib/docker/overlay2/fc816087ea38e2594891ac87d2dada91401ed5005ffde006568049c9be7a9cb3/diff:/var/lib/docker/overlay2/c938fe5d14accfe7fb1cfd038f5f0c36e7c4e36df3f270be6928a759dfc0318c/diff:/var/lib/docker/overlay2/79f1d61f3af3e525b3f9a25c7bdaddec8275b7d0abab23c045e084119ae755dc/diff:/var/lib/docker/overlay2/90cf1530750ebebdcb04a136dc1eff0ae9c5f7d0f826d2942e3bc67cc0d42206/diff:/var/lib/docker/overlay2/c4edb0a62bb52174a4a17530aa58139aaca1b2f67bef36b9ac59af93c93e2c94/diff:/var/lib/docker/overlay2/684c18bb05910b09352e8708238b2fa6512de91e323e59bd78eacb12954baeb1/diff:/var/lib/docker/overlay2/2e3b94
4d36fea0e106f5f8885b981d040914c224a656b6480e4ccf00c624527c/diff:/var/lib/docker/overlay2/87489103e2d5dffa2fc32cae802418fe57adcbe87d86e2828e51be8620ef72d6/diff:/var/lib/docker/overlay2/1ceb557df677600f55a42e739f85c2aaa6ce2a1311fb93d5cc655d3e5c25ad62/diff:/var/lib/docker/overlay2/a49e38adf04466e822fa1b2fef7a5de41bb43b706f64d6d440f7e9f95f86634d/diff:/var/lib/docker/overlay2/9dc51530b3628075c33d53a96d8db831d206f65190ca62a4730d525c9d2433bc/diff:/var/lib/docker/overlay2/53f43d443cc953bcbb3b9fe305be45d40de2233dd6a1e94a6e0120c143df01b9/diff:/var/lib/docker/overlay2/5449e801ebacbe613538db4de658f2419f742a0327f7c0f5079ee525f64c2f45/diff:/var/lib/docker/overlay2/ea06a62429770eade2e9df8b04495201586fe6a867b2d3f827db06031f237171/diff:/var/lib/docker/overlay2/d6c5e6e0bb04112b08a45e5a42f364eaa6a1d4d0384b8833c6f268a32b51f9fd/diff:/var/lib/docker/overlay2/6ccb699e5aa7954f17536e35273361a64a06f22e3951b5adad1acd3645302d27/diff:/var/lib/docker/overlay2/004fc7885cd3ddf4f532a6047a393b87947996f738f36ee3e048130b82676264/diff:/var/lib/d
ocker/overlay2/28ff0102771d8fb3c2957c482cc5dde1a6d041144f1be78d385fd4a305b89048/diff:/var/lib/docker/overlay2/7cc8b00a8717390d3cf217bc23ebf0a54386c4dda751f8efe67a113c5f724000/diff:/var/lib/docker/overlay2/4f20f5888dc26fc4b177fad4ffbb2c9e5094bbe36dffc15530b13ee98b5ee0de/diff:/var/lib/docker/overlay2/5070ba0e8a3df0c0cb4c34bb528c4e1aa4cf13d689f2bbbb0a4033d021c3be55/diff:/var/lib/docker/overlay2/85fb29f9d3603bf7f2eb144a096a9fa432e57ecf575b0c26cc8e06a79e791202/diff:/var/lib/docker/overlay2/27786674974d44dc21a18719c8c334da44a78f825923f5e86233f5acb0fb9a6b/diff:/var/lib/docker/overlay2/6161fd89f27732c46f547c0e38ff9f16ab0dded81cef66938df3935f21afad40/diff:/var/lib/docker/overlay2/222f85b8439a41d62b9939a4060303543930a89e81bc2fbb3f5211b3bcc73501/diff:/var/lib/docker/overlay2/0ab769dc143c949b9367c48721351c8571d87199847b23acedce0134a8cf4d0d/diff:/var/lib/docker/overlay2/a3d1b5f3c1ccf55415e08e306d69739f2d6a4346a4b20ff9273ac1417c146a72/diff:/var/lib/docker/overlay2/a6111408a4deb25c6259bdabf49b0b77a4ae38a1c9c76c9dd238ab00e43
77edd/diff:/var/lib/docker/overlay2/beb22abaee6a1cc2ce6935aa31a8ea6aa14ba428d35b5c2423d1d9e4b5b1e88e/diff:/var/lib/docker/overlay2/0c228b1d0cd4cc1d3df10a7397af2a91846bfbad745d623822dc200ff7ddd4f7/diff:/var/lib/docker/overlay2/3c7f265116e8b14ac4c171583da8e68ad42c63e9faa735c10d5b01c2889c03d8/diff:/var/lib/docker/overlay2/85c9b77072d5575bcf4bc334d0e85cccec247e2ff21bc8a181ee7c1411481a92/diff:/var/lib/docker/overlay2/1b4797f63f1bf0acad8cd18715c737239cbce844810baf1378b65224c01957ff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/17dad76ef94033708c821a4694f69c3eeadfd43c8c615fa325848553a934d906/merged",
	                "UpperDir": "/var/lib/docker/overlay2/17dad76ef94033708c821a4694f69c3eeadfd43c8c615fa325848553a934d906/diff",
	                "WorkDir": "/var/lib/docker/overlay2/17dad76ef94033708c821a4694f69c3eeadfd43c8c615fa325848553a934d906/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-20210310194323-6496",
	                "Source": "/var/lib/docker/volumes/multinode-20210310194323-6496/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-20210310194323-6496",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-20210310194323-6496",
	                "name.minikube.sigs.k8s.io": "multinode-20210310194323-6496",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "361457681e3f6b46a81820c6e56c15d999f8949e9ef03b64345f6c0be5d59014",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55054"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55053"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55050"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55052"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55051"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/361457681e3f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-20210310194323-6496": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.97"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "86f284706e15",
	                        "multinode-20210310194323-6496"
	                    ],
	                    "NetworkID": "db35a559c44f224f1d2e0cc7a7b2ea3c0846ab24307ca9abee3182b82d1360af",
	                    "EndpointID": "ff521f1df4111cfe8964b71b6a7ac5d928415ac439da88b7098b43bcb0fdecf4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.97",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:61",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20210310194323-6496 -n multinode-20210310194323-6496
helpers_test.go:235: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-20210310194323-6496 -n multinode-20210310194323-6496: (3.0178222s)
helpers_test.go:240: <<< TestMultiNode/serial/RestartMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:241: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: minikube logs <======
helpers_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210310194323-6496 logs -n 25
helpers_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210310194323-6496 logs -n 25: (16.2808853s)
helpers_test.go:248: TestMultiNode/serial/RestartMultiNode logs: 
-- stdout --
	* ==> Docker <==
	* -- Logs begin at Wed 2021-03-10 19:53:13 UTC, end at Wed 2021-03-10 19:59:07 UTC. --
	* Mar 10 19:53:14 multinode-20210310194323-6496 systemd[1]: Starting Docker Application Container Engine...
	* Mar 10 19:53:14 multinode-20210310194323-6496 dockerd[214]: time="2021-03-10T19:53:14.439066600Z" level=info msg="Starting up"
	* Mar 10 19:53:14 multinode-20210310194323-6496 dockerd[214]: time="2021-03-10T19:53:14.447221000Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Mar 10 19:53:14 multinode-20210310194323-6496 dockerd[214]: time="2021-03-10T19:53:14.447327100Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Mar 10 19:53:14 multinode-20210310194323-6496 dockerd[214]: time="2021-03-10T19:53:14.447393900Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	* Mar 10 19:53:14 multinode-20210310194323-6496 dockerd[214]: time="2021-03-10T19:53:14.447428100Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Mar 10 19:53:14 multinode-20210310194323-6496 dockerd[214]: time="2021-03-10T19:53:14.453810200Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Mar 10 19:53:14 multinode-20210310194323-6496 dockerd[214]: time="2021-03-10T19:53:14.454277000Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Mar 10 19:53:14 multinode-20210310194323-6496 dockerd[214]: time="2021-03-10T19:53:14.454408700Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	* Mar 10 19:53:14 multinode-20210310194323-6496 dockerd[214]: time="2021-03-10T19:53:14.454443300Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Mar 10 19:53:14 multinode-20210310194323-6496 dockerd[214]: time="2021-03-10T19:53:14.482842900Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	* Mar 10 19:53:14 multinode-20210310194323-6496 dockerd[214]: time="2021-03-10T19:53:14.572072200Z" level=info msg="Loading containers: start."
	* Mar 10 19:53:15 multinode-20210310194323-6496 dockerd[214]: time="2021-03-10T19:53:15.219076600Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	* Mar 10 19:53:15 multinode-20210310194323-6496 dockerd[214]: time="2021-03-10T19:53:15.453192300Z" level=info msg="Loading containers: done."
	* Mar 10 19:53:15 multinode-20210310194323-6496 dockerd[214]: time="2021-03-10T19:53:15.529161100Z" level=info msg="Docker daemon" commit=46229ca graphdriver(s)=overlay2 version=20.10.3
	* Mar 10 19:53:15 multinode-20210310194323-6496 dockerd[214]: time="2021-03-10T19:53:15.529338600Z" level=info msg="Daemon has completed initialization"
	* Mar 10 19:53:15 multinode-20210310194323-6496 systemd[1]: Started Docker Application Container Engine.
	* Mar 10 19:53:15 multinode-20210310194323-6496 dockerd[214]: time="2021-03-10T19:53:15.629756400Z" level=info msg="API listen on [::]:2376"
	* Mar 10 19:53:15 multinode-20210310194323-6496 dockerd[214]: time="2021-03-10T19:53:15.653943700Z" level=info msg="API listen on /var/run/docker.sock"
	* Mar 10 19:54:31 multinode-20210310194323-6496 dockerd[214]: time="2021-03-10T19:54:31.465258000Z" level=info msg="ignoring event" container=d68b4f030db8268024181b638748bb60133b7bdc59f7b054be0fee8ee18bcc27 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                      CREATED             STATE               NAME                      ATTEMPT             POD ID
	* ef63b925bf125       85069258b98ac                                                                              4 minutes ago       Running             storage-provisioner       2                   8b0098e3e21b4
	* d5ecee0877820       bfe3a36ebd252                                                                              4 minutes ago       Running             coredns                   1                   1a7490d85eea4
	* 8a66e5ff425e7       2b60427ffa5fe                                                                              4 minutes ago       Running             kindnet-cni               1                   08ebf2eca5023
	* 92fbed57ee1b4       43154ddb57a83                                                                              4 minutes ago       Running             kube-proxy                1                   a7277e6fd4c22
	* d68b4f030db82       85069258b98ac                                                                              4 minutes ago       Exited              storage-provisioner       1                   8b0098e3e21b4
	* 14985ca120af4       ed2c44fbdd78b                                                                              5 minutes ago       Running             kube-scheduler            1                   003a278824212
	* 90d3393d36f2d       a8c2fdb8bf76e                                                                              5 minutes ago       Running             kube-apiserver            1                   5dc45e3461c66
	* 3a18567469081       a27166429d98e                                                                              5 minutes ago       Running             kube-controller-manager   1                   6ecf8664c87f5
	* 106b8e4196543       0369cf4303ffd                                                                              5 minutes ago       Running             etcd                      1                   fd4c873607e22
	* 7ce422d22c309       bfe3a36ebd252                                                                              12 minutes ago      Exited              coredns                   0                   cf643b2bb13a3
	* 8e25ad7ac8785       kindest/kindnetd@sha256:fad5da51341b25f46d6782cc59c2a3b0ca5c9dc18078d2192b488823cf9a69a6   12 minutes ago      Exited              kindnet-cni               0                   696c45777987e
	* b285d1fca513f       43154ddb57a83                                                                              13 minutes ago      Exited              kube-proxy                0                   af5652213b63c
	* c9e9e409c8d12       a27166429d98e                                                                              13 minutes ago      Exited              kube-controller-manager   0                   fcdd27401671f
	* 90d50f811eb4f       ed2c44fbdd78b                                                                              13 minutes ago      Exited              kube-scheduler            0                   62c13f20a591d
	* 11af52e50d919       a8c2fdb8bf76e                                                                              13 minutes ago      Exited              kube-apiserver            0                   b19167915dae7
	* 5e3898b62288b       0369cf4303ffd                                                                              13 minutes ago      Exited              etcd                      0                   904c19c6b4862
	* 
	* ==> coredns [7ce422d22c30] <==
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	* CoreDNS-1.7.0
	* linux/amd64, go1.14.4, f59c03d
	* [INFO] SIGTERM: Shutting down servers then terminating
	* [INFO] plugin/health: Going into lameduck mode for 5s
	* 
	* ==> coredns [d5ecee087782] <==
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	* CoreDNS-1.7.0
	* linux/amd64, go1.14.4, f59c03d
	* 
	* ==> describe nodes <==
	* Name:               multinode-20210310194323-6496
	* Roles:              control-plane,master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=multinode-20210310194323-6496
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=4d52d607da107e3e4541e40b81376520ee87d4c2
	*                     minikube.k8s.io/name=multinode-20210310194323-6496
	*                     minikube.k8s.io/updated_at=2021_03_10T19_45_37_0700
	*                     minikube.k8s.io/version=v1.18.1
	*                     node-role.kubernetes.io/control-plane=
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Wed, 10 Mar 2021 19:45:29 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  multinode-20210310194323-6496
	*   AcquireTime:     <unset>
	*   RenewTime:       Wed, 10 Mar 2021 19:59:03 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Wed, 10 Mar 2021 19:59:02 +0000   Wed, 10 Mar 2021 19:45:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Wed, 10 Mar 2021 19:59:02 +0000   Wed, 10 Mar 2021 19:45:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Wed, 10 Mar 2021 19:59:02 +0000   Wed, 10 Mar 2021 19:45:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Wed, 10 Mar 2021 19:59:02 +0000   Wed, 10 Mar 2021 19:45:47 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  192.168.49.97
	*   Hostname:    multinode-20210310194323-6496
	* Capacity:
	*   cpu:                4
	*   ephemeral-storage:  65792556Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             20481980Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                4
	*   ephemeral-storage:  65792556Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             20481980Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 84fb46bd39d2483a97ab4430ee4a5e3a
	*   System UUID:                6ee424b0-c77c-43fd-9ea0-0bd449298717
	*   Boot ID:                    1e43cb90-c73a-415b-9855-33dabbdc5a83
	*   Kernel Version:             4.19.121-linuxkit
	*   OS Image:                   Ubuntu 20.04.1 LTS
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://20.10.3
	*   Kubelet Version:            v1.20.2
	*   Kube-Proxy Version:         v1.20.2
	* PodCIDR:                      10.244.0.0/24
	* PodCIDRs:                     10.244.0.0/24
	* Non-terminated Pods:          (8 in total)
	*   Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	*   kube-system                 coredns-74ff55c5b-jq4n9                                  100m (2%)     0 (0%)      70Mi (0%)        170Mi (0%)     13m
	*   kube-system                 etcd-multinode-20210310194323-6496                       100m (2%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	*   kube-system                 kindnet-pdlkw                                            100m (2%)     100m (2%)   50Mi (0%)        50Mi (0%)      13m
	*   kube-system                 kube-apiserver-multinode-20210310194323-6496             250m (6%)     0 (0%)      0 (0%)           0 (0%)         13m
	*   kube-system                 kube-controller-manager-multinode-20210310194323-6496    200m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	*   kube-system                 kube-proxy-7rchm                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	*   kube-system                 kube-scheduler-multinode-20210310194323-6496             100m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	*   kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests    Limits
	*   --------           --------    ------
	*   cpu                850m (21%)  100m (2%)
	*   memory             220Mi (1%)  220Mi (1%)
	*   ephemeral-storage  100Mi (0%)  0 (0%)
	*   hugepages-1Gi      0 (0%)      0 (0%)
	*   hugepages-2Mi      0 (0%)      0 (0%)
	* Events:
	*   Type    Reason                   Age                    From        Message
	*   ----    ------                   ----                   ----        -------
	*   Normal  NodeHasSufficientMemory  13m (x7 over 13m)      kubelet     Node multinode-20210310194323-6496 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    13m (x7 over 13m)      kubelet     Node multinode-20210310194323-6496 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     13m (x6 over 13m)      kubelet     Node multinode-20210310194323-6496 status is now: NodeHasSufficientPID
	*   Normal  Starting                 13m                    kubelet     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  13m                    kubelet     Node multinode-20210310194323-6496 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    13m                    kubelet     Node multinode-20210310194323-6496 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     13m                    kubelet     Node multinode-20210310194323-6496 status is now: NodeHasSufficientPID
	*   Normal  NodeNotReady             13m                    kubelet     Node multinode-20210310194323-6496 status is now: NodeNotReady
	*   Normal  NodeAllocatableEnforced  13m                    kubelet     Updated Node Allocatable limit across pods
	*   Normal  NodeReady                13m                    kubelet     Node multinode-20210310194323-6496 status is now: NodeReady
	*   Normal  Starting                 13m                    kube-proxy  Starting kube-proxy.
	*   Normal  Starting                 5m29s                  kubelet     Starting kubelet.
	*   Normal  NodeAllocatableEnforced  5m29s                  kubelet     Updated Node Allocatable limit across pods
	*   Normal  NodeHasSufficientMemory  5m28s (x8 over 5m29s)  kubelet     Node multinode-20210310194323-6496 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    5m28s (x8 over 5m29s)  kubelet     Node multinode-20210310194323-6496 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     5m28s (x7 over 5m29s)  kubelet     Node multinode-20210310194323-6496 status is now: NodeHasSufficientPID
	*   Normal  Starting                 4m58s                  kube-proxy  Starting kube-proxy.
	* 
	* 
	* Name:               multinode-20210310194323-6496-m02
	* Roles:              <none>
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=multinode-20210310194323-6496-m02
	*                     kubernetes.io/os=linux
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Wed, 10 Mar 2021 19:48:31 +0000
	* Taints:             node.kubernetes.io/unreachable:NoExecute
	*                     node.kubernetes.io/unreachable:NoSchedule
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  multinode-20210310194323-6496-m02
	*   AcquireTime:     <unset>
	*   RenewTime:       Wed, 10 Mar 2021 19:52:42 +0000
	* Conditions:
	*   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	*   ----             ------    -----------------                 ------------------                ------              -------
	*   MemoryPressure   Unknown   Wed, 10 Mar 2021 19:49:34 +0000   Wed, 10 Mar 2021 19:54:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	*   DiskPressure     Unknown   Wed, 10 Mar 2021 19:49:34 +0000   Wed, 10 Mar 2021 19:54:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	*   PIDPressure      Unknown   Wed, 10 Mar 2021 19:49:34 +0000   Wed, 10 Mar 2021 19:54:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	*   Ready            Unknown   Wed, 10 Mar 2021 19:49:34 +0000   Wed, 10 Mar 2021 19:54:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	* Addresses:
	*   InternalIP:  192.168.49.3
	*   Hostname:    multinode-20210310194323-6496-m02
	* Capacity:
	*   cpu:                4
	*   ephemeral-storage:  65792556Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             20481980Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                4
	*   ephemeral-storage:  65792556Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             20481980Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 84fb46bd39d2483a97ab4430ee4a5e3a
	*   System UUID:                a0ef8f5e-af8b-424d-a25a-07a7b4747a65
	*   Boot ID:                    1e43cb90-c73a-415b-9855-33dabbdc5a83
	*   Kernel Version:             4.19.121-linuxkit
	*   OS Image:                   Ubuntu 20.04.1 LTS
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://20.10.3
	*   Kubelet Version:            v1.20.2
	*   Kube-Proxy Version:         v1.20.2
	* PodCIDR:                      10.244.1.0/24
	* PodCIDRs:                     10.244.1.0/24
	* Non-terminated Pods:          (2 in total)
	*   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	*   kube-system                 kindnet-xn5hd       100m (2%)     100m (2%)   50Mi (0%)        50Mi (0%)      10m
	*   kube-system                 kube-proxy-tdzlb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests   Limits
	*   --------           --------   ------
	*   cpu                100m (2%)  100m (2%)
	*   memory             50Mi (0%)  50Mi (0%)
	*   ephemeral-storage  0 (0%)     0 (0%)
	*   hugepages-1Gi      0 (0%)     0 (0%)
	*   hugepages-2Mi      0 (0%)     0 (0%)
	* Events:
	*   Type    Reason                   Age   From        Message
	*   ----    ------                   ----  ----        -------
	*   Normal  Starting                 10m   kubelet     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  10m   kubelet     Node multinode-20210310194323-6496-m02 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    10m   kubelet     Node multinode-20210310194323-6496-m02 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     10m   kubelet     Node multinode-20210310194323-6496-m02 status is now: NodeHasSufficientPID
	*   Normal  NodeAllocatableEnforced  10m   kubelet     Updated Node Allocatable limit across pods
	*   Normal  NodeReady                10m   kubelet     Node multinode-20210310194323-6496-m02 status is now: NodeReady
	*   Normal  Starting                 10m   kube-proxy  Starting kube-proxy.
	* 
	* ==> dmesg <==
	* [  +0.000006]  __hrtimer_run_queues+0x117/0x1c4
	* [  +0.000004]  ? ktime_get_update_offsets_now+0x36/0x95
	* [  +0.000002]  hrtimer_interrupt+0x92/0x165
	* [  +0.000004]  hv_stimer0_isr+0x20/0x2d
	* [  +0.000008]  hv_stimer0_vector_handler+0x3b/0x57
	* [  +0.000010]  hv_stimer0_callback_vector+0xf/0x20
	* [  +0.000001]  </IRQ>
	* [  +0.000002] RIP: 0010:native_safe_halt+0x7/0x8
	* [  +0.000002] Code: 60 02 df f0 83 44 24 fc 00 48 8b 00 a8 08 74 0b 65 81 25 dd ce 6f 71 ff ff ff 7f c3 e8 ce e6 72 ff f4 c3 e8 c7 e6 72 ff fb f4 <c3> 0f 1f 44 00 00 53 e8 69 0e 82 ff 65 8b 35 83 64 6f 71 31 ff e8
	* [  +0.000001] RSP: 0018:ffffffff8f203eb0 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff12
	* [  +0.000002] RAX: ffffffff8e918b30 RBX: 0000000000000000 RCX: ffffffff8f253150
	* [  +0.000001] RDX: 000000000012167e RSI: 0000000000000000 RDI: 0000000000000001
	* [  +0.000001] RBP: 0000000000000000 R08: 00000066a1710248 R09: 0000006be2541d3e
	* [  +0.000001] R10: ffff9130ad802288 R11: 0000000000000000 R12: 0000000000000000
	* [  +0.000001] R13: ffffffff8f215780 R14: 00000000f6d76244 R15: 0000000000000000
	* [  +0.000002]  ? __sched_text_end+0x1/0x1
	* [  +0.000011]  default_idle+0x1b/0x2c
	* [  +0.000001]  do_idle+0xe5/0x216
	* [  +0.000003]  cpu_startup_entry+0x6f/0x71
	* [  +0.000003]  start_kernel+0x4f6/0x514
	* [  +0.000006]  secondary_startup_64+0xa4/0xb0
	* [  +0.000006] ---[ end trace 8aa9ce4b885e8e86 ]---
	* [ +25.977799] hrtimer: interrupt took 3356400 ns
	* [Mar10 19:08] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	* [Mar10 19:49] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	* 
	* ==> etcd [106b8e419654] <==
	* 2021-03-10 19:55:01.475618 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:55:11.476725 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:55:21.476041 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:55:31.476342 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:55:41.475763 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:55:51.475561 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:56:01.474944 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:56:11.476141 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:56:21.474491 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:56:31.475609 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:56:41.475536 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:56:51.475168 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:57:01.475086 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:57:11.476096 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:57:21.474795 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:57:31.474161 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:57:41.476530 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:57:51.474628 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:58:01.475120 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:58:11.475114 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:58:21.474037 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:58:31.474000 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:58:41.473959 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:58:51.475951 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:59:01.474215 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 
	* ==> etcd [5e3898b62288] <==
	* 2021-03-10 19:50:29.178133 W | etcdserver: read-only range request "key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true " with result "range_response_count:0 size:7" took too long (450.4414ms) to execute
	* 2021-03-10 19:50:29.204205 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:50:29.220212 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1127" took too long (380.3678ms) to execute
	* 2021-03-10 19:50:29.220754 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (373.5974ms) to execute
	* 2021-03-10 19:50:38.798947 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:50:48.786793 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:50:55.969042 W | etcdserver: read-only range request "key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" count_only:true " with result "range_response_count:0 size:5" took too long (199.6447ms) to execute
	* 2021-03-10 19:50:55.969839 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (116.0011ms) to execute
	* 2021-03-10 19:50:58.798628 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:51:08.784642 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:51:12.134093 W | etcdserver: request "header:<ID:10490704450842146914 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:828 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1039 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>" with result "size:16" took too long (345.2272ms) to execute
	* 2021-03-10 19:51:12.137625 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (295.5991ms) to execute
	* 2021-03-10 19:51:12.141013 W | etcdserver: read-only range request "key:\"/registry/namespaces/\" range_end:\"/registry/namespaces0\" count_only:true " with result "range_response_count:0 size:7" took too long (232.9681ms) to execute
	* 2021-03-10 19:51:18.801937 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:51:28.782625 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:51:38.790098 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:51:48.787047 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:51:58.788401 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:52:08.789670 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:52:18.788421 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:52:28.787355 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:52:38.789330 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 19:52:47.366123 N | pkg/osutil: received terminated signal, shutting down...
	* WARNING: 2021/03/10 19:52:47 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* 2021-03-10 19:52:47.451140 I | etcdserver: skipped leadership transfer for single voting member cluster
	* 
	* ==> kernel <==
	*  19:59:11 up 59 min,  0 users,  load average: 0.79, 2.53, 3.97
	* Linux multinode-20210310194323-6496 4.19.121-linuxkit #1 SMP Tue Dec 1 17:50:32 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [11af52e50d91] <==
	* W0310 19:52:56.844453       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 19:52:56.931125       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 19:52:56.958181       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 19:52:56.961061       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 19:52:56.961091       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 19:52:56.969721       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 19:52:57.026106       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 19:52:57.031838       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 19:52:57.050368       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 19:52:57.089536       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 19:52:57.094401       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 19:52:57.123079       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 19:52:57.151202       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 19:52:57.153013       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 19:52:57.155707       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 19:52:57.265544       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 19:52:57.271141       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 19:52:57.319655       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 19:52:57.399547       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 19:52:57.412300       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 19:52:57.559117       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 19:52:57.566046       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 19:52:57.573678       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 19:52:57.578669       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 19:52:57.667319       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* 
	* ==> kube-apiserver [90d3393d36f2] <==
	* I0310 19:54:56.313134       1 client.go:360] parsed scheme: "passthrough"
	* I0310 19:54:56.313290       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 19:54:56.313335       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* I0310 19:54:56.792611       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	* I0310 19:54:56.821577       1 controller.go:609] quota admission added evaluator for: deployments.apps
	* I0310 19:54:57.060749       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	* I0310 19:54:57.080852       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	* I0310 19:55:27.231061       1 client.go:360] parsed scheme: "passthrough"
	* I0310 19:55:27.231198       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 19:55:27.231218       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* I0310 19:56:00.330295       1 client.go:360] parsed scheme: "passthrough"
	* I0310 19:56:00.330606       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 19:56:00.330799       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* I0310 19:56:35.496690       1 client.go:360] parsed scheme: "passthrough"
	* I0310 19:56:35.497212       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 19:56:35.497237       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* I0310 19:57:18.769206       1 client.go:360] parsed scheme: "passthrough"
	* I0310 19:57:18.769319       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 19:57:18.769351       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* I0310 19:58:03.404543       1 client.go:360] parsed scheme: "passthrough"
	* I0310 19:58:03.404729       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 19:58:03.404763       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* I0310 19:58:39.977379       1 client.go:360] parsed scheme: "passthrough"
	* I0310 19:58:39.977526       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 19:58:39.977549       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* 
	* ==> kube-controller-manager [3a1856746908] <==
	* W0310 19:54:13.044957       1 node_lifecycle_controller.go:1044] Missing timestamp for Node multinode-20210310194323-6496. Assuming now as a timestamp.
	* W0310 19:54:13.045027       1 node_lifecycle_controller.go:1044] Missing timestamp for Node multinode-20210310194323-6496-m02. Assuming now as a timestamp.
	* I0310 19:54:13.045331       1 node_lifecycle_controller.go:1245] Controller detected that zone  is now in state Normal.
	* I0310 19:54:13.045848       1 taint_manager.go:187] Starting NoExecuteTaintManager
	* I0310 19:54:13.045989       1 event.go:291] "Event occurred" object="multinode-20210310194323-6496-m02" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-20210310194323-6496-m02 event: Registered Node multinode-20210310194323-6496-m02 in Controller"
	* I0310 19:54:13.046030       1 event.go:291] "Event occurred" object="multinode-20210310194323-6496" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-20210310194323-6496 event: Registered Node multinode-20210310194323-6496 in Controller"
	* I0310 19:54:13.067278       1 shared_informer.go:247] Caches are synced for attach detach 
	* I0310 19:54:13.138567       1 shared_informer.go:247] Caches are synced for resource quota 
	* I0310 19:54:13.168265       1 shared_informer.go:247] Caches are synced for deployment 
	* I0310 19:54:13.229943       1 shared_informer.go:247] Caches are synced for disruption 
	* I0310 19:54:13.230028       1 disruption.go:339] Sending events to api server.
	* I0310 19:54:13.319567       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	* I0310 19:54:13.547925       1 request.go:655] Throttling request took 1.0002509s, request: GET:https://192.168.49.97:8443/apis/certificates.k8s.io/v1beta1?timeout=32s
	* I0310 19:54:13.591366       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I0310 19:54:13.591394       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* I0310 19:54:13.620395       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I0310 19:54:14.352480       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	* I0310 19:54:14.352611       1 shared_informer.go:247] Caches are synced for resource quota 
	* I0310 19:54:53.065582       1 event.go:291] "Event occurred" object="multinode-20210310194323-6496-m02" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-20210310194323-6496-m02 status is now: NodeNotReady"
	* I0310 19:54:53.083731       1 event.go:291] "Event occurred" object="kube-system/kube-proxy-tdzlb" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	* I0310 19:54:53.103732       1 event.go:291] "Event occurred" object="kube-system/kindnet-xn5hd" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	* I0310 19:55:12.990175       1 gc_controller.go:78] PodGC is force deleting Pod: kube-system/kube-proxy-gjbjj
	* I0310 19:55:13.011150       1 gc_controller.go:186] Forced deletion of orphaned Pod kube-system/kube-proxy-gjbjj succeeded
	* I0310 19:55:13.011240       1 gc_controller.go:78] PodGC is force deleting Pod: kube-system/kindnet-vvk6s
	* I0310 19:55:13.030563       1 gc_controller.go:186] Forced deletion of orphaned Pod kube-system/kindnet-vvk6s succeeded
	* 
	* ==> kube-controller-manager [c9e9e409c8d1] <==
	* I0310 19:45:49.636652       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I0310 19:45:49.647815       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-jq4n9"
	* E0310 19:45:49.954522       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"90571c9c-bb10-46dd-bca6-868a806d1f29", ResourceVersion:"301", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751002337, loc:(*time.Location)(0x6f31360)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\
"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:v20210220-5b7e6d01\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exis
ts\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0016007c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0016007e0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001600800), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet",
"tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001600820), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile
:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001600860), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolum
eClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0016008c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEP
ersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSourc
e)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:v20210220-5b7e6d01", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001600920)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001600b00)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)},
s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil),
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0002b93e0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0016b23a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0005f3ab0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassNam
e:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0002f3960)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0016b23f0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	* E0310 19:45:49.960412       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"f0801a1f-9579-45c7-b53f-1bdbf26deb30", ResourceVersion:"280", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751002334, loc:(*time.Location)(0x6f31360)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0016006a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0016006c0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v
1.LabelSelector)(0xc0016006e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.
GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc000bf6580), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0016
00700), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolum
eSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001600720), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil
), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.2", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001600760)}}, Resources:v1
.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0002b9380), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), Restart
Policy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0016b2128), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0005f3a40), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runti
meClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0002f3958)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0016b2178)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	* E0310 19:45:50.078475       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"f0801a1f-9579-45c7-b53f-1bdbf26deb30", ResourceVersion:"403", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751002334, loc:(*time.Location)(0x6f31360)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00033ce80), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00033cf00)}, v1.ManagedFieldsEntry{Manager:"kube-
controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00033cf80), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00033cfe0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00033d060), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElast
icBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001613340), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSo
urce)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00033d1a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolume
Source)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00033d220), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil)
, Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.2", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil)
, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00033d400)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:
"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0011f6180), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000f88cf8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000343d50), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int6
4)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00048af48)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000f88d48)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest
version and try again
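The `daemon_controller.go:320` errors in this log are the API server's optimistic concurrency control at work: every object carries a `resourceVersion`, and a status write based on a stale version is rejected with "Operation cannot be fulfilled ... the object has been modified". The controller re-reads the object and retries on its next sync, so these entries are noisy but usually benign. A minimal, self-contained sketch of the retry pattern (illustrative names and a toy in-memory store, not client-go's actual implementation, though client-go ships a similar `retry.RetryOnConflict` helper):

```go
package main

import (
	"errors"
	"fmt"
)

// object mimics an API object whose resourceVersion gates writes.
type object struct {
	version int
	status  string
}

var errConflict = errors.New("the object has been modified; please apply your changes to the latest version and try again")

// store is a toy API server: a write succeeds only when the caller's
// version matches the stored one, and the version is then bumped.
type store struct{ obj object }

func (s *store) get() object { return s.obj }

func (s *store) update(o object) error {
	if o.version != s.obj.version {
		return errConflict
	}
	o.version++
	s.obj = o
	return nil
}

// retryOnConflict re-reads and reapplies the mutation until the write
// lands -- the same shape as a controller retrying on its next sync.
func retryOnConflict(s *store, mutate func(*object)) int {
	attempts := 0
	for {
		attempts++
		o := s.get()
		mutate(&o)
		if err := s.update(o); err == nil {
			return attempts
		}
	}
}

func main() {
	// Another writer already moved the object to version 2, so a write
	// based on version 1 is rejected, as in daemon_controller.go:320.
	s := &store{obj: object{version: 2, status: "NumberReady:0"}}
	stale := object{version: 1, status: "NumberReady:1"}
	fmt.Println(s.update(stale) != nil) // true: stale write is rejected
	attempts := retryOnConflict(s, func(o *object) { o.status = "NumberReady:1" })
	fmt.Println("stored after", attempts, "attempt(s):", s.obj.status)
}
```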
	* I0310 19:45:50.831869       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	* I0310 19:45:50.896046       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-4bkkh"
	* W0310 19:48:31.371909       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20210310194323-6496-m02" does not exist
	* I0310 19:48:31.417909       1 range_allocator.go:373] Set node multinode-20210310194323-6496-m02 PodCIDR to [10.244.1.0/24]
	* I0310 19:48:31.462924       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-tdzlb"
	* I0310 19:48:31.487494       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-xn5hd"
	* I0310 19:48:34.006567       1 event.go:291] "Event occurred" object="multinode-20210310194323-6496-m02" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-20210310194323-6496-m02 event: Registered Node multinode-20210310194323-6496-m02 in Controller"
	* W0310 19:48:34.006885       1 node_lifecycle_controller.go:1044] Missing timestamp for Node multinode-20210310194323-6496-m02. Assuming now as a timestamp.
	* W0310 19:50:25.733439       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20210310194323-6496-m03" does not exist
	* I0310 19:50:25.806588       1 range_allocator.go:373] Set node multinode-20210310194323-6496-m03 PodCIDR to [10.244.2.0/24]
	* I0310 19:50:25.836961       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-vvk6s"
	* I0310 19:50:25.861275       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-gjbjj"
	* E0310 19:50:25.950512       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"f0801a1f-9579-45c7-b53f-1bdbf26deb30", ResourceVersion:"673", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751002334, loc:(*time.Location)(0x6f31360)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000e2b140), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000e2b1a0)}, v1.ManagedFieldsEntry{Manager:"kube-
controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000e2b260), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000e2b320)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000e2b380), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElast
icBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc002030cc0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSo
urce)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000e2b3e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolume
Source)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000e2b440), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil)
, Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.2", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil)
, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000e2b500)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:
"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00122b200), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00199e4f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000343420), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int6
4)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00048b168)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00199e548)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:2, NumberMisscheduled:0, DesiredNumberScheduled:2, NumberReady:2, ObservedGeneration:1, UpdatedNumberScheduled:2, NumberAvailable:2, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest
version and try again
	* E0310 19:50:26.063449       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"f0801a1f-9579-45c7-b53f-1bdbf26deb30", ResourceVersion:"763", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751002334, loc:(*time.Location)(0x6f31360)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0019e7040), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0019e7060)}, v1.ManagedFieldsEntry{Manager:"kube-
controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0019e7080), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0019e70a0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0019e70c0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElast
icBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc002230380), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSo
urce)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0019e70e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolume
Source)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0019e7100), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil)
, Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.2", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil)
, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0019e7140)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:
"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00210d7a0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002386148), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0004f1650), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int6
4)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00011a8e0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002386288)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:2, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:2, ObservedGeneration:1, UpdatedNumberScheduled:2, NumberAvailable:2, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest
version and try again
	* W0310 19:50:29.124544       1 node_lifecycle_controller.go:1044] Missing timestamp for Node multinode-20210310194323-6496-m03. Assuming now as a timestamp.
	* I0310 19:50:29.125091       1 event.go:291] "Event occurred" object="multinode-20210310194323-6496-m03" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-20210310194323-6496-m03 event: Registered Node multinode-20210310194323-6496-m03 in Controller"
	* I0310 19:51:59.250959       1 event.go:291] "Event occurred" object="multinode-20210310194323-6496-m03" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-20210310194323-6496-m03 status is now: NodeNotReady"
	* I0310 19:51:59.316495       1 event.go:291] "Event occurred" object="kube-system/kindnet-vvk6s" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	* I0310 19:51:59.373220       1 event.go:291] "Event occurred" object="kube-system/kube-proxy-gjbjj" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	* I0310 19:52:24.413516       1 event.go:291] "Event occurred" object="multinode-20210310194323-6496-m03" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-20210310194323-6496-m03 event: Removing Node multinode-20210310194323-6496-m03 from Controller"
	* 
	* ==> kube-proxy [92fbed57ee1b] <==
	* I0310 19:54:11.251861       1 node.go:172] Successfully retrieved node IP: 192.168.49.97
	* I0310 19:54:11.251989       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.49.97), assume IPv4 operation
	* W0310 19:54:11.381970       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	* I0310 19:54:11.387397       1 server_others.go:185] Using iptables Proxier.
	* I0310 19:54:11.388533       1 server.go:650] Version: v1.20.2
	* I0310 19:54:11.396048       1 conntrack.go:52] Setting nf_conntrack_max to 131072
	* I0310 19:54:11.396633       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I0310 19:54:11.396871       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I0310 19:54:11.431315       1 config.go:315] Starting service config controller
	* I0310 19:54:11.431362       1 shared_informer.go:240] Waiting for caches to sync for service config
	* I0310 19:54:11.435806       1 config.go:224] Starting endpoint slice config controller
	* I0310 19:54:11.435825       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	* I0310 19:54:11.533588       1 shared_informer.go:247] Caches are synced for service config 
	* I0310 19:54:11.536758       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	* 
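The `Setting nf_conntrack_max to 131072` line reflects how kube-proxy sizes the conntrack table: the larger of `--conntrack-min` and `--conntrack-max-per-core` times the CPU count. With the documented defaults (min 131072, 32768 per core), 131072 is what a node with 4 or fewer cores produces. A small sketch of that arithmetic (an assumption from the defaults; the flag values on this run are not shown in the log):

```go
package main

import "fmt"

// conntrackMax mirrors kube-proxy's sizing rule for nf_conntrack_max:
// max(conntrackMin, perCore * cores).
func conntrackMax(cores, perCore, min int) int {
	if v := cores * perCore; v > min {
		return v
	}
	return min
}

func main() {
	fmt.Println(conntrackMax(4, 32768, 131072)) // 131072, matching the log
	fmt.Println(conntrackMax(8, 32768, 131072)) // 262144 on a larger node
}
```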
	* ==> kube-proxy [b285d1fca513] <==
	* I0310 19:46:00.886724       1 node.go:172] Successfully retrieved node IP: 192.168.49.97
	* I0310 19:46:00.886872       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.49.97), assume IPv4 operation
	* W0310 19:46:01.413264       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	* I0310 19:46:01.431956       1 server_others.go:185] Using iptables Proxier.
	* I0310 19:46:01.470882       1 server.go:650] Version: v1.20.2
	* I0310 19:46:01.488454       1 conntrack.go:52] Setting nf_conntrack_max to 131072
	* I0310 19:46:01.489462       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I0310 19:46:01.490151       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I0310 19:46:01.495243       1 config.go:315] Starting service config controller
	* I0310 19:46:01.495293       1 shared_informer.go:240] Waiting for caches to sync for service config
	* I0310 19:46:01.497592       1 config.go:224] Starting endpoint slice config controller
	* I0310 19:46:01.497608       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	* I0310 19:46:01.595623       1 shared_informer.go:247] Caches are synced for service config 
	* I0310 19:46:01.605149       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	* 
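Both kube-proxy instances log `Unknown proxy mode "", assuming iptables proxy`: no `--proxy-mode` was passed, so the proxier falls back to iptables. A hedged sketch of that fallback (the real selection in `server_others.go` also probes for ipvs kernel modules and handles the legacy userspace mode; this only shows the default-to-iptables shape):

```go
package main

import "fmt"

// chooseProxyMode falls back to the iptables proxier for an empty or
// unrecognized --proxy-mode, as warned at server_others.go:578.
func chooseProxyMode(requested string) string {
	switch requested {
	case "iptables", "ipvs":
		return requested
	default:
		return "iptables" // Unknown proxy mode "...", assuming iptables proxy
	}
}

func main() {
	fmt.Println(chooseProxyMode(""))     // iptables
	fmt.Println(chooseProxyMode("ipvs")) // ipvs
}
```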
	* ==> kube-scheduler [14985ca120af] <==
	* I0310 19:53:49.545362       1 serving.go:331] Generated self-signed cert in-memory
	* W0310 19:53:57.273059       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	* W0310 19:53:57.273182       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	* W0310 19:53:57.273230       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	* W0310 19:53:57.273248       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	* I0310 19:53:57.532325       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	* I0310 19:53:57.532741       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I0310 19:53:57.532994       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I0310 19:53:57.533036       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	* I0310 19:53:57.634029       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* 
	* ==> kube-scheduler [90d50f811eb4] <==
	* E0310 19:45:29.567056       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E0310 19:45:29.567482       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E0310 19:45:29.567739       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E0310 19:45:29.630797       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E0310 19:45:29.638698       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E0310 19:45:29.638851       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E0310 19:45:29.639282       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E0310 19:45:29.639422       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E0310 19:45:29.639549       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E0310 19:45:29.641632       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E0310 19:45:29.641869       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E0310 19:45:29.648693       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E0310 19:45:30.452609       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E0310 19:45:30.492439       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E0310 19:45:30.592901       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E0310 19:45:30.596710       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E0310 19:45:30.631763       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E0310 19:45:30.631916       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E0310 19:45:30.695745       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E0310 19:45:30.732937       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E0310 19:45:30.733815       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E0310 19:45:30.993319       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E0310 19:45:31.206649       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E0310 19:45:31.207550       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* I0310 19:45:33.250793       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-03-10 19:53:13 UTC, end at Wed 2021-03-10 19:59:13 UTC. --
	* Mar 10 19:54:07 multinode-20210310194323-6496 kubelet[1350]: I0310 19:54:07.621238    1350 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-7vcv6" (UniqueName: "kubernetes.io/secret/6247bab9-80ef-438a-806a-0c19ed9c39a2-kube-proxy-token-7vcv6") pod "kube-proxy-7rchm" (UID: "6247bab9-80ef-438a-806a-0c19ed9c39a2")
	* Mar 10 19:54:07 multinode-20210310194323-6496 kubelet[1350]: I0310 19:54:07.621298    1350 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kindnet-token-24kpw" (UniqueName: "kubernetes.io/secret/bdcc23df-7069-4a7a-8cdc-89b12e006bf6-kindnet-token-24kpw") pod "kindnet-pdlkw" (UID: "bdcc23df-7069-4a7a-8cdc-89b12e006bf6")
	* Mar 10 19:54:07 multinode-20210310194323-6496 kubelet[1350]: I0310 19:54:07.621348    1350 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/75d9e0a4-c70e-445c-af14-4db9ef305719-tmp") pod "storage-provisioner" (UID: "75d9e0a4-c70e-445c-af14-4db9ef305719")
	* Mar 10 19:54:07 multinode-20210310194323-6496 kubelet[1350]: I0310 19:54:07.621384    1350 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/59fcc5d5-1d12-409a-88d8-46674adeb0e7-config-volume") pod "coredns-74ff55c5b-jq4n9" (UID: "59fcc5d5-1d12-409a-88d8-46674adeb0e7")
	* Mar 10 19:54:07 multinode-20210310194323-6496 kubelet[1350]: I0310 19:54:07.621418    1350 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/bdcc23df-7069-4a7a-8cdc-89b12e006bf6-lib-modules") pod "kindnet-pdlkw" (UID: "bdcc23df-7069-4a7a-8cdc-89b12e006bf6")
	* Mar 10 19:54:07 multinode-20210310194323-6496 kubelet[1350]: I0310 19:54:07.621450    1350 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/6247bab9-80ef-438a-806a-0c19ed9c39a2-xtables-lock") pod "kube-proxy-7rchm" (UID: "6247bab9-80ef-438a-806a-0c19ed9c39a2")
	* Mar 10 19:54:07 multinode-20210310194323-6496 kubelet[1350]: I0310 19:54:07.621482    1350 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/bdcc23df-7069-4a7a-8cdc-89b12e006bf6-xtables-lock") pod "kindnet-pdlkw" (UID: "bdcc23df-7069-4a7a-8cdc-89b12e006bf6")
	* Mar 10 19:54:07 multinode-20210310194323-6496 kubelet[1350]: I0310 19:54:07.621505    1350 reconciler.go:157] Reconciler: start to sync state
	* Mar 10 19:54:07 multinode-20210310194323-6496 kubelet[1350]: W0310 19:54:07.879699    1350 cni.go:333] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "cf643b2bb13a3e450120ab75f95954ad53014016245e391774617d9f1cea122f"
	* Mar 10 19:54:09 multinode-20210310194323-6496 kubelet[1350]: W0310 19:54:09.572528    1350 pod_container_deletor.go:79] Container "a7277e6fd4c223f002182c7d4e96b0cff589f9efe06d0f06c3cd86f2d4bc1d17" not found in pod's containers
	* Mar 10 19:54:10 multinode-20210310194323-6496 kubelet[1350]: W0310 19:54:10.833481    1350 pod_container_deletor.go:79] Container "08ebf2eca5023fbc85ce5910bb1c6004505e1c689696c003e626b60bb353d239" not found in pod's containers
	* Mar 10 19:54:10 multinode-20210310194323-6496 kubelet[1350]: W0310 19:54:10.935426    1350 pod_container_deletor.go:79] Container "1a7490d85eea48445525136d2d6dff9655f83514e62f80e2ac0215be0cf33b7d" not found in pod's containers
	* Mar 10 19:54:11 multinode-20210310194323-6496 kubelet[1350]: W0310 19:54:11.046932    1350 pod_container_deletor.go:79] Container "8b0098e3e21b446ff259b24f2e80dd8d807d1315059f74805923a52a87911083" not found in pod's containers
	* Mar 10 19:54:11 multinode-20210310194323-6496 kubelet[1350]: E0310 19:54:11.066995    1350 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	* Mar 10 19:54:11 multinode-20210310194323-6496 kubelet[1350]: E0310 19:54:11.067882    1350 helpers.go:713] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	* Mar 10 19:54:21 multinode-20210310194323-6496 kubelet[1350]: E0310 19:54:21.094745    1350 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	* Mar 10 19:54:21 multinode-20210310194323-6496 kubelet[1350]: E0310 19:54:21.094787    1350 helpers.go:713] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	* Mar 10 19:54:31 multinode-20210310194323-6496 kubelet[1350]: E0310 19:54:31.130318    1350 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	* Mar 10 19:54:31 multinode-20210310194323-6496 kubelet[1350]: E0310 19:54:31.130843    1350 helpers.go:713] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	* Mar 10 19:54:32 multinode-20210310194323-6496 kubelet[1350]: I0310 19:54:32.318145    1350 scope.go:95] [topologymanager] RemoveContainer - Container ID: b6d8885bf62c160e117bc39a7539ba8bda5fc9a9c464d555d3430fd17e6464a7
	* Mar 10 19:54:32 multinode-20210310194323-6496 kubelet[1350]: I0310 19:54:32.318711    1350 scope.go:95] [topologymanager] RemoveContainer - Container ID: d68b4f030db8268024181b638748bb60133b7bdc59f7b054be0fee8ee18bcc27
	* Mar 10 19:54:32 multinode-20210310194323-6496 kubelet[1350]: E0310 19:54:32.319195    1350 pod_workers.go:191] Error syncing pod 75d9e0a4-c70e-445c-af14-4db9ef305719 ("storage-provisioner_kube-system(75d9e0a4-c70e-445c-af14-4db9ef305719)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(75d9e0a4-c70e-445c-af14-4db9ef305719)"
	* Mar 10 19:54:46 multinode-20210310194323-6496 kubelet[1350]: I0310 19:54:46.360793    1350 scope.go:95] [topologymanager] RemoveContainer - Container ID: d68b4f030db8268024181b638748bb60133b7bdc59f7b054be0fee8ee18bcc27
	* Mar 10 19:58:40 multinode-20210310194323-6496 kubelet[1350]: W0310 19:58:40.560790    1350 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	* Mar 10 19:58:40 multinode-20210310194323-6496 kubelet[1350]: W0310 19:58:40.562268    1350 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
	* 
	* ==> storage-provisioner [d68b4f030db8] <==
	* I0310 19:54:10.372304       1 storage_provisioner.go:115] Initializing the minikube storage provisioner...
	* F0310 19:54:31.385812       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	* 
	* ==> storage-provisioner [ef63b925bf12] <==
	* I0310 19:54:46.885919       1 storage_provisioner.go:115] Initializing the minikube storage provisioner...
	* I0310 19:54:46.950629       1 storage_provisioner.go:140] Storage provisioner initialized, now starting service!
	* I0310 19:54:46.951192       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
	* I0310 19:55:04.515134       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	* I0310 19:55:04.515701       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_multinode-20210310194323-6496_1eb7ae58-b571-41d3-a0ba-abf69db81f31!
	* I0310 19:55:04.517030       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0619d38e-6e6b-41b3-9ad0-61f49072d40c", APIVersion:"v1", ResourceVersion:"1087", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-20210310194323-6496_1eb7ae58-b571-41d3-a0ba-abf69db81f31 became leader
	* I0310 19:55:04.617961       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_multinode-20210310194323-6496_1eb7ae58-b571-41d3-a0ba-abf69db81f31!
	* 
	* ==> Audit <==
	* |---------|---------------------------------------|---------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| Command |                 Args                  |                Profile                |          User           | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------|---------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| -p      | functional-20210310191609-6496        | functional-20210310191609-6496        | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:24:56 GMT | Wed, 10 Mar 2021 19:24:57 GMT |
	|         | update-context                        |                                       |                         |         |                               |                               |
	|         | --alsologtostderr -v=2                |                                       |                         |         |                               |                               |
	| -p      | functional-20210310191609-6496        | functional-20210310191609-6496        | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:24:56 GMT | Wed, 10 Mar 2021 19:24:57 GMT |
	|         | update-context                        |                                       |                         |         |                               |                               |
	|         | --alsologtostderr -v=2                |                                       |                         |         |                               |                               |
	| -p      | functional-20210310191609-6496        | functional-20210310191609-6496        | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:24:30 GMT | Wed, 10 Mar 2021 19:25:00 GMT |
	|         | logs                                  |                                       |                         |         |                               |                               |
	| -p      | functional-20210310191609-6496        | functional-20210310191609-6496        | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:24:59 GMT | Wed, 10 Mar 2021 19:25:12 GMT |
	|         | logs -n 25                            |                                       |                         |         |                               |                               |
	| -p      | functional-20210310191609-6496        | functional-20210310191609-6496        | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:26:01 GMT | Wed, 10 Mar 2021 19:26:05 GMT |
	|         | service list                          |                                       |                         |         |                               |                               |
	| delete  | -p                                    | functional-20210310191609-6496        | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:29:50 GMT | Wed, 10 Mar 2021 19:30:03 GMT |
	|         | functional-20210310191609-6496        |                                       |                         |         |                               |                               |
	| start   | -p                                    | json-output-20210310193003-6496       | testUser                | v1.18.1 | Wed, 10 Mar 2021 19:30:04 GMT | Wed, 10 Mar 2021 19:33:50 GMT |
	|         | json-output-20210310193003-6496       |                                       |                         |         |                               |                               |
	|         | --output=json --user=testUser         |                                       |                         |         |                               |                               |
	|         | --memory=2200 --wait=true             |                                       |                         |         |                               |                               |
	|         | --driver=docker                       |                                       |                         |         |                               |                               |
	| pause   | -p                                    | json-output-20210310193003-6496       | testUser                | v1.18.1 | Wed, 10 Mar 2021 19:33:51 GMT | Wed, 10 Mar 2021 19:33:54 GMT |
	|         | json-output-20210310193003-6496       |                                       |                         |         |                               |                               |
	|         | --output=json --user=testUser         |                                       |                         |         |                               |                               |
	| unpause | -p                                    | json-output-20210310193003-6496       | testUser                | v1.18.1 | Wed, 10 Mar 2021 19:33:54 GMT | Wed, 10 Mar 2021 19:33:57 GMT |
	|         | json-output-20210310193003-6496       |                                       |                         |         |                               |                               |
	|         | --output=json --user=testUser         |                                       |                         |         |                               |                               |
	| stop    | -p                                    | json-output-20210310193003-6496       | testUser                | v1.18.1 | Wed, 10 Mar 2021 19:33:57 GMT | Wed, 10 Mar 2021 19:34:12 GMT |
	|         | json-output-20210310193003-6496       |                                       |                         |         |                               |                               |
	|         | --output=json --user=testUser         |                                       |                         |         |                               |                               |
	| delete  | -p                                    | json-output-20210310193003-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:34:12 GMT | Wed, 10 Mar 2021 19:34:22 GMT |
	|         | json-output-20210310193003-6496       |                                       |                         |         |                               |                               |
	| delete  | -p                                    | json-output-error-20210310193422-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:34:23 GMT | Wed, 10 Mar 2021 19:34:25 GMT |
	|         | json-output-error-20210310193422-6496 |                                       |                         |         |                               |                               |
	| start   | -p                                    | docker-network-20210310193425-6496    | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:34:25 GMT | Wed, 10 Mar 2021 19:37:14 GMT |
	|         | docker-network-20210310193425-6496    |                                       |                         |         |                               |                               |
	|         | --network=                            |                                       |                         |         |                               |                               |
	| delete  | -p                                    | docker-network-20210310193425-6496    | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:37:15 GMT | Wed, 10 Mar 2021 19:37:26 GMT |
	|         | docker-network-20210310193425-6496    |                                       |                         |         |                               |                               |
	| start   | -p                                    | docker-network-20210310193726-6496    | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:37:26 GMT | Wed, 10 Mar 2021 19:40:13 GMT |
	|         | docker-network-20210310193726-6496    |                                       |                         |         |                               |                               |
	|         | --network=bridge                      |                                       |                         |         |                               |                               |
	| delete  | -p                                    | docker-network-20210310193726-6496    | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:40:14 GMT | Wed, 10 Mar 2021 19:40:24 GMT |
	|         | docker-network-20210310193726-6496    |                                       |                         |         |                               |                               |
	| start   | -p                                    | existing-network-20210310194026-6496  | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:40:27 GMT | Wed, 10 Mar 2021 19:43:11 GMT |
	|         | existing-network-20210310194026-6496  |                                       |                         |         |                               |                               |
	|         | --network=existing-network            |                                       |                         |         |                               |                               |
	| delete  | -p                                    | existing-network-20210310194026-6496  | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:43:11 GMT | Wed, 10 Mar 2021 19:43:22 GMT |
	|         | existing-network-20210310194026-6496  |                                       |                         |         |                               |                               |
	| start   | -p                                    | multinode-20210310194323-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:43:23 GMT | Wed, 10 Mar 2021 19:49:13 GMT |
	|         | multinode-20210310194323-6496         |                                       |                         |         |                               |                               |
	|         | --wait=true --memory=2200             |                                       |                         |         |                               |                               |
	|         | --nodes=2 -v=8                        |                                       |                         |         |                               |                               |
	|         | --alsologtostderr                     |                                       |                         |         |                               |                               |
	|         | --driver=docker                       |                                       |                         |         |                               |                               |
	| node    | add -p                                | multinode-20210310194323-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:49:19 GMT | Wed, 10 Mar 2021 19:51:08 GMT |
	|         | multinode-20210310194323-6496         |                                       |                         |         |                               |                               |
	|         | -v 3 --alsologtostderr                |                                       |                         |         |                               |                               |
	| profile | list --output json                    | minikube                              | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:51:15 GMT | Wed, 10 Mar 2021 19:51:17 GMT |
	| -p      | multinode-20210310194323-6496         | multinode-20210310194323-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:51:18 GMT | Wed, 10 Mar 2021 19:51:21 GMT |
	|         | node stop m03                         |                                       |                         |         |                               |                               |
	| -p      | multinode-20210310194323-6496         | multinode-20210310194323-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:51:33 GMT | Wed, 10 Mar 2021 19:52:13 GMT |
	|         | node start m03                        |                                       |                         |         |                               |                               |
	|         | --alsologtostderr                     |                                       |                         |         |                               |                               |
	| -p      | multinode-20210310194323-6496         | multinode-20210310194323-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:52:21 GMT | Wed, 10 Mar 2021 19:52:39 GMT |
	|         | node delete m03                       |                                       |                         |         |                               |                               |
	| -p      | multinode-20210310194323-6496         | multinode-20210310194323-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:52:44 GMT | Wed, 10 Mar 2021 19:53:02 GMT |
	|         | stop                                  |                                       |                         |         |                               |                               |
	|---------|---------------------------------------|---------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/03/10 19:53:06
	* Running on machine: windows-server-1
	* Binary: Built with gc go1.16 for windows/amd64
	* Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	* I0310 19:53:06.460346    3088 out.go:239] Setting OutFile to fd 3012 ...
	* I0310 19:53:06.461366    3088 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 19:53:06.461366    3088 out.go:252] Setting ErrFile to fd 2592...
	* I0310 19:53:06.461366    3088 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 19:53:06.473364    3088 out.go:246] Setting JSON to false
	* I0310 19:53:06.475367    3088 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":31452,"bootTime":1615374534,"procs":108,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	* W0310 19:53:06.475367    3088 start.go:116] gopshost.Virtualization returned error: not implemented yet
	* I0310 19:53:06.482307    3088 out.go:129] * [multinode-20210310194323-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	* I0310 19:53:06.487244    3088 out.go:129]   - MINIKUBE_LOCATION=10722
	* I0310 19:53:06.491719    3088 driver.go:323] Setting default libvirt URI to qemu:///system
	* I0310 19:53:06.900224    3088 docker.go:119] docker version: linux-20.10.2
	* I0310 19:53:06.909443    3088 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 19:53:07.697182    3088 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:46 SystemTime:2021-03-10 19:53:07.3321994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 19:53:07.702222    3088 out.go:129] * Using the docker driver based on existing profile
	* I0310 19:53:07.702496    3088 start.go:276] selected driver: docker
	* I0310 19:53:07.702496    3088 start.go:718] validating driver "docker" against &{Name:multinode-20210310194323-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:multinode-20210310194323-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.97 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.20.2 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true}
	* I0310 19:53:07.702496    3088 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	* I0310 19:53:07.722578    3088 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 19:53:08.461485    3088 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:46 SystemTime:2021-03-10 19:53:08.1444659 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 19:53:10.280449    3088 start_flags.go:717] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	* I0310 19:53:10.280449    3088 start_flags.go:398] config:
	* {Name:multinode-20210310194323-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:multinode-20210310194323-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISock
et: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.97 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.20.2 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true}
	* I0310 19:53:10.284476    3088 out.go:129] * Starting control plane node multinode-20210310194323-6496 in cluster multinode-20210310194323-6496
	* I0310 19:53:10.776656    3088 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	* I0310 19:53:10.777152    3088 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	* I0310 19:53:10.777152    3088 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 19:53:10.777152    3088 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 19:53:10.777600    3088 cache.go:54] Caching tarball of preloaded images
	* I0310 19:53:10.778036    3088 preload.go:131] Found C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	* I0310 19:53:10.778036    3088 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on docker
	* I0310 19:53:10.778409    3088 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\multinode-20210310194323-6496\config.json ...
	* I0310 19:53:10.783007    3088 cache.go:185] Successfully downloaded all kic artifacts
	* I0310 19:53:10.783378    3088 start.go:313] acquiring machines lock for multinode-20210310194323-6496: {Name:mkc0311afbbefcdbd0a19dc4fb181202ea9bd5e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:53:10.783799    3088 start.go:317] acquired machines lock for "multinode-20210310194323-6496" in 420.4µs
	* I0310 19:53:10.783799    3088 start.go:93] Skipping create...Using existing machine configuration
	* I0310 19:53:10.784170    3088 fix.go:55] fixHost starting: 
	* I0310 19:53:10.799524    3088 cli_runner.go:115] Run: docker container inspect multinode-20210310194323-6496 --format=
	* I0310 19:53:11.279536    3088 fix.go:108] recreateIfNeeded on multinode-20210310194323-6496: state=Stopped err=<nil>
	* W0310 19:53:11.280031    3088 fix.go:134] unexpected machine state, will restart: <nil>
	* I0310 19:53:11.283170    3088 out.go:129] * Restarting existing docker container for "multinode-20210310194323-6496" ...
	* I0310 19:53:11.291628    3088 cli_runner.go:115] Run: docker start multinode-20210310194323-6496
	* I0310 19:53:12.921527    3088 cli_runner.go:168] Completed: docker start multinode-20210310194323-6496: (1.6299017s)
	* I0310 19:53:12.929715    3088 cli_runner.go:115] Run: docker container inspect multinode-20210310194323-6496 --format=
	* I0310 19:53:13.478245    3088 kic.go:410] container "multinode-20210310194323-6496" state is running.
	* I0310 19:53:13.492460    3088 cli_runner.go:115] Run: docker container inspect -f "" multinode-20210310194323-6496
	* I0310 19:53:14.048777    3088 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\multinode-20210310194323-6496\config.json ...
	* I0310 19:53:14.052908    3088 machine.go:88] provisioning docker machine ...
	* I0310 19:53:14.053572    3088 ubuntu.go:169] provisioning hostname "multinode-20210310194323-6496"
	* I0310 19:53:14.065366    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	* I0310 19:53:14.572551    3088 main.go:121] libmachine: Using SSH client type: native
	* I0310 19:53:14.575546    3088 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55054 <nil> <nil>}
	* I0310 19:53:14.575747    3088 main.go:121] libmachine: About to run SSH command:
	* sudo hostname multinode-20210310194323-6496 && echo "multinode-20210310194323-6496" | sudo tee /etc/hostname
	* I0310 19:53:14.856181    3088 main.go:121] libmachine: SSH cmd err, output: <nil>: multinode-20210310194323-6496
	* 
	* I0310 19:53:14.863867    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	* I0310 19:53:15.366157    3088 main.go:121] libmachine: Using SSH client type: native
	* I0310 19:53:15.366492    3088 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55054 <nil> <nil>}
	* I0310 19:53:15.366492    3088 main.go:121] libmachine: About to run SSH command:
	* 
	* 		if ! grep -xq '.*\smultinode-20210310194323-6496' /etc/hosts; then
	* 			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	* 				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20210310194323-6496/g' /etc/hosts;
	* 			else 
	* 				echo '127.0.1.1 multinode-20210310194323-6496' | sudo tee -a /etc/hosts; 
	* 			fi
	* 		fi
	* I0310 19:53:15.603102    3088 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	* I0310 19:53:15.603102    3088 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	* I0310 19:53:15.603102    3088 ubuntu.go:177] setting up certificates
	* I0310 19:53:15.603102    3088 provision.go:83] configureAuth start
	* I0310 19:53:15.616266    3088 cli_runner.go:115] Run: docker container inspect -f "" multinode-20210310194323-6496
	* I0310 19:53:16.087692    3088 provision.go:137] copyHostCerts
	* I0310 19:53:16.088374    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\key.pem -> C:\Users\jenkins\.minikube/key.pem
	* I0310 19:53:16.088991    3088 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	* I0310 19:53:16.088991    3088 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	* I0310 19:53:16.089466    3088 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	* I0310 19:53:16.092727    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\ca.pem -> C:\Users\jenkins\.minikube/ca.pem
	* I0310 19:53:16.093059    3088 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	* I0310 19:53:16.093059    3088 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	* I0310 19:53:16.093498    3088 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	* I0310 19:53:16.097086    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\cert.pem -> C:\Users\jenkins\.minikube/cert.pem
	* I0310 19:53:16.097409    3088 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	* I0310 19:53:16.097409    3088 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	* I0310 19:53:16.097803    3088 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	* I0310 19:53:16.101322    3088 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.multinode-20210310194323-6496 san=[192.168.49.97 127.0.0.1 localhost 127.0.0.1 minikube multinode-20210310194323-6496]
	* I0310 19:53:16.546945    3088 provision.go:165] copyRemoteCerts
	* I0310 19:53:16.569677    3088 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	* I0310 19:53:16.584994    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	* I0310 19:53:17.039604    3088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55054 SSHKeyPath:C:\Users\jenkins\.minikube\machines\multinode-20210310194323-6496\id_rsa Username:docker}
	* I0310 19:53:17.184343    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\machines\server.pem -> /etc/docker/server.pem
	* I0310 19:53:17.185568    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1261 bytes)
	* I0310 19:53:17.236048    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	* I0310 19:53:17.236502    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	* I0310 19:53:17.288357    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	* I0310 19:53:17.289452    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	* I0310 19:53:17.343661    3088 provision.go:86] duration metric: configureAuth took 1.7405618s
	* I0310 19:53:17.343661    3088 ubuntu.go:193] setting minikube options for container-runtime
	* I0310 19:53:17.353945    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	* I0310 19:53:17.808748    3088 main.go:121] libmachine: Using SSH client type: native
	* I0310 19:53:17.809160    3088 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55054 <nil> <nil>}
	* I0310 19:53:17.809465    3088 main.go:121] libmachine: About to run SSH command:
	* df --output=fstype / | tail -n 1
	* I0310 19:53:18.029581    3088 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	* 
	* I0310 19:53:18.029581    3088 ubuntu.go:71] root file system type: overlay
	* I0310 19:53:18.029971    3088 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	* I0310 19:53:18.038079    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	* I0310 19:53:18.532320    3088 main.go:121] libmachine: Using SSH client type: native
	* I0310 19:53:18.533204    3088 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55054 <nil> <nil>}
	* I0310 19:53:18.533490    3088 main.go:121] libmachine: About to run SSH command:
	* sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP \$MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* " | sudo tee /lib/systemd/system/docker.service.new
	* I0310 19:53:18.768030    3088 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP $MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* 
	* I0310 19:53:18.778054    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	* I0310 19:53:19.267592    3088 main.go:121] libmachine: Using SSH client type: native
	* I0310 19:53:19.267977    3088 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55054 <nil> <nil>}
	* I0310 19:53:19.267977    3088 main.go:121] libmachine: About to run SSH command:
	* sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	* I0310 19:53:19.499069    3088 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	* I0310 19:53:19.499283    3088 machine.go:91] provisioned docker machine in 5.446384s
	* I0310 19:53:19.499283    3088 start.go:267] post-start starting for "multinode-20210310194323-6496" (driver="docker")
	* I0310 19:53:19.499283    3088 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	* I0310 19:53:19.512851    3088 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	* I0310 19:53:19.522702    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	* I0310 19:53:19.982498    3088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55054 SSHKeyPath:C:\Users\jenkins\.minikube\machines\multinode-20210310194323-6496\id_rsa Username:docker}
	* I0310 19:53:20.135431    3088 ssh_runner.go:149] Run: cat /etc/os-release
	* I0310 19:53:20.151414    3088 command_runner.go:124] > NAME="Ubuntu"
	* I0310 19:53:20.152300    3088 command_runner.go:124] > VERSION="20.04.1 LTS (Focal Fossa)"
	* I0310 19:53:20.152300    3088 command_runner.go:124] > ID=ubuntu
	* I0310 19:53:20.152300    3088 command_runner.go:124] > ID_LIKE=debian
	* I0310 19:53:20.152300    3088 command_runner.go:124] > PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* I0310 19:53:20.152300    3088 command_runner.go:124] > VERSION_ID="20.04"
	* I0310 19:53:20.152300    3088 command_runner.go:124] > HOME_URL="https://www.ubuntu.com/"
	* I0310 19:53:20.152300    3088 command_runner.go:124] > SUPPORT_URL="https://help.ubuntu.com/"
	* I0310 19:53:20.152300    3088 command_runner.go:124] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	* I0310 19:53:20.152300    3088 command_runner.go:124] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	* I0310 19:53:20.152300    3088 command_runner.go:124] > VERSION_CODENAME=focal
	* I0310 19:53:20.152300    3088 command_runner.go:124] > UBUNTU_CODENAME=focal
	* I0310 19:53:20.152671    3088 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	* I0310 19:53:20.152671    3088 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	* I0310 19:53:20.152671    3088 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	* I0310 19:53:20.152671    3088 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	* I0310 19:53:20.152671    3088 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	* I0310 19:53:20.153042    3088 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	* I0310 19:53:20.155817    3088 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	* I0310 19:53:20.155817    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> /etc/test/nested/copy/2512/hosts
	* I0310 19:53:20.157097    3088 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	* I0310 19:53:20.157097    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> /etc/test/nested/copy/4452/hosts
	* I0310 19:53:20.171870    3088 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	* I0310 19:53:20.199848    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	* I0310 19:53:20.255203    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	* I0310 19:53:20.310037    3088 start.go:270] post-start completed in 810.7555ms
	* I0310 19:53:20.324263    3088 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	* I0310 19:53:20.333096    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	* I0310 19:53:20.810057    3088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55054 SSHKeyPath:C:\Users\jenkins\.minikube\machines\multinode-20210310194323-6496\id_rsa Username:docker}
	* I0310 19:53:20.959921    3088 command_runner.go:124] > 22%
	* I0310 19:53:20.959921    3088 fix.go:57] fixHost completed within 10.1761386s
	* I0310 19:53:20.959921    3088 start.go:80] releasing machines lock for "multinode-20210310194323-6496", held for 10.1761386s
	* I0310 19:53:20.967686    3088 cli_runner.go:115] Run: docker container inspect -f "" multinode-20210310194323-6496
	* I0310 19:53:21.447462    3088 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	* I0310 19:53:21.457187    3088 ssh_runner.go:149] Run: systemctl --version
	* I0310 19:53:21.458347    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	* I0310 19:53:21.465494    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	* I0310 19:53:21.937385    3088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55054 SSHKeyPath:C:\Users\jenkins\.minikube\machines\multinode-20210310194323-6496\id_rsa Username:docker}
	* I0310 19:53:21.944494    3088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55054 SSHKeyPath:C:\Users\jenkins\.minikube\machines\multinode-20210310194323-6496\id_rsa Username:docker}
	* I0310 19:53:22.095359    3088 command_runner.go:124] > systemd 245 (245.4-4ubuntu3.4)
	* I0310 19:53:22.095899    3088 command_runner.go:124] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	* I0310 19:53:22.108547    3088 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	* I0310 19:53:22.188280    3088 command_runner.go:124] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	* I0310 19:53:22.189272    3088 command_runner.go:124] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	* I0310 19:53:22.189272    3088 command_runner.go:124] > <H1>302 Moved</H1>
	* I0310 19:53:22.189272    3088 command_runner.go:124] > The document has moved
	* I0310 19:53:22.189272    3088 command_runner.go:124] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	* I0310 19:53:22.189272    3088 command_runner.go:124] > </BODY></HTML>
	* I0310 19:53:22.209824    3088 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	* I0310 19:53:22.246160    3088 command_runner.go:124] > # /lib/systemd/system/docker.service
	* I0310 19:53:22.247253    3088 command_runner.go:124] > [Unit]
	* I0310 19:53:22.247253    3088 command_runner.go:124] > Description=Docker Application Container Engine
	* I0310 19:53:22.247253    3088 command_runner.go:124] > Documentation=https://docs.docker.com
	* I0310 19:53:22.247253    3088 command_runner.go:124] > BindsTo=containerd.service
	* I0310 19:53:22.247253    3088 command_runner.go:124] > After=network-online.target firewalld.service containerd.service
	* I0310 19:53:22.247253    3088 command_runner.go:124] > Wants=network-online.target
	* I0310 19:53:22.247253    3088 command_runner.go:124] > Requires=docker.socket
	* I0310 19:53:22.247253    3088 command_runner.go:124] > StartLimitBurst=3
	* I0310 19:53:22.247253    3088 command_runner.go:124] > StartLimitIntervalSec=60
	* I0310 19:53:22.247253    3088 command_runner.go:124] > [Service]
	* I0310 19:53:22.247253    3088 command_runner.go:124] > Type=notify
	* I0310 19:53:22.247253    3088 command_runner.go:124] > Restart=on-failure
	* I0310 19:53:22.247390    3088 command_runner.go:124] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* I0310 19:53:22.247390    3088 command_runner.go:124] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* I0310 19:53:22.247390    3088 command_runner.go:124] > # here is to clear out that command inherited from the base configuration. Without this,
	* I0310 19:53:22.247390    3088 command_runner.go:124] > # the command from the base configuration and the command specified here are treated as
	* I0310 19:53:22.247390    3088 command_runner.go:124] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* I0310 19:53:22.247390    3088 command_runner.go:124] > # will catch this invalid input and refuse to start the service with an error like:
	* I0310 19:53:22.247390    3088 command_runner.go:124] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* I0310 19:53:22.247390    3088 command_runner.go:124] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* I0310 19:53:22.247390    3088 command_runner.go:124] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* I0310 19:53:22.247390    3088 command_runner.go:124] > ExecStart=
	* I0310 19:53:22.247717    3088 command_runner.go:124] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* I0310 19:53:22.247717    3088 command_runner.go:124] > ExecReload=/bin/kill -s HUP $MAINPID
	* I0310 19:53:22.247717    3088 command_runner.go:124] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	* I0310 19:53:22.247717    3088 command_runner.go:124] > # in the kernel. We recommend using cgroups to do container-local accounting.
	* I0310 19:53:22.247717    3088 command_runner.go:124] > LimitNOFILE=infinity
	* I0310 19:53:22.247717    3088 command_runner.go:124] > LimitNPROC=infinity
	* I0310 19:53:22.247717    3088 command_runner.go:124] > LimitCORE=infinity
	* I0310 19:53:22.247717    3088 command_runner.go:124] > # Uncomment TasksMax if your systemd version supports it.
	* I0310 19:53:22.247717    3088 command_runner.go:124] > # Only systemd 226 and above support this version.
	* I0310 19:53:22.247717    3088 command_runner.go:124] > TasksMax=infinity
	* I0310 19:53:22.247717    3088 command_runner.go:124] > TimeoutStartSec=0
	* I0310 19:53:22.247717    3088 command_runner.go:124] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	* I0310 19:53:22.247717    3088 command_runner.go:124] > Delegate=yes
	* I0310 19:53:22.247717    3088 command_runner.go:124] > # kill only the docker process, not all processes in the cgroup
	* I0310 19:53:22.247717    3088 command_runner.go:124] > KillMode=process
	* I0310 19:53:22.247717    3088 command_runner.go:124] > [Install]
	* I0310 19:53:22.247717    3088 command_runner.go:124] > WantedBy=multi-user.target
	* I0310 19:53:22.248052    3088 cruntime.go:206] skipping containerd shutdown because we are bound to it
	* I0310 19:53:22.258706    3088 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	* I0310 19:53:22.298037    3088 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	* image-endpoint: unix:///var/run/dockershim.sock
	* " | sudo tee /etc/crictl.yaml"
	* I0310 19:53:22.339651    3088 command_runner.go:124] > runtime-endpoint: unix:///var/run/dockershim.sock
	* I0310 19:53:22.339651    3088 command_runner.go:124] > image-endpoint: unix:///var/run/dockershim.sock
	* I0310 19:53:22.355029    3088 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	* I0310 19:53:22.393868    3088 command_runner.go:124] > # /lib/systemd/system/docker.service
	* I0310 19:53:22.394213    3088 command_runner.go:124] > [Unit]
	* I0310 19:53:22.394213    3088 command_runner.go:124] > Description=Docker Application Container Engine
	* I0310 19:53:22.394213    3088 command_runner.go:124] > Documentation=https://docs.docker.com
	* I0310 19:53:22.394213    3088 command_runner.go:124] > BindsTo=containerd.service
	* I0310 19:53:22.394213    3088 command_runner.go:124] > After=network-online.target firewalld.service containerd.service
	* I0310 19:53:22.394213    3088 command_runner.go:124] > Wants=network-online.target
	* I0310 19:53:22.394213    3088 command_runner.go:124] > Requires=docker.socket
	* I0310 19:53:22.394213    3088 command_runner.go:124] > StartLimitBurst=3
	* I0310 19:53:22.394213    3088 command_runner.go:124] > StartLimitIntervalSec=60
	* I0310 19:53:22.394213    3088 command_runner.go:124] > [Service]
	* I0310 19:53:22.394213    3088 command_runner.go:124] > Type=notify
	* I0310 19:53:22.394213    3088 command_runner.go:124] > Restart=on-failure
	* I0310 19:53:22.394414    3088 command_runner.go:124] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* I0310 19:53:22.394414    3088 command_runner.go:124] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* I0310 19:53:22.394414    3088 command_runner.go:124] > # here is to clear out that command inherited from the base configuration. Without this,
	* I0310 19:53:22.394414    3088 command_runner.go:124] > # the command from the base configuration and the command specified here are treated as
	* I0310 19:53:22.394657    3088 command_runner.go:124] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* I0310 19:53:22.394657    3088 command_runner.go:124] > # will catch this invalid input and refuse to start the service with an error like:
	* I0310 19:53:22.394657    3088 command_runner.go:124] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* I0310 19:53:22.394657    3088 command_runner.go:124] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* I0310 19:53:22.394657    3088 command_runner.go:124] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* I0310 19:53:22.394657    3088 command_runner.go:124] > ExecStart=
	* I0310 19:53:22.394657    3088 command_runner.go:124] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* I0310 19:53:22.394657    3088 command_runner.go:124] > ExecReload=/bin/kill -s HUP $MAINPID
	* I0310 19:53:22.395254    3088 command_runner.go:124] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	* I0310 19:53:22.395254    3088 command_runner.go:124] > # in the kernel. We recommend using cgroups to do container-local accounting.
	* I0310 19:53:22.395254    3088 command_runner.go:124] > LimitNOFILE=infinity
	* I0310 19:53:22.395254    3088 command_runner.go:124] > LimitNPROC=infinity
	* I0310 19:53:22.395254    3088 command_runner.go:124] > LimitCORE=infinity
	* I0310 19:53:22.395254    3088 command_runner.go:124] > # Uncomment TasksMax if your systemd version supports it.
	* I0310 19:53:22.395529    3088 command_runner.go:124] > # Only systemd 226 and above support this version.
	* I0310 19:53:22.395529    3088 command_runner.go:124] > TasksMax=infinity
	* I0310 19:53:22.395529    3088 command_runner.go:124] > TimeoutStartSec=0
	* I0310 19:53:22.395529    3088 command_runner.go:124] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	* I0310 19:53:22.395529    3088 command_runner.go:124] > Delegate=yes
	* I0310 19:53:22.395529    3088 command_runner.go:124] > # kill only the docker process, not all processes in the cgroup
	* I0310 19:53:22.395529    3088 command_runner.go:124] > KillMode=process
	* I0310 19:53:22.395529    3088 command_runner.go:124] > [Install]
	* I0310 19:53:22.395529    3088 command_runner.go:124] > WantedBy=multi-user.target
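The unit dump above shows the drop-in convention the comments describe: an empty `ExecStart=` first clears the command inherited from the base unit, then the real command is set, since systemd rejects two accumulated `ExecStart=` values for non-oneshot services. A hypothetical minimal drop-in illustrating just that pattern (written to a temp dir, not /etc/systemd; the dockerd flags are abbreviated, not those from the log):

```shell
# Hypothetical drop-in showing the clear-then-set ExecStart pattern above.
DROPIN_DIR="$(mktemp -d)"
cat > "$DROPIN_DIR/10-override.conf" <<'EOF'
[Service]
# An empty ExecStart= clears the value inherited from the base unit.
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF
# Exactly two ExecStart= lines: the clearing one and the effective one.
grep -c '^ExecStart=' "$DROPIN_DIR/10-override.conf"
```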
	* I0310 19:53:22.411160    3088 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	* I0310 19:53:22.609386    3088 ssh_runner.go:149] Run: sudo systemctl start docker
	* I0310 19:53:22.651578    3088 ssh_runner.go:149] Run: docker version --format 
	* I0310 19:53:22.800206    3088 command_runner.go:124] > 20.10.3
	* I0310 19:53:22.804968    3088 out.go:150] * Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	* I0310 19:53:22.813122    3088 cli_runner.go:115] Run: docker exec -t multinode-20210310194323-6496 dig +short host.docker.internal
	* I0310 19:53:23.523919    3088 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	* I0310 19:53:23.536231    3088 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	* I0310 19:53:23.556431    3088 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	* I0310 19:53:23.600727    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	* I0310 19:53:24.076480    3088 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 19:53:24.077030    3088 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 19:53:24.086261    3088 ssh_runner.go:149] Run: docker images --format :
	* I0310 19:53:24.196012    3088 command_runner.go:124] > kindest/kindnetd:v20210220-5b7e6d01
	* I0310 19:53:24.196012    3088 command_runner.go:124] > k8s.gcr.io/kube-proxy:v1.20.2
	* I0310 19:53:24.196012    3088 command_runner.go:124] > k8s.gcr.io/kube-controller-manager:v1.20.2
	* I0310 19:53:24.196012    3088 command_runner.go:124] > k8s.gcr.io/kube-apiserver:v1.20.2
	* I0310 19:53:24.196012    3088 command_runner.go:124] > k8s.gcr.io/kube-scheduler:v1.20.2
	* I0310 19:53:24.196012    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210105233232-2512
	* I0310 19:53:24.196012    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210106002159-6856
	* I0310 19:53:24.196012    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210106011107-6492
	* I0310 19:53:24.196012    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210106215525-1984
	* I0310 19:53:24.196012    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210107002220-9088
	* I0310 19:53:24.196012    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210107190945-8748
	* I0310 19:53:24.196012    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210112045103-7160
	* I0310 19:53:24.196012    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210114204234-6692
	* I0310 19:53:24.196012    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210115023213-8464
	* I0310 19:53:24.196395    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210115191024-3516
	* I0310 19:53:24.196395    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210119220838-6552
	* I0310 19:53:24.196395    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210120022529-1140
	* I0310 19:53:24.196395    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210120175851-7432
	* I0310 19:53:24.196395    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210120214442-10992
	* I0310 19:53:24.196395    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210120231122-7024
	* I0310 19:53:24.196395    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210123004019-5372
	* I0310 19:53:24.196395    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210126212539-5172
	* I0310 19:53:24.196395    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210128021318-232
	* I0310 19:53:24.196395    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210212145109-352
	* I0310 19:53:24.196395    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210213143925-7440
	* I0310 19:53:24.196395    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210219145454-9520
	* I0310 19:53:24.196395    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210219220622-3920
	* I0310 19:53:24.196395    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210220004129-7452
	* I0310 19:53:24.196395    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210224014800-800
	* I0310 19:53:24.196761    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210225231842-5736
	* I0310 19:53:24.196761    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210301195830-5700
	* I0310 19:53:24.196761    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210303214129-4588
	* I0310 19:53:24.196761    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210304002630-1156
	* I0310 19:53:24.196761    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210304184021-4052
	* I0310 19:53:24.196761    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210306072141-12056
	* I0310 19:53:24.196761    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210308233820-5396
	* I0310 19:53:24.196761    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210309234032-4944
	* I0310 19:53:24.196761    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210310083645-5040
	* I0310 19:53:24.196761    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210310191609-6496
	* I0310 19:53:24.196761    3088 command_runner.go:124] > kubernetesui/dashboard:v2.1.0
	* I0310 19:53:24.196761    3088 command_runner.go:124] > gcr.io/k8s-minikube/storage-provisioner:v4
	* I0310 19:53:24.196761    3088 command_runner.go:124] > k8s.gcr.io/etcd:3.4.13-0
	* I0310 19:53:24.196761    3088 command_runner.go:124] > k8s.gcr.io/coredns:1.7.0
	* I0310 19:53:24.196761    3088 command_runner.go:124] > kubernetesui/metrics-scraper:v1.0.4
	* I0310 19:53:24.196761    3088 command_runner.go:124] > k8s.gcr.io/pause:3.2
	* I0310 19:53:24.197031    3088 docker.go:423] Got preloaded images: -- stdout --
	* kindest/kindnetd:v20210220-5b7e6d01
	* k8s.gcr.io/kube-proxy:v1.20.2
	* k8s.gcr.io/kube-controller-manager:v1.20.2
	* k8s.gcr.io/kube-apiserver:v1.20.2
	* k8s.gcr.io/kube-scheduler:v1.20.2
	* minikube-local-cache-test:functional-20210105233232-2512
	* minikube-local-cache-test:functional-20210106002159-6856
	* minikube-local-cache-test:functional-20210106011107-6492
	* minikube-local-cache-test:functional-20210106215525-1984
	* minikube-local-cache-test:functional-20210107002220-9088
	* minikube-local-cache-test:functional-20210107190945-8748
	* minikube-local-cache-test:functional-20210112045103-7160
	* minikube-local-cache-test:functional-20210114204234-6692
	* minikube-local-cache-test:functional-20210115023213-8464
	* minikube-local-cache-test:functional-20210115191024-3516
	* minikube-local-cache-test:functional-20210119220838-6552
	* minikube-local-cache-test:functional-20210120022529-1140
	* minikube-local-cache-test:functional-20210120175851-7432
	* minikube-local-cache-test:functional-20210120214442-10992
	* minikube-local-cache-test:functional-20210120231122-7024
	* minikube-local-cache-test:functional-20210123004019-5372
	* minikube-local-cache-test:functional-20210126212539-5172
	* minikube-local-cache-test:functional-20210128021318-232
	* minikube-local-cache-test:functional-20210212145109-352
	* minikube-local-cache-test:functional-20210213143925-7440
	* minikube-local-cache-test:functional-20210219145454-9520
	* minikube-local-cache-test:functional-20210219220622-3920
	* minikube-local-cache-test:functional-20210220004129-7452
	* minikube-local-cache-test:functional-20210224014800-800
	* minikube-local-cache-test:functional-20210225231842-5736
	* minikube-local-cache-test:functional-20210301195830-5700
	* minikube-local-cache-test:functional-20210303214129-4588
	* minikube-local-cache-test:functional-20210304002630-1156
	* minikube-local-cache-test:functional-20210304184021-4052
	* minikube-local-cache-test:functional-20210306072141-12056
	* minikube-local-cache-test:functional-20210308233820-5396
	* minikube-local-cache-test:functional-20210309234032-4944
	* minikube-local-cache-test:functional-20210310083645-5040
	* minikube-local-cache-test:functional-20210310191609-6496
	* kubernetesui/dashboard:v2.1.0
	* gcr.io/k8s-minikube/storage-provisioner:v4
	* k8s.gcr.io/etcd:3.4.13-0
	* k8s.gcr.io/coredns:1.7.0
	* kubernetesui/metrics-scraper:v1.0.4
	* k8s.gcr.io/pause:3.2
	* 
	* -- /stdout --
	* I0310 19:53:24.197031    3088 docker.go:360] Images already preloaded, skipping extraction
	* I0310 19:53:24.206905    3088 ssh_runner.go:149] Run: docker images --format :
	* I0310 19:53:24.306116    3088 command_runner.go:124] > kindest/kindnetd:v20210220-5b7e6d01
	* I0310 19:53:24.306116    3088 command_runner.go:124] > k8s.gcr.io/kube-proxy:v1.20.2
	* I0310 19:53:24.306116    3088 command_runner.go:124] > k8s.gcr.io/kube-controller-manager:v1.20.2
	* I0310 19:53:24.306116    3088 command_runner.go:124] > k8s.gcr.io/kube-apiserver:v1.20.2
	* I0310 19:53:24.306116    3088 command_runner.go:124] > k8s.gcr.io/kube-scheduler:v1.20.2
	* I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210105233232-2512
	* I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210106002159-6856
	* I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210106011107-6492
	* I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210106215525-1984
	* I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210107002220-9088
	* I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210107190945-8748
	* I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210112045103-7160
	* I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210114204234-6692
	* I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210115023213-8464
	* I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210115191024-3516
	* I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210119220838-6552
	* I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210120022529-1140
	* I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210120175851-7432
	* I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210120214442-10992
	* I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210120231122-7024
	* I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210123004019-5372
	* I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210126212539-5172
	* I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210128021318-232
	* I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210212145109-352
	* I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210213143925-7440
	* I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210219145454-9520
	* I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210219220622-3920
	* I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210220004129-7452
	* I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210224014800-800
	* I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210225231842-5736
	* I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210301195830-5700
	* I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210303214129-4588
	* I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210304002630-1156
	* I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210304184021-4052
	* I0310 19:53:24.306776    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210306072141-12056
	* I0310 19:53:24.307180    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210308233820-5396
	* I0310 19:53:24.307180    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210309234032-4944
	* I0310 19:53:24.307180    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210310083645-5040
	* I0310 19:53:24.307180    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210310191609-6496
	* I0310 19:53:24.307180    3088 command_runner.go:124] > kubernetesui/dashboard:v2.1.0
	* I0310 19:53:24.307180    3088 command_runner.go:124] > gcr.io/k8s-minikube/storage-provisioner:v4
	* I0310 19:53:24.307180    3088 command_runner.go:124] > k8s.gcr.io/etcd:3.4.13-0
	* I0310 19:53:24.307180    3088 command_runner.go:124] > k8s.gcr.io/coredns:1.7.0
	* I0310 19:53:24.307180    3088 command_runner.go:124] > kubernetesui/metrics-scraper:v1.0.4
	* I0310 19:53:24.307180    3088 command_runner.go:124] > k8s.gcr.io/pause:3.2
	* I0310 19:53:24.320670    3088 docker.go:423] Got preloaded images: -- stdout --
	* kindest/kindnetd:v20210220-5b7e6d01
	* k8s.gcr.io/kube-proxy:v1.20.2
	* k8s.gcr.io/kube-controller-manager:v1.20.2
	* k8s.gcr.io/kube-apiserver:v1.20.2
	* k8s.gcr.io/kube-scheduler:v1.20.2
	* minikube-local-cache-test:functional-20210105233232-2512
	* minikube-local-cache-test:functional-20210106002159-6856
	* minikube-local-cache-test:functional-20210106011107-6492
	* minikube-local-cache-test:functional-20210106215525-1984
	* minikube-local-cache-test:functional-20210107002220-9088
	* minikube-local-cache-test:functional-20210107190945-8748
	* minikube-local-cache-test:functional-20210112045103-7160
	* minikube-local-cache-test:functional-20210114204234-6692
	* minikube-local-cache-test:functional-20210115023213-8464
	* minikube-local-cache-test:functional-20210115191024-3516
	* minikube-local-cache-test:functional-20210119220838-6552
	* minikube-local-cache-test:functional-20210120022529-1140
	* minikube-local-cache-test:functional-20210120175851-7432
	* minikube-local-cache-test:functional-20210120214442-10992
	* minikube-local-cache-test:functional-20210120231122-7024
	* minikube-local-cache-test:functional-20210123004019-5372
	* minikube-local-cache-test:functional-20210126212539-5172
	* minikube-local-cache-test:functional-20210128021318-232
	* minikube-local-cache-test:functional-20210212145109-352
	* minikube-local-cache-test:functional-20210213143925-7440
	* minikube-local-cache-test:functional-20210219145454-9520
	* minikube-local-cache-test:functional-20210219220622-3920
	* minikube-local-cache-test:functional-20210220004129-7452
	* minikube-local-cache-test:functional-20210224014800-800
	* minikube-local-cache-test:functional-20210225231842-5736
	* minikube-local-cache-test:functional-20210301195830-5700
	* minikube-local-cache-test:functional-20210303214129-4588
	* minikube-local-cache-test:functional-20210304002630-1156
	* minikube-local-cache-test:functional-20210304184021-4052
	* minikube-local-cache-test:functional-20210306072141-12056
	* minikube-local-cache-test:functional-20210308233820-5396
	* minikube-local-cache-test:functional-20210309234032-4944
	* minikube-local-cache-test:functional-20210310083645-5040
	* minikube-local-cache-test:functional-20210310191609-6496
	* kubernetesui/dashboard:v2.1.0
	* gcr.io/k8s-minikube/storage-provisioner:v4
	* k8s.gcr.io/etcd:3.4.13-0
	* k8s.gcr.io/coredns:1.7.0
	* kubernetesui/metrics-scraper:v1.0.4
	* k8s.gcr.io/pause:3.2
	* 
	* -- /stdout --
	* I0310 19:53:24.321000    3088 cache_images.go:73] Images are preloaded, skipping loading
	* I0310 19:53:24.328851    3088 ssh_runner.go:149] Run: docker info --format 
	* I0310 19:53:24.592335    3088 command_runner.go:124] > cgroupfs
	* I0310 19:53:24.592709    3088 cni.go:74] Creating CNI manager for ""
	* I0310 19:53:24.592709    3088 cni.go:136] 2 nodes found, recommending kindnet
	* I0310 19:53:24.592709    3088 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	* I0310 19:53:24.592709    3088 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.97 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20210310194323-6496 NodeName:multinode-20210310194323-6496 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	* I0310 19:53:24.593481    3088 kubeadm.go:154] kubeadm config:
	* apiVersion: kubeadm.k8s.io/v1beta2
	* kind: InitConfiguration
	* localAPIEndpoint:
	*   advertiseAddress: 192.168.49.97
	*   bindPort: 8443
	* bootstrapTokens:
	*   - groups:
	*       - system:bootstrappers:kubeadm:default-node-token
	*     ttl: 24h0m0s
	*     usages:
	*       - signing
	*       - authentication
	* nodeRegistration:
	*   criSocket: /var/run/dockershim.sock
	*   name: "multinode-20210310194323-6496"
	*   kubeletExtraArgs:
	*     node-ip: 192.168.49.97
	*   taints: []
	* ---
	* apiVersion: kubeadm.k8s.io/v1beta2
	* kind: ClusterConfiguration
	* apiServer:
	*   certSANs: ["127.0.0.1", "localhost", "192.168.49.97"]
	*   extraArgs:
	*     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	* controllerManager:
	*   extraArgs:
	*     allocate-node-cidrs: "true"
	*     leader-elect: "false"
	* scheduler:
	*   extraArgs:
	*     leader-elect: "false"
	* certificatesDir: /var/lib/minikube/certs
	* clusterName: mk
	* controlPlaneEndpoint: control-plane.minikube.internal:8443
	* dns:
	*   type: CoreDNS
	* etcd:
	*   local:
	*     dataDir: /var/lib/minikube/etcd
	*     extraArgs:
	*       proxy-refresh-interval: "70000"
	* kubernetesVersion: v1.20.2
	* networking:
	*   dnsDomain: cluster.local
	*   podSubnet: "10.244.0.0/16"
	*   serviceSubnet: 10.96.0.0/12
	* ---
	* apiVersion: kubelet.config.k8s.io/v1beta1
	* kind: KubeletConfiguration
	* authentication:
	*   x509:
	*     clientCAFile: /var/lib/minikube/certs/ca.crt
	* cgroupDriver: cgroupfs
	* clusterDomain: "cluster.local"
	* # disable disk resource management by default
	* imageGCHighThresholdPercent: 100
	* evictionHard:
	*   nodefs.available: "0%"
	*   nodefs.inodesFree: "0%"
	*   imagefs.available: "0%"
	* failSwapOn: false
	* staticPodPath: /etc/kubernetes/manifests
	* ---
	* apiVersion: kubeproxy.config.k8s.io/v1alpha1
	* kind: KubeProxyConfiguration
	* clusterCIDR: "10.244.0.0/16"
	* metricsBindAddress: 0.0.0.0:10249
	* 
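The generated kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) joined by `---` separators. A quick sketch of sanity-checking such a stream by counting documents; the heredoc is an abbreviated stand-in, not the full file from the log:

```shell
# Abbreviated stand-in for the kubeadm.yaml stream above; count its documents.
KCFG="$(mktemp)"
cat > "$KCFG" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
# Four documents contribute four "kind:" lines, separated by three "---" lines.
grep -c '^kind:' "$KCFG"
```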
	* I0310 19:53:24.593481    3088 kubeadm.go:919] kubelet [Unit]
	* Wants=docker.socket
	* 
	* [Service]
	* ExecStart=
	* ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=multinode-20210310194323-6496 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.97
	* 
	* [Install]
	*  config:
	* {KubernetesVersion:v1.20.2 ClusterName:multinode-20210310194323-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	* I0310 19:53:24.607081    3088 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
	* I0310 19:53:24.635205    3088 command_runner.go:124] > kubeadm
	* I0310 19:53:24.635205    3088 command_runner.go:124] > kubectl
	* I0310 19:53:24.635205    3088 command_runner.go:124] > kubelet
	* I0310 19:53:24.635205    3088 binaries.go:44] Found k8s binaries, skipping transfer
	* I0310 19:53:24.644674    3088 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	* I0310 19:53:24.675833    3088 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	* I0310 19:53:24.719667    3088 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	* I0310 19:53:24.763789    3088 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1864 bytes)
	* I0310 19:53:24.828208    3088 ssh_runner.go:149] Run: grep 192.168.49.97	control-plane.minikube.internal$ /etc/hosts
	* I0310 19:53:24.845590    3088 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "192.168.49.97	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
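The Run line above is an idempotent hosts-entry update: filter out any stale line for the name with `grep -v`, append the fresh mapping, then copy the temp file back over /etc/hosts under sudo. A standalone sketch against a scratch file (no sudo; $HOSTS and the seed entries are hypothetical, and the pattern matches on the hostname alone to avoid relying on grep's `\t` handling):

```shell
# Idempotent hosts-entry update mirroring the Run line above; $HOSTS stands in for /etc/hosts.
HOSTS="$(mktemp)"
printf '127.0.0.1\tlocalhost\n192.168.49.1\tcontrol-plane.minikube.internal\n' > "$HOSTS"
# Drop any stale entry for the name, then append the fresh mapping.
{ grep -v 'control-plane\.minikube\.internal$' "$HOSTS"
  printf '192.168.49.97\tcontrol-plane.minikube.internal\n'; } > "$HOSTS.new"
cp "$HOSTS.new" "$HOSTS"
```

Running it again leaves the file unchanged, which is why minikube can re-apply it on every start.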
	* I0310 19:53:24.884784    3088 certs.go:52] Setting up C:\Users\jenkins\.minikube\profiles\multinode-20210310194323-6496 for IP: 192.168.49.97
	* I0310 19:53:24.885244    3088 certs.go:171] skipping minikubeCA CA generation: C:\Users\jenkins\.minikube\ca.key
	* I0310 19:53:24.885617    3088 certs.go:171] skipping proxyClientCA CA generation: C:\Users\jenkins\.minikube\proxy-client-ca.key
	* I0310 19:53:24.886458    3088 certs.go:275] skipping minikube-user signed cert generation: C:\Users\jenkins\.minikube\profiles\multinode-20210310194323-6496\client.key
	* I0310 19:53:24.886630    3088 certs.go:275] skipping minikube signed cert generation: C:\Users\jenkins\.minikube\profiles\multinode-20210310194323-6496\apiserver.key.b6188fac
	* I0310 19:53:24.887102    3088 certs.go:275] skipping aggregator signed cert generation: C:\Users\jenkins\.minikube\profiles\multinode-20210310194323-6496\proxy-client.key
	* I0310 19:53:24.887102    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\profiles\multinode-20210310194323-6496\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	* I0310 19:53:24.887323    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\profiles\multinode-20210310194323-6496\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	* I0310 19:53:24.887323    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\profiles\multinode-20210310194323-6496\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	* I0310 19:53:24.887622    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\profiles\multinode-20210310194323-6496\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	* I0310 19:53:24.887802    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	* I0310 19:53:24.887802    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	* I0310 19:53:24.888084    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	* I0310 19:53:24.888277    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	* I0310 19:53:24.889211    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992.pem (1338 bytes)
	* W0310 19:53:24.889683    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.889683    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140.pem (1338 bytes)
	* W0310 19:53:24.890215    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.890215    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156.pem (1338 bytes)
	* W0310 19:53:24.890673    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.890880    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056.pem (1338 bytes)
	* W0310 19:53:24.891590    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.891590    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476.pem (1338 bytes)
	* W0310 19:53:24.892010    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.892010    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728.pem (1338 bytes)
	* W0310 19:53:24.892320    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.892320    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984.pem (1338 bytes)
	* W0310 19:53:24.892900    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.893182    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232.pem (1338 bytes)
	* W0310 19:53:24.893438    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.893728    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512.pem (1338 bytes)
	* W0310 19:53:24.893972    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.894276    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056.pem (1338 bytes)
	* W0310 19:53:24.895205    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.895205    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516.pem (1338 bytes)
	* W0310 19:53:24.895564    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.895896    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352.pem (1338 bytes)
	* W0310 19:53:24.896210    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.896210    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920.pem (1338 bytes)
	* W0310 19:53:24.896749    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.896749    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052.pem (1338 bytes)
	* W0310 19:53:24.897306    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.897306    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452.pem (1338 bytes)
	* W0310 19:53:24.897810    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.897810    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588.pem (1338 bytes)
	* W0310 19:53:24.898278    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.898520    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944.pem (1338 bytes)
	* W0310 19:53:24.898759    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.898995    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040.pem (1338 bytes)
	* W0310 19:53:24.898995    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.899652    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172.pem (1338 bytes)
	* W0310 19:53:24.899955    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.899955    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372.pem (1338 bytes)
	* W0310 19:53:24.900592    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.900770    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396.pem (1338 bytes)
	* W0310 19:53:24.900770    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.900770    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700.pem (1338 bytes)
	* W0310 19:53:24.900770    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.901741    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736.pem (1338 bytes)
	* W0310 19:53:24.901741    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.901741    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368.pem (1338 bytes)
	* W0310 19:53:24.901741    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.901741    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492.pem (1338 bytes)
	* W0310 19:53:24.902854    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.902854    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496.pem (1338 bytes)
	* W0310 19:53:24.903476    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.903713    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552.pem (1338 bytes)
	* W0310 19:53:24.903713    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.903713    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692.pem (1338 bytes)
	* W0310 19:53:24.903713    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.904652    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856.pem (1338 bytes)
	* W0310 19:53:24.904652    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.904652    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024.pem (1338 bytes)
	* W0310 19:53:24.904652    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.905626    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160.pem (1338 bytes)
	* W0310 19:53:24.905626    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.905626    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432.pem (1338 bytes)
	* W0310 19:53:24.905626    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.906633    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440.pem (1338 bytes)
	* W0310 19:53:24.906633    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.906633    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452.pem (1338 bytes)
	* W0310 19:53:24.906633    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.907627    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800.pem (1338 bytes)
	* W0310 19:53:24.907627    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.907627    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464.pem (1338 bytes)
	* W0310 19:53:24.907627    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.907627    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748.pem (1338 bytes)
	* W0310 19:53:24.908632    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.908632    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088.pem (1338 bytes)
	* W0310 19:53:24.908632    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.909629    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520.pem (1338 bytes)
	* W0310 19:53:24.909629    3088 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520_empty.pem, impossibly tiny 0 bytes
	* I0310 19:53:24.909629    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca-key.pem (1679 bytes)
	* I0310 19:53:24.909629    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca.pem (1078 bytes)
	* I0310 19:53:24.910628    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\cert.pem (1123 bytes)
	* I0310 19:53:24.910628    3088 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\key.pem (1679 bytes)
	* I0310 19:53:24.910628    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\800.pem -> /usr/share/ca-certificates/800.pem
	* I0310 19:53:24.910628    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\1728.pem -> /usr/share/ca-certificates/1728.pem
	* I0310 19:53:24.910628    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\5396.pem -> /usr/share/ca-certificates/5396.pem
	* I0310 19:53:24.911636    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\5700.pem -> /usr/share/ca-certificates/5700.pem
	* I0310 19:53:24.911636    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\1156.pem -> /usr/share/ca-certificates/1156.pem
	* I0310 19:53:24.911636    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\1984.pem -> /usr/share/ca-certificates/1984.pem
	* I0310 19:53:24.911636    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\6496.pem -> /usr/share/ca-certificates/6496.pem
	* I0310 19:53:24.911636    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\3056.pem -> /usr/share/ca-certificates/3056.pem
	* I0310 19:53:24.912636    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\3920.pem -> /usr/share/ca-certificates/3920.pem
	* I0310 19:53:24.912636    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\7432.pem -> /usr/share/ca-certificates/7432.pem
	* I0310 19:53:24.912636    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\1476.pem -> /usr/share/ca-certificates/1476.pem
	* I0310 19:53:24.912636    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\4588.pem -> /usr/share/ca-certificates/4588.pem
	* I0310 19:53:24.912636    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\8748.pem -> /usr/share/ca-certificates/8748.pem
	* I0310 19:53:24.912636    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\7024.pem -> /usr/share/ca-certificates/7024.pem
	* I0310 19:53:24.913646    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\9520.pem -> /usr/share/ca-certificates/9520.pem
	* I0310 19:53:24.913646    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\232.pem -> /usr/share/ca-certificates/232.pem
	* I0310 19:53:24.913646    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\5736.pem -> /usr/share/ca-certificates/5736.pem
	* I0310 19:53:24.913646    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\6368.pem -> /usr/share/ca-certificates/6368.pem
	* I0310 19:53:24.914672    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\6492.pem -> /usr/share/ca-certificates/6492.pem
	* I0310 19:53:24.914672    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\4944.pem -> /usr/share/ca-certificates/4944.pem
	* I0310 19:53:24.914672    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\10992.pem -> /usr/share/ca-certificates/10992.pem
	* I0310 19:53:24.914672    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\5040.pem -> /usr/share/ca-certificates/5040.pem
	* I0310 19:53:24.914672    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\6692.pem -> /usr/share/ca-certificates/6692.pem
	* I0310 19:53:24.915628    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	* I0310 19:53:24.915628    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\6856.pem -> /usr/share/ca-certificates/6856.pem
	* I0310 19:53:24.915628    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\7452.pem -> /usr/share/ca-certificates/7452.pem
	* I0310 19:53:24.915628    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\9088.pem -> /usr/share/ca-certificates/9088.pem
	* I0310 19:53:24.915628    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\3516.pem -> /usr/share/ca-certificates/3516.pem
	* I0310 19:53:24.916660    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\5172.pem -> /usr/share/ca-certificates/5172.pem
	* I0310 19:53:24.916660    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\5372.pem -> /usr/share/ca-certificates/5372.pem
	* I0310 19:53:24.916660    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\4452.pem -> /usr/share/ca-certificates/4452.pem
	* I0310 19:53:24.916660    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\7160.pem -> /usr/share/ca-certificates/7160.pem
	* I0310 19:53:24.916660    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\1140.pem -> /usr/share/ca-certificates/1140.pem
	* I0310 19:53:24.917627    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\6552.pem -> /usr/share/ca-certificates/6552.pem
	* I0310 19:53:24.917627    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\2512.pem -> /usr/share/ca-certificates/2512.pem
	* I0310 19:53:24.917627    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\4052.pem -> /usr/share/ca-certificates/4052.pem
	* I0310 19:53:24.917627    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\8464.pem -> /usr/share/ca-certificates/8464.pem
	* I0310 19:53:24.917627    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\12056.pem -> /usr/share/ca-certificates/12056.pem
	* I0310 19:53:24.918675    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\352.pem -> /usr/share/ca-certificates/352.pem
	* I0310 19:53:24.918675    3088 vm_assets.go:96] NewFileAsset: C:\Users\jenkins\.minikube\certs\7440.pem -> /usr/share/ca-certificates/7440.pem
	* I0310 19:53:24.920679    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\multinode-20210310194323-6496\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	* I0310 19:53:24.976252    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\multinode-20210310194323-6496\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	* I0310 19:53:25.030714    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\multinode-20210310194323-6496\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	* I0310 19:53:25.090889    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\multinode-20210310194323-6496\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	* I0310 19:53:25.143662    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	* I0310 19:53:25.195719    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	* I0310 19:53:25.249996    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	* I0310 19:53:25.306139    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	* I0310 19:53:25.366256    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\800.pem --> /usr/share/ca-certificates/800.pem (1338 bytes)
	* I0310 19:53:25.417091    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1728.pem --> /usr/share/ca-certificates/1728.pem (1338 bytes)
	* I0310 19:53:25.472798    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5396.pem --> /usr/share/ca-certificates/5396.pem (1338 bytes)
	* I0310 19:53:25.527037    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5700.pem --> /usr/share/ca-certificates/5700.pem (1338 bytes)
	* I0310 19:53:25.586023    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1156.pem --> /usr/share/ca-certificates/1156.pem (1338 bytes)
	* I0310 19:53:25.642497    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1984.pem --> /usr/share/ca-certificates/1984.pem (1338 bytes)
	* I0310 19:53:25.700669    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6496.pem --> /usr/share/ca-certificates/6496.pem (1338 bytes)
	* I0310 19:53:25.756239    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3056.pem --> /usr/share/ca-certificates/3056.pem (1338 bytes)
	* I0310 19:53:25.812767    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3920.pem --> /usr/share/ca-certificates/3920.pem (1338 bytes)
	* I0310 19:53:25.866100    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7432.pem --> /usr/share/ca-certificates/7432.pem (1338 bytes)
	* I0310 19:53:25.927018    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1476.pem --> /usr/share/ca-certificates/1476.pem (1338 bytes)
	* I0310 19:53:25.987340    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4588.pem --> /usr/share/ca-certificates/4588.pem (1338 bytes)
	* I0310 19:53:26.044823    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8748.pem --> /usr/share/ca-certificates/8748.pem (1338 bytes)
	* I0310 19:53:26.103079    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7024.pem --> /usr/share/ca-certificates/7024.pem (1338 bytes)
	* I0310 19:53:26.155668    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9520.pem --> /usr/share/ca-certificates/9520.pem (1338 bytes)
	* I0310 19:53:26.205110    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\232.pem --> /usr/share/ca-certificates/232.pem (1338 bytes)
	* I0310 19:53:26.259414    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5736.pem --> /usr/share/ca-certificates/5736.pem (1338 bytes)
	* I0310 19:53:26.315043    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6368.pem --> /usr/share/ca-certificates/6368.pem (1338 bytes)
	* I0310 19:53:26.371061    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6492.pem --> /usr/share/ca-certificates/6492.pem (1338 bytes)
	* I0310 19:53:26.427542    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4944.pem --> /usr/share/ca-certificates/4944.pem (1338 bytes)
	* I0310 19:53:26.483918    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\10992.pem --> /usr/share/ca-certificates/10992.pem (1338 bytes)
	* I0310 19:53:26.537341    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5040.pem --> /usr/share/ca-certificates/5040.pem (1338 bytes)
	* I0310 19:53:26.592746    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6692.pem --> /usr/share/ca-certificates/6692.pem (1338 bytes)
	* I0310 19:53:26.649226    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	* I0310 19:53:26.709111    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6856.pem --> /usr/share/ca-certificates/6856.pem (1338 bytes)
	* I0310 19:53:26.764856    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	* I0310 19:53:26.816753    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9088.pem --> /usr/share/ca-certificates/9088.pem (1338 bytes)
	* I0310 19:53:26.870001    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3516.pem --> /usr/share/ca-certificates/3516.pem (1338 bytes)
	* I0310 19:53:26.926996    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5172.pem --> /usr/share/ca-certificates/5172.pem (1338 bytes)
	* I0310 19:53:26.985545    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5372.pem --> /usr/share/ca-certificates/5372.pem (1338 bytes)
	* I0310 19:53:27.042516    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4452.pem --> /usr/share/ca-certificates/4452.pem (1338 bytes)
	* I0310 19:53:27.108500    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7160.pem --> /usr/share/ca-certificates/7160.pem (1338 bytes)
	* I0310 19:53:27.172842    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1140.pem --> /usr/share/ca-certificates/1140.pem (1338 bytes)
	* I0310 19:53:27.232563    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6552.pem --> /usr/share/ca-certificates/6552.pem (1338 bytes)
	* I0310 19:53:27.290692    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\2512.pem --> /usr/share/ca-certificates/2512.pem (1338 bytes)
	* I0310 19:53:27.351846    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4052.pem --> /usr/share/ca-certificates/4052.pem (1338 bytes)
	* I0310 19:53:27.413092    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8464.pem --> /usr/share/ca-certificates/8464.pem (1338 bytes)
	* I0310 19:53:27.474329    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\12056.pem --> /usr/share/ca-certificates/12056.pem (1338 bytes)
	* I0310 19:53:27.534146    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\352.pem --> /usr/share/ca-certificates/352.pem (1338 bytes)
	* I0310 19:53:27.592076    3088 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7440.pem --> /usr/share/ca-certificates/7440.pem (1338 bytes)
	* I0310 19:53:27.647978    3088 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	* I0310 19:53:27.702687    3088 ssh_runner.go:149] Run: openssl version
	* I0310 19:53:27.724265    3088 command_runner.go:124] > OpenSSL 1.1.1f  31 Mar 2020
	* I0310 19:53:27.737037    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2512.pem && ln -fs /usr/share/ca-certificates/2512.pem /etc/ssl/certs/2512.pem"
	* I0310 19:53:27.775300    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2512.pem
	* I0310 19:53:27.792722    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan  5 23:32 /usr/share/ca-certificates/2512.pem
	* I0310 19:53:27.793198    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:32 /usr/share/ca-certificates/2512.pem
	* I0310 19:53:27.803046    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2512.pem
	* I0310 19:53:27.826379    3088 command_runner.go:124] > 51391683
	* I0310 19:53:27.839006    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2512.pem /etc/ssl/certs/51391683.0"
	* I0310 19:53:27.879137    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7160.pem && ln -fs /usr/share/ca-certificates/7160.pem /etc/ssl/certs/7160.pem"
	* I0310 19:53:27.920299    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7160.pem
	* I0310 19:53:27.938016    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 12 04:51 /usr/share/ca-certificates/7160.pem
	* I0310 19:53:27.939294    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 12 04:51 /usr/share/ca-certificates/7160.pem
	* I0310 19:53:27.954163    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7160.pem
	* I0310 19:53:27.978079    3088 command_runner.go:124] > 51391683
	* I0310 19:53:27.995374    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7160.pem /etc/ssl/certs/51391683.0"
	* I0310 19:53:28.038398    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1140.pem && ln -fs /usr/share/ca-certificates/1140.pem /etc/ssl/certs/1140.pem"
	* I0310 19:53:28.080310    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1140.pem
	* I0310 19:53:28.100265    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 20 02:25 /usr/share/ca-certificates/1140.pem
	* I0310 19:53:28.100745    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 02:25 /usr/share/ca-certificates/1140.pem
	* I0310 19:53:28.110958    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1140.pem
	* I0310 19:53:28.133399    3088 command_runner.go:124] > 51391683
	* I0310 19:53:28.145214    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1140.pem /etc/ssl/certs/51391683.0"
	* I0310 19:53:28.188647    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6552.pem && ln -fs /usr/share/ca-certificates/6552.pem /etc/ssl/certs/6552.pem"
	* I0310 19:53:28.230413    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6552.pem
	* I0310 19:53:28.249994    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 19 22:08 /usr/share/ca-certificates/6552.pem
	* I0310 19:53:28.250331    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 19 22:08 /usr/share/ca-certificates/6552.pem
	* I0310 19:53:28.265754    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6552.pem
	* I0310 19:53:28.285975    3088 command_runner.go:124] > 51391683
	* I0310 19:53:28.299397    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6552.pem /etc/ssl/certs/51391683.0"
	* I0310 19:53:28.337293    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7440.pem && ln -fs /usr/share/ca-certificates/7440.pem /etc/ssl/certs/7440.pem"
	* I0310 19:53:28.377081    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7440.pem
	* I0310 19:53:28.394515    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Feb 13 14:39 /usr/share/ca-certificates/7440.pem
	* I0310 19:53:28.395460    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 13 14:39 /usr/share/ca-certificates/7440.pem
	* I0310 19:53:28.405024    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7440.pem
	* I0310 19:53:28.429848    3088 command_runner.go:124] > 51391683
	* I0310 19:53:28.448803    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7440.pem /etc/ssl/certs/51391683.0"
	* I0310 19:53:28.492300    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4052.pem && ln -fs /usr/share/ca-certificates/4052.pem /etc/ssl/certs/4052.pem"
	* I0310 19:53:28.533891    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4052.pem
	* I0310 19:53:28.551985    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Mar  4 18:40 /usr/share/ca-certificates/4052.pem
	* I0310 19:53:28.552260    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 18:40 /usr/share/ca-certificates/4052.pem
	* I0310 19:53:28.563528    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4052.pem
	* I0310 19:53:28.587322    3088 command_runner.go:124] > 51391683
	* I0310 19:53:28.599691    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4052.pem /etc/ssl/certs/51391683.0"
	* I0310 19:53:28.642472    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8464.pem && ln -fs /usr/share/ca-certificates/8464.pem /etc/ssl/certs/8464.pem"
	* I0310 19:53:28.681519    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8464.pem
	* I0310 19:53:28.701854    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 15 02:32 /usr/share/ca-certificates/8464.pem
	* I0310 19:53:28.701954    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:32 /usr/share/ca-certificates/8464.pem
	* I0310 19:53:28.716624    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8464.pem
	* I0310 19:53:28.736743    3088 command_runner.go:124] > 51391683
	* I0310 19:53:28.752646    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8464.pem /etc/ssl/certs/51391683.0"
	* I0310 19:53:28.792525    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12056.pem && ln -fs /usr/share/ca-certificates/12056.pem /etc/ssl/certs/12056.pem"
	* I0310 19:53:28.832737    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/12056.pem
	* I0310 19:53:28.849614    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Mar  6 07:21 /usr/share/ca-certificates/12056.pem
	* I0310 19:53:28.850243    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  6 07:21 /usr/share/ca-certificates/12056.pem
	* I0310 19:53:28.859938    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12056.pem
	* I0310 19:53:28.879953    3088 command_runner.go:124] > 51391683
	* I0310 19:53:28.891584    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12056.pem /etc/ssl/certs/51391683.0"
	* I0310 19:53:28.931186    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/352.pem && ln -fs /usr/share/ca-certificates/352.pem /etc/ssl/certs/352.pem"
	* I0310 19:53:28.970613    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/352.pem
	* I0310 19:53:28.988486    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Feb 12 14:51 /usr/share/ca-certificates/352.pem
	* I0310 19:53:28.988486    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 12 14:51 /usr/share/ca-certificates/352.pem
	* I0310 19:53:28.999135    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/352.pem
	* I0310 19:53:29.021219    3088 command_runner.go:124] > 51391683
	* I0310 19:53:29.034127    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/352.pem /etc/ssl/certs/51391683.0"
	* I0310 19:53:29.073296    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6496.pem && ln -fs /usr/share/ca-certificates/6496.pem /etc/ssl/certs/6496.pem"
	* I0310 19:53:29.117489    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6496.pem
	* I0310 19:53:29.134913    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Mar 10 19:16 /usr/share/ca-certificates/6496.pem
	* I0310 19:53:29.134913    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 19:16 /usr/share/ca-certificates/6496.pem
	* I0310 19:53:29.146465    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6496.pem
	* I0310 19:53:29.166753    3088 command_runner.go:124] > 51391683
	* I0310 19:53:29.182352    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6496.pem /etc/ssl/certs/51391683.0"
	* I0310 19:53:29.221550    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/800.pem && ln -fs /usr/share/ca-certificates/800.pem /etc/ssl/certs/800.pem"
	* I0310 19:53:29.262496    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/800.pem
	* I0310 19:53:29.283101    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Feb 24 01:48 /usr/share/ca-certificates/800.pem
	* I0310 19:53:29.283253    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 24 01:48 /usr/share/ca-certificates/800.pem
	* I0310 19:53:29.297286    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/800.pem
	* I0310 19:53:29.321290    3088 command_runner.go:124] > 51391683
	* I0310 19:53:29.332060    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/800.pem /etc/ssl/certs/51391683.0"
	* I0310 19:53:29.372815    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1728.pem && ln -fs /usr/share/ca-certificates/1728.pem /etc/ssl/certs/1728.pem"
	* I0310 19:53:29.409729    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1728.pem
	* I0310 19:53:29.426375    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan  6 20:57 /usr/share/ca-certificates/1728.pem
	* I0310 19:53:29.426561    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 20:57 /usr/share/ca-certificates/1728.pem
	* I0310 19:53:29.440369    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1728.pem
	* I0310 19:53:29.463034    3088 command_runner.go:124] > 51391683
	* I0310 19:53:29.477954    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0"
	* I0310 19:53:29.521995    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5396.pem && ln -fs /usr/share/ca-certificates/5396.pem /etc/ssl/certs/5396.pem"
	* I0310 19:53:29.563528    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5396.pem
	* I0310 19:53:29.581550    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Mar  8 23:38 /usr/share/ca-certificates/5396.pem
	* I0310 19:53:29.581981    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  8 23:38 /usr/share/ca-certificates/5396.pem
	* I0310 19:53:29.593345    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5396.pem
	* I0310 19:53:29.614782    3088 command_runner.go:124] > 51391683
	* I0310 19:53:29.626663    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5396.pem /etc/ssl/certs/51391683.0"
	* I0310 19:53:29.672964    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5700.pem && ln -fs /usr/share/ca-certificates/5700.pem /etc/ssl/certs/5700.pem"
	* I0310 19:53:29.712169    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5700.pem
	* I0310 19:53:29.729107    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Mar  1 19:58 /usr/share/ca-certificates/5700.pem
	* I0310 19:53:29.729413    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  1 19:58 /usr/share/ca-certificates/5700.pem
	* I0310 19:53:29.742828    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5700.pem
	* I0310 19:53:29.764390    3088 command_runner.go:124] > 51391683
	* I0310 19:53:29.775816    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5700.pem /etc/ssl/certs/51391683.0"
	* I0310 19:53:29.818256    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1156.pem && ln -fs /usr/share/ca-certificates/1156.pem /etc/ssl/certs/1156.pem"
	* I0310 19:53:29.860546    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1156.pem
	* I0310 19:53:29.878685    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Mar  4 00:26 /usr/share/ca-certificates/1156.pem
	* I0310 19:53:29.879128    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 00:26 /usr/share/ca-certificates/1156.pem
	* I0310 19:53:29.900015    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1156.pem
	* I0310 19:53:29.921583    3088 command_runner.go:124] > 51391683
	* I0310 19:53:29.935021    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1156.pem /etc/ssl/certs/51391683.0"
	* I0310 19:53:29.977981    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1984.pem && ln -fs /usr/share/ca-certificates/1984.pem /etc/ssl/certs/1984.pem"
	* I0310 19:53:30.022289    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1984.pem
	* I0310 19:53:30.039893    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan  6 21:55 /usr/share/ca-certificates/1984.pem
	* I0310 19:53:30.039893    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 21:55 /usr/share/ca-certificates/1984.pem
	* I0310 19:53:30.051911    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1984.pem
	* I0310 19:53:30.078704    3088 command_runner.go:124] > 51391683
	* I0310 19:53:30.090715    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1984.pem /etc/ssl/certs/51391683.0"
	* I0310 19:53:30.132886    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8748.pem && ln -fs /usr/share/ca-certificates/8748.pem /etc/ssl/certs/8748.pem"
	* I0310 19:53:30.171601    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8748.pem
	* I0310 19:53:30.192359    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan  7 19:09 /usr/share/ca-certificates/8748.pem
	* I0310 19:53:30.192558    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 19:09 /usr/share/ca-certificates/8748.pem
	* I0310 19:53:30.203565    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8748.pem
	* I0310 19:53:30.225881    3088 command_runner.go:124] > 51391683
	* I0310 19:53:30.237966    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8748.pem /etc/ssl/certs/51391683.0"
	* I0310 19:53:30.278673    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3056.pem && ln -fs /usr/share/ca-certificates/3056.pem /etc/ssl/certs/3056.pem"
	* I0310 19:53:30.330328    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3056.pem
	* I0310 19:53:30.349395    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan  5 23:11 /usr/share/ca-certificates/3056.pem
	* I0310 19:53:30.349597    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:11 /usr/share/ca-certificates/3056.pem
	* I0310 19:53:30.361108    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3056.pem
	* I0310 19:53:30.386701    3088 command_runner.go:124] > 51391683
	* I0310 19:53:30.400909    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3056.pem /etc/ssl/certs/51391683.0"
	* I0310 19:53:30.436123    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3920.pem && ln -fs /usr/share/ca-certificates/3920.pem /etc/ssl/certs/3920.pem"
	* I0310 19:53:30.481181    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3920.pem
	* I0310 19:53:30.500027    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Feb 19 22:06 /usr/share/ca-certificates/3920.pem
	* I0310 19:53:30.500027    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 22:06 /usr/share/ca-certificates/3920.pem
	* I0310 19:53:30.512282    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3920.pem
	* I0310 19:53:30.537141    3088 command_runner.go:124] > 51391683
	* I0310 19:53:30.550185    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3920.pem /etc/ssl/certs/51391683.0"
	* I0310 19:53:30.590843    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7432.pem && ln -fs /usr/share/ca-certificates/7432.pem /etc/ssl/certs/7432.pem"
	* I0310 19:53:30.632637    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7432.pem
	* I0310 19:53:30.652767    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 20 17:58 /usr/share/ca-certificates/7432.pem
	* I0310 19:53:30.652986    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 17:58 /usr/share/ca-certificates/7432.pem
	* I0310 19:53:30.664846    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7432.pem
	* I0310 19:53:30.685945    3088 command_runner.go:124] > 51391683
	* I0310 19:53:30.700110    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7432.pem /etc/ssl/certs/51391683.0"
	* I0310 19:53:30.739099    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1476.pem && ln -fs /usr/share/ca-certificates/1476.pem /etc/ssl/certs/1476.pem"
	* I0310 19:53:30.776911    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1476.pem
	* I0310 19:53:30.794710    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan  5 23:50 /usr/share/ca-certificates/1476.pem
	* I0310 19:53:30.798295    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:50 /usr/share/ca-certificates/1476.pem
	* I0310 19:53:30.808193    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1476.pem
	* I0310 19:53:30.832099    3088 command_runner.go:124] > 51391683
	* I0310 19:53:30.842603    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1476.pem /etc/ssl/certs/51391683.0"
	* I0310 19:53:30.884503    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4588.pem && ln -fs /usr/share/ca-certificates/4588.pem /etc/ssl/certs/4588.pem"
	* I0310 19:53:30.929077    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4588.pem
	* I0310 19:53:30.947901    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Mar  3 21:41 /usr/share/ca-certificates/4588.pem
	* I0310 19:53:30.948806    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  3 21:41 /usr/share/ca-certificates/4588.pem
	* I0310 19:53:30.959057    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4588.pem
	* I0310 19:53:30.980853    3088 command_runner.go:124] > 51391683
	* I0310 19:53:30.990965    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4588.pem /etc/ssl/certs/51391683.0"
	* I0310 19:53:31.039826    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5736.pem && ln -fs /usr/share/ca-certificates/5736.pem /etc/ssl/certs/5736.pem"
	* I0310 19:53:31.083153    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5736.pem
	* I0310 19:53:31.098633    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Feb 25 23:18 /usr/share/ca-certificates/5736.pem
	* I0310 19:53:31.098633    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 25 23:18 /usr/share/ca-certificates/5736.pem
	* I0310 19:53:31.116102    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5736.pem
	* I0310 19:53:31.138147    3088 command_runner.go:124] > 51391683
	* I0310 19:53:31.148057    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5736.pem /etc/ssl/certs/51391683.0"
	* I0310 19:53:31.189149    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7024.pem && ln -fs /usr/share/ca-certificates/7024.pem /etc/ssl/certs/7024.pem"
	* I0310 19:53:31.230444    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7024.pem
	* I0310 19:53:31.245000    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 20 23:11 /usr/share/ca-certificates/7024.pem
	* I0310 19:53:31.245147    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 23:11 /usr/share/ca-certificates/7024.pem
	* I0310 19:53:31.255624    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7024.pem
	* I0310 19:53:31.277206    3088 command_runner.go:124] > 51391683
	* I0310 19:53:31.291386    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7024.pem /etc/ssl/certs/51391683.0"
	* I0310 19:53:31.332692    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9520.pem && ln -fs /usr/share/ca-certificates/9520.pem /etc/ssl/certs/9520.pem"
	* I0310 19:53:31.380496    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9520.pem
	* I0310 19:53:31.397597    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Feb 19 14:54 /usr/share/ca-certificates/9520.pem
	* I0310 19:53:31.398593    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 14:54 /usr/share/ca-certificates/9520.pem
	* I0310 19:53:31.418085    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9520.pem
	* I0310 19:53:31.440027    3088 command_runner.go:124] > 51391683
	* I0310 19:53:31.450352    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9520.pem /etc/ssl/certs/51391683.0"
	* I0310 19:53:31.490940    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/232.pem && ln -fs /usr/share/ca-certificates/232.pem /etc/ssl/certs/232.pem"
	* I0310 19:53:31.528872    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/232.pem
	* I0310 19:53:31.547184    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 28 02:13 /usr/share/ca-certificates/232.pem
	* I0310 19:53:31.547184    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 28 02:13 /usr/share/ca-certificates/232.pem
	* I0310 19:53:31.564382    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/232.pem
	* I0310 19:53:31.588650    3088 command_runner.go:124] > 51391683
	* I0310 19:53:31.598825    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/232.pem /etc/ssl/certs/51391683.0"
	* I0310 19:53:31.648424    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	* I0310 19:53:31.688369    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	* I0310 19:53:31.705500    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1111 Jan  5 23:33 /usr/share/ca-certificates/minikubeCA.pem
	* I0310 19:53:31.706605    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Jan  5 23:33 /usr/share/ca-certificates/minikubeCA.pem
	* I0310 19:53:31.716496    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	* I0310 19:53:31.741813    3088 command_runner.go:124] > b5213941
	* I0310 19:53:31.762700    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	* I0310 19:53:31.800019    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6856.pem && ln -fs /usr/share/ca-certificates/6856.pem /etc/ssl/certs/6856.pem"
	* I0310 19:53:31.840104    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6856.pem
	* I0310 19:53:31.859057    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan  6 00:21 /usr/share/ca-certificates/6856.pem
	* I0310 19:53:31.859380    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 00:21 /usr/share/ca-certificates/6856.pem
	* I0310 19:53:31.871926    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6856.pem
	* I0310 19:53:31.896319    3088 command_runner.go:124] > 51391683
	* I0310 19:53:31.910897    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6856.pem /etc/ssl/certs/51391683.0"
	* I0310 19:53:31.954231    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6368.pem && ln -fs /usr/share/ca-certificates/6368.pem /etc/ssl/certs/6368.pem"
	* I0310 19:53:31.995551    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6368.pem
	* I0310 19:53:32.016526    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 14 19:11 /usr/share/ca-certificates/6368.pem
	* I0310 19:53:32.016735    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 19:11 /usr/share/ca-certificates/6368.pem
	* I0310 19:53:32.029057    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6368.pem
	* I0310 19:53:32.054770    3088 command_runner.go:124] > 51391683
	* I0310 19:53:32.067551    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6368.pem /etc/ssl/certs/51391683.0"
	* I0310 19:53:32.108098    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6492.pem && ln -fs /usr/share/ca-certificates/6492.pem /etc/ssl/certs/6492.pem"
	* I0310 19:53:32.148537    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6492.pem
	* I0310 19:53:32.166565    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan  6 01:11 /usr/share/ca-certificates/6492.pem
	* I0310 19:53:32.166783    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 01:11 /usr/share/ca-certificates/6492.pem
	* I0310 19:53:32.180000    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6492.pem
	* I0310 19:53:32.202821    3088 command_runner.go:124] > 51391683
	* I0310 19:53:32.216716    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6492.pem /etc/ssl/certs/51391683.0"
	* I0310 19:53:32.257283    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4944.pem && ln -fs /usr/share/ca-certificates/4944.pem /etc/ssl/certs/4944.pem"
	* I0310 19:53:32.309864    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4944.pem
	* I0310 19:53:32.328224    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Mar  9 23:40 /usr/share/ca-certificates/4944.pem
	* I0310 19:53:32.328673    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  9 23:40 /usr/share/ca-certificates/4944.pem
	* I0310 19:53:32.340517    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4944.pem
	* I0310 19:53:32.362812    3088 command_runner.go:124] > 51391683
	* I0310 19:53:32.380796    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4944.pem /etc/ssl/certs/51391683.0"
	* I0310 19:53:32.429333    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10992.pem && ln -fs /usr/share/ca-certificates/10992.pem /etc/ssl/certs/10992.pem"
	* I0310 19:53:32.471202    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/10992.pem
	* I0310 19:53:32.491168    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 20 21:44 /usr/share/ca-certificates/10992.pem
	* I0310 19:53:32.491393    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 21:44 /usr/share/ca-certificates/10992.pem
	* I0310 19:53:32.502899    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10992.pem
	* I0310 19:53:32.525993    3088 command_runner.go:124] > 51391683
	* I0310 19:53:32.538056    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10992.pem /etc/ssl/certs/51391683.0"
	* I0310 19:53:32.583216    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5040.pem && ln -fs /usr/share/ca-certificates/5040.pem /etc/ssl/certs/5040.pem"
	* I0310 19:53:32.626873    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5040.pem
	* I0310 19:53:32.647539    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Mar 10 08:36 /usr/share/ca-certificates/5040.pem
	* I0310 19:53:32.647694    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 08:36 /usr/share/ca-certificates/5040.pem
	* I0310 19:53:32.659324    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5040.pem
	* I0310 19:53:32.685902    3088 command_runner.go:124] > 51391683
	* I0310 19:53:32.697690    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5040.pem /etc/ssl/certs/51391683.0"
	* I0310 19:53:32.738938    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6692.pem && ln -fs /usr/share/ca-certificates/6692.pem /etc/ssl/certs/6692.pem"
	* I0310 19:53:32.782895    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6692.pem
	* I0310 19:53:32.804782    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 14 20:42 /usr/share/ca-certificates/6692.pem
	* I0310 19:53:32.804782    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 20:42 /usr/share/ca-certificates/6692.pem
	* I0310 19:53:32.822137    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6692.pem
	* I0310 19:53:32.843286    3088 command_runner.go:124] > 51391683
	* I0310 19:53:32.864680    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6692.pem /etc/ssl/certs/51391683.0"
	* I0310 19:53:32.909814    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7452.pem && ln -fs /usr/share/ca-certificates/7452.pem /etc/ssl/certs/7452.pem"
	* I0310 19:53:32.952428    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7452.pem
	* I0310 19:53:32.973526    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Feb 20 00:41 /usr/share/ca-certificates/7452.pem
	* I0310 19:53:32.976658    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 20 00:41 /usr/share/ca-certificates/7452.pem
	* I0310 19:53:32.989054    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7452.pem
	* I0310 19:53:33.013611    3088 command_runner.go:124] > 51391683
	* I0310 19:53:33.029170    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7452.pem /etc/ssl/certs/51391683.0"
	* I0310 19:53:33.072306    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5372.pem && ln -fs /usr/share/ca-certificates/5372.pem /etc/ssl/certs/5372.pem"
	* I0310 19:53:33.117752    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5372.pem
	* I0310 19:53:33.135066    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 23 00:40 /usr/share/ca-certificates/5372.pem
	* I0310 19:53:33.135234    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 23 00:40 /usr/share/ca-certificates/5372.pem
	* I0310 19:53:33.147118    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5372.pem
	* I0310 19:53:33.166299    3088 command_runner.go:124] > 51391683
	* I0310 19:53:33.180107    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5372.pem /etc/ssl/certs/51391683.0"
	* I0310 19:53:33.221510    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9088.pem && ln -fs /usr/share/ca-certificates/9088.pem /etc/ssl/certs/9088.pem"
	* I0310 19:53:33.258889    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9088.pem
	* I0310 19:53:33.277265    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan  7 00:22 /usr/share/ca-certificates/9088.pem
	* I0310 19:53:33.278091    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 00:22 /usr/share/ca-certificates/9088.pem
	* I0310 19:53:33.288024    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9088.pem
	* I0310 19:53:33.307036    3088 command_runner.go:124] > 51391683
	* I0310 19:53:33.318675    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9088.pem /etc/ssl/certs/51391683.0"
	* I0310 19:53:33.359655    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3516.pem && ln -fs /usr/share/ca-certificates/3516.pem /etc/ssl/certs/3516.pem"
	* I0310 19:53:33.401496    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3516.pem
	* I0310 19:53:33.419050    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 15 19:10 /usr/share/ca-certificates/3516.pem
	* I0310 19:53:33.419485    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 19:10 /usr/share/ca-certificates/3516.pem
	* I0310 19:53:33.429816    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3516.pem
	* I0310 19:53:33.454397    3088 command_runner.go:124] > 51391683
	* I0310 19:53:33.465565    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3516.pem /etc/ssl/certs/51391683.0"
	* I0310 19:53:33.500264    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5172.pem && ln -fs /usr/share/ca-certificates/5172.pem /etc/ssl/certs/5172.pem"
	* I0310 19:53:33.550915    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5172.pem
	* I0310 19:53:33.567718    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 26 21:25 /usr/share/ca-certificates/5172.pem
	* I0310 19:53:33.567718    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 26 21:25 /usr/share/ca-certificates/5172.pem
	* I0310 19:53:33.580121    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5172.pem
	* I0310 19:53:33.604521    3088 command_runner.go:124] > 51391683
	* I0310 19:53:33.615589    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5172.pem /etc/ssl/certs/51391683.0"
	* I0310 19:53:33.654330    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4452.pem && ln -fs /usr/share/ca-certificates/4452.pem /etc/ssl/certs/4452.pem"
	* I0310 19:53:33.692439    3088 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4452.pem
	* I0310 19:53:33.713393    3088 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Jan 22 23:48 /usr/share/ca-certificates/4452.pem
	* I0310 19:53:33.714279    3088 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 22 23:48 /usr/share/ca-certificates/4452.pem
	* I0310 19:53:33.727104    3088 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4452.pem
	* I0310 19:53:33.747826    3088 command_runner.go:124] > 51391683
	* I0310 19:53:33.759389    3088 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4452.pem /etc/ssl/certs/51391683.0"
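The long run of `ssh_runner` lines above is minikube installing each cached CA certificate into the node's OpenSSL trust directory: compute the cert's subject hash, then symlink `/etc/ssl/certs/<hash>.0` at it unless a link is already present. A minimal sketch of that per-certificate step (the `install_ca_cert` helper name and the writable target directory are illustrative; minikube drives these commands over SSH from Go code):

```shell
# Sketch of the per-certificate install step seen in the log above.
# install_ca_cert is an illustrative helper, not minikube's actual code.
install_ca_cert() {
  cert="$1"       # e.g. /usr/share/ca-certificates/12056.pem
  certs_dir="$2"  # normally /etc/ssl/certs; parameterized here for testing
  # OpenSSL locates CAs by subject hash, so compute it first ...
  hash=$(openssl x509 -hash -noout -in "$cert")
  # ... then ensure <hash>.0 points at the cert, keeping any existing link.
  test -L "$certs_dir/$hash.0" || ln -fs "$cert" "$certs_dir/$hash.0"
}
```

Note that all the `1338`-byte certs above hash to `51391683`, so each iteration after the first finds the symlink already in place and the `test -L` short-circuits the `ln`.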
	* I0310 19:53:33.786157    3088 kubeadm.go:385] StartCluster: {Name:multinode-20210310194323-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:multinode-20210310194323-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.97 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.20.2 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true}
	* I0310 19:53:33.795461    3088 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format=
	* I0310 19:53:33.916648    3088 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	* I0310 19:53:33.943830    3088 command_runner.go:124] > /var/lib/kubelet/config.yaml
	* I0310 19:53:33.944467    3088 command_runner.go:124] > /var/lib/kubelet/kubeadm-flags.env
	* I0310 19:53:33.944467    3088 command_runner.go:124] > /var/lib/minikube/etcd:
	* I0310 19:53:33.944467    3088 command_runner.go:124] > member
	* I0310 19:53:33.945801    3088 kubeadm.go:396] found existing configuration files, will attempt cluster restart
	* I0310 19:53:33.945801    3088 kubeadm.go:594] restartCluster start
	* I0310 19:53:33.956289    3088 ssh_runner.go:149] Run: sudo test -d /data/minikube
	* I0310 19:53:33.981176    3088 kubeadm.go:125] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	* stdout:
	* 
	* stderr:
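The `sudo ls` over the kubelet and etcd paths a few lines up is how minikube decides between a fresh `kubeadm init` and a cluster restart: when all three paths survive, it logs "found existing configuration files" and takes the restart path. A hedged sketch of that check (the `has_existing_cluster` name and the base-directory parameter are invented here so the probe can run without root):

```shell
# Restart-vs-fresh-start check, modeled on the `sudo ls` in the log.
# `base` is an illustrative parameter standing in for the node's root fs.
has_existing_cluster() {
  base="${1:-}"
  ls "$base/var/lib/kubelet/kubeadm-flags.env" \
     "$base/var/lib/kubelet/config.yaml" \
     "$base/var/lib/minikube/etcd" >/dev/null 2>&1
}
```

`ls` exits non-zero if any of the three paths is missing, so a single call covers all of them, mirroring the one-shot command in the log.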
	* I0310 19:53:33.996052    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	* I0310 19:53:34.494307    3088 kubeconfig.go:117] verify returned: extract IP: "multinode-20210310194323-6496" does not appear in C:\Users\jenkins/.kube/config
	* I0310 19:53:34.494999    3088 kubeconfig.go:128] "multinode-20210310194323-6496" context is missing from C:\Users\jenkins/.kube/config - will repair!
	* I0310 19:53:34.495869    3088 lock.go:36] WriteFile acquiring C:\Users\jenkins/.kube/config: {Name:mk815881434fcb75cb1a8bc7f2c24cf067660bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 19:53:34.511090    3088 kapi.go:59] client config for multinode-20210310194323-6496: &rest.Config{Host:"https://127.0.0.1:55051", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins\\.minikube\\profiles\\multinode-20210310194323-6496/client.crt", KeyFile:"C:\\Users\\jenkins\\.minikube\\profiles\\multinode-20210310194323-6496/client.key", CAFile:"C:\\Users\\jenkins\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2611020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)}
	* I0310 19:53:34.537023    3088 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	* I0310 19:53:34.565220    3088 api_server.go:146] Checking apiserver status ...
	* I0310 19:53:34.581022    3088 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* W0310 19:53:34.620120    3088 api_server.go:150] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* I0310 19:53:34.620120    3088 kubeadm.go:573] needs reconfigure: apiserver in state Stopped
	* I0310 19:53:34.620120    3088 kubeadm.go:1042] stopping kube-system containers ...
	* I0310 19:53:34.629216    3088 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format=
	* I0310 19:53:34.739369    3088 command_runner.go:124] > 7ce422d22c30
	* I0310 19:53:34.739369    3088 command_runner.go:124] > cf643b2bb13a
	* I0310 19:53:34.739369    3088 command_runner.go:124] > 8e25ad7ac878
	* I0310 19:53:34.739369    3088 command_runner.go:124] > b6d8885bf62c
	* I0310 19:53:34.739369    3088 command_runner.go:124] > 5dd905038955
	* I0310 19:53:34.739369    3088 command_runner.go:124] > b285d1fca513
	* I0310 19:53:34.739369    3088 command_runner.go:124] > af5652213b63
	* I0310 19:53:34.739369    3088 command_runner.go:124] > 696c45777987
	* I0310 19:53:34.739369    3088 command_runner.go:124] > c9e9e409c8d1
	* I0310 19:53:34.739552    3088 command_runner.go:124] > 90d50f811eb4
	* I0310 19:53:34.739552    3088 command_runner.go:124] > 11af52e50d91
	* I0310 19:53:34.739552    3088 command_runner.go:124] > 5e3898b62288
	* I0310 19:53:34.739552    3088 command_runner.go:124] > fcdd27401671
	* I0310 19:53:34.739552    3088 command_runner.go:124] > b19167915dae
	* I0310 19:53:34.739552    3088 command_runner.go:124] > 904c19c6b486
	* I0310 19:53:34.739552    3088 command_runner.go:124] > 62c13f20a591
	* I0310 19:53:34.739552    3088 docker.go:261] Stopping containers: [7ce422d22c30 cf643b2bb13a 8e25ad7ac878 b6d8885bf62c 5dd905038955 b285d1fca513 af5652213b63 696c45777987 c9e9e409c8d1 90d50f811eb4 11af52e50d91 5e3898b62288 fcdd27401671 b19167915dae 904c19c6b486 62c13f20a591]
	* I0310 19:53:34.749078    3088 ssh_runner.go:149] Run: docker stop 7ce422d22c30 cf643b2bb13a 8e25ad7ac878 b6d8885bf62c 5dd905038955 b285d1fca513 af5652213b63 696c45777987 c9e9e409c8d1 90d50f811eb4 11af52e50d91 5e3898b62288 fcdd27401671 b19167915dae 904c19c6b486 62c13f20a591
	* I0310 19:53:34.854281    3088 command_runner.go:124] > 7ce422d22c30
	* I0310 19:53:34.855189    3088 command_runner.go:124] > cf643b2bb13a
	* I0310 19:53:34.855189    3088 command_runner.go:124] > 8e25ad7ac878
	* I0310 19:53:34.858489    3088 command_runner.go:124] > b6d8885bf62c
	* I0310 19:53:34.858489    3088 command_runner.go:124] > 5dd905038955
	* I0310 19:53:34.858489    3088 command_runner.go:124] > b285d1fca513
	* I0310 19:53:34.858489    3088 command_runner.go:124] > af5652213b63
	* I0310 19:53:34.858489    3088 command_runner.go:124] > 696c45777987
	* I0310 19:53:34.858489    3088 command_runner.go:124] > c9e9e409c8d1
	* I0310 19:53:34.858489    3088 command_runner.go:124] > 90d50f811eb4
	* I0310 19:53:34.858489    3088 command_runner.go:124] > 11af52e50d91
	* I0310 19:53:34.858489    3088 command_runner.go:124] > 5e3898b62288
	* I0310 19:53:34.858489    3088 command_runner.go:124] > fcdd27401671
	* I0310 19:53:34.858489    3088 command_runner.go:124] > b19167915dae
	* I0310 19:53:34.859180    3088 command_runner.go:124] > 904c19c6b486
	* I0310 19:53:34.859180    3088 command_runner.go:124] > 62c13f20a591
	* I0310 19:53:34.881991    3088 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	* I0310 19:53:34.929292    3088 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	* I0310 19:53:34.957088    3088 command_runner.go:124] > -rw------- 1 root root 5611 Mar 10 19:45 /etc/kubernetes/admin.conf
	* I0310 19:53:34.957088    3088 command_runner.go:124] > -rw------- 1 root root 5629 Mar 10 19:45 /etc/kubernetes/controller-manager.conf
	* I0310 19:53:34.957088    3088 command_runner.go:124] > -rw------- 1 root root 2055 Mar 10 19:45 /etc/kubernetes/kubelet.conf
	* I0310 19:53:34.957088    3088 command_runner.go:124] > -rw------- 1 root root 5581 Mar 10 19:45 /etc/kubernetes/scheduler.conf
	* I0310 19:53:34.957088    3088 kubeadm.go:153] found existing configuration files:
	* -rw------- 1 root root 5611 Mar 10 19:45 /etc/kubernetes/admin.conf
	* -rw------- 1 root root 5629 Mar 10 19:45 /etc/kubernetes/controller-manager.conf
	* -rw------- 1 root root 2055 Mar 10 19:45 /etc/kubernetes/kubelet.conf
	* -rw------- 1 root root 5581 Mar 10 19:45 /etc/kubernetes/scheduler.conf
	* 
	* I0310 19:53:34.970647    3088 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	* I0310 19:53:34.995392    3088 command_runner.go:124] >     server: https://control-plane.minikube.internal:8443
	* I0310 19:53:35.007180    3088 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	* I0310 19:53:35.036298    3088 command_runner.go:124] >     server: https://control-plane.minikube.internal:8443
	* I0310 19:53:35.046407    3088 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	* I0310 19:53:35.074094    3088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* I0310 19:53:35.091021    3088 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	* I0310 19:53:35.133408    3088 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	* I0310 19:53:35.162089    3088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* I0310 19:53:35.179390    3088 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	* I0310 19:53:35.222728    3088 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	* I0310 19:53:35.245097    3088 kubeadm.go:670] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	* I0310 19:53:35.246321    3088 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	* I0310 19:53:35.710539    3088 command_runner.go:124] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	* I0310 19:53:35.710539    3088 command_runner.go:124] > [certs] Using existing ca certificate authority
	* I0310 19:53:35.710539    3088 command_runner.go:124] > [certs] Using existing apiserver certificate and key on disk
	* I0310 19:53:35.710539    3088 command_runner.go:124] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	* I0310 19:53:35.710539    3088 command_runner.go:124] > [certs] Using existing front-proxy-ca certificate authority
	* I0310 19:53:35.711067    3088 command_runner.go:124] > [certs] Using existing front-proxy-client certificate and key on disk
	* I0310 19:53:35.711067    3088 command_runner.go:124] > [certs] Using existing etcd/ca certificate authority
	* I0310 19:53:35.711067    3088 command_runner.go:124] > [certs] Using existing etcd/server certificate and key on disk
	* I0310 19:53:35.711067    3088 command_runner.go:124] > [certs] Using existing etcd/peer certificate and key on disk
	* I0310 19:53:35.711067    3088 command_runner.go:124] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	* I0310 19:53:35.711067    3088 command_runner.go:124] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	* I0310 19:53:35.711067    3088 command_runner.go:124] > [certs] Using the existing "sa" key
	* I0310 19:53:35.711067    3088 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	* I0310 19:53:36.073538    3088 command_runner.go:124] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	* I0310 19:53:36.687590    3088 command_runner.go:124] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	* I0310 19:53:37.077224    3088 command_runner.go:124] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	* I0310 19:53:37.602733    3088 command_runner.go:124] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	* I0310 19:53:37.856428    3088 command_runner.go:124] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	* I0310 19:53:37.867781    3088 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.1567171s)
	* I0310 19:53:37.868038    3088 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	* I0310 19:53:38.434467    3088 command_runner.go:124] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	* I0310 19:53:38.435215    3088 command_runner.go:124] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	* I0310 19:53:38.435215    3088 command_runner.go:124] > [kubelet-start] Starting the kubelet
	* I0310 19:53:38.435215    3088 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	* I0310 19:53:39.073688    3088 command_runner.go:124] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	* I0310 19:53:39.074415    3088 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	* I0310 19:53:39.134602    3088 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	* I0310 19:53:39.140110    3088 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	* I0310 19:53:39.155561    3088 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	* I0310 19:53:39.846464    3088 command_runner.go:124] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	* I0310 19:53:39.871172    3088 kubeadm.go:687] waiting for restarted kubelet to initialise ...
	* I0310 19:53:39.881279    3088 retry.go:31] will retry after 276.165072ms: Get "https://127.0.0.1:55051/api/v1/namespaces/kube-system/pods": EOF
	* I0310 19:53:40.164207    3088 retry.go:31] will retry after 540.190908ms: Get "https://127.0.0.1:55051/api/v1/namespaces/kube-system/pods": EOF
	* I0310 19:53:40.712965    3088 retry.go:31] will retry after 655.06503ms: Get "https://127.0.0.1:55051/api/v1/namespaces/kube-system/pods": EOF
	* I0310 19:53:41.375995    3088 retry.go:31] will retry after 791.196345ms: Get "https://127.0.0.1:55051/api/v1/namespaces/kube-system/pods": EOF
	* I0310 19:53:42.178996    3088 retry.go:31] will retry after 1.170244332s: Get "https://127.0.0.1:55051/api/v1/namespaces/kube-system/pods": EOF
	* I0310 19:53:43.365071    3088 retry.go:31] will retry after 2.253109428s: Get "https://127.0.0.1:55051/api/v1/namespaces/kube-system/pods": EOF
	* I0310 19:53:45.627121    3088 retry.go:31] will retry after 1.610739793s: Get "https://127.0.0.1:55051/api/v1/namespaces/kube-system/pods": EOF
	* I0310 19:53:57.369700    3088 retry.go:31] will retry after 2.804311738s: kubelet not initialised
	* I0310 19:54:00.199422    3088 retry.go:31] will retry after 3.824918958s: kubelet not initialised
	* I0310 19:54:04.060727    3088 retry.go:31] will retry after 7.69743562s: kubelet not initialised
	* I0310 19:54:11.798821    3088 retry.go:31] will retry after 14.635568968s: kubelet not initialised
	* I0310 19:54:26.478511    3088 retry.go:31] will retry after 28.406662371s: kubelet not initialised
	* I0310 19:54:54.917365    3088 kubeadm.go:704] kubelet initialised
	* I0310 19:54:54.917658    3088 kubeadm.go:705] duration metric: took 1m15.046304s waiting for restarted kubelet to initialise ...
	* I0310 19:54:54.917658    3088 pod_ready.go:36] extra waiting for kube-system core pods [kube-dns etcd kube-apiserver kube-controller-manager kube-proxy kube-scheduler] to be Ready ...
	* I0310 19:54:54.917658    3088 pod_ready.go:59] waiting 4m0s for pod with "kube-dns" label in "kube-system" namespace to be Ready ...
	* I0310 19:54:54.949377    3088 pod_ready.go:97] pod "coredns-74ff55c5b-jq4n9" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 19:54:13 +0000 GMT Reason: Message:}
	* I0310 19:54:54.949377    3088 pod_ready.go:62] duration metric: took 31.7186ms to run WaitForPodReadyByLabel for pod with "kube-dns" label in "kube-system" namespace ...
	* I0310 19:54:54.949377    3088 pod_ready.go:59] waiting 4m0s for pod with "etcd" label in "kube-system" namespace to be Ready ...
	* I0310 19:54:54.986628    3088 pod_ready.go:97] pod "etcd-multinode-20210310194323-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 19:54:07 +0000 GMT Reason: Message:}
	* I0310 19:54:54.986628    3088 pod_ready.go:62] duration metric: took 37.2508ms to run WaitForPodReadyByLabel for pod with "etcd" label in "kube-system" namespace ...
	* I0310 19:54:54.986628    3088 pod_ready.go:59] waiting 4m0s for pod with "kube-apiserver" label in "kube-system" namespace to be Ready ...
	* I0310 19:54:55.015085    3088 pod_ready.go:97] pod "kube-apiserver-multinode-20210310194323-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 19:54:08 +0000 GMT Reason: Message:}
	* I0310 19:54:55.015085    3088 pod_ready.go:62] duration metric: took 28.4576ms to run WaitForPodReadyByLabel for pod with "kube-apiserver" label in "kube-system" namespace ...
	* I0310 19:54:55.015085    3088 pod_ready.go:59] waiting 4m0s for pod with "kube-controller-manager" label in "kube-system" namespace to be Ready ...
	* I0310 19:54:55.046031    3088 pod_ready.go:97] pod "kube-controller-manager-multinode-20210310194323-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 19:54:07 +0000 GMT Reason: Message:}
	* I0310 19:54:55.046031    3088 pod_ready.go:62] duration metric: took 30.9458ms to run WaitForPodReadyByLabel for pod with "kube-controller-manager" label in "kube-system" namespace ...
	* I0310 19:54:55.046031    3088 pod_ready.go:59] waiting 4m0s for pod with "kube-proxy" label in "kube-system" namespace to be Ready ...
	* I0310 19:54:55.074027    3088 pod_ready.go:97] pod "kube-proxy-7rchm" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 19:54:12 +0000 GMT Reason: Message:}
	* I0310 19:54:55.074027    3088 pod_ready.go:62] duration metric: took 27.9962ms to run WaitForPodReadyByLabel for pod with "kube-proxy" label in "kube-system" namespace ...
	* I0310 19:54:55.074027    3088 pod_ready.go:59] waiting 4m0s for pod with "kube-scheduler" label in "kube-system" namespace to be Ready ...
	* I0310 19:54:55.104946    3088 pod_ready.go:97] pod "kube-scheduler-multinode-20210310194323-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 19:47:09 +0000 GMT Reason: Message:}
	* I0310 19:54:55.104946    3088 pod_ready.go:62] duration metric: took 30.919ms to run WaitForPodReadyByLabel for pod with "kube-scheduler" label in "kube-system" namespace ...
	* I0310 19:54:55.104946    3088 pod_ready.go:39] duration metric: took 187.288ms for extra waiting for kube-system core pods to be Ready ...
	* I0310 19:54:55.104946    3088 api_server.go:48] waiting for apiserver process to appear ...
	* I0310 19:54:55.120112    3088 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 19:54:55.190014    3088 command_runner.go:124] > 2555
	* I0310 19:54:55.190278    3088 api_server.go:68] duration metric: took 85.3324ms to wait for apiserver process to appear ...
	* I0310 19:54:55.190278    3088 api_server.go:84] waiting for apiserver healthz status ...
	* I0310 19:54:55.190278    3088 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55051/healthz ...
	* I0310 19:54:55.218950    3088 api_server.go:241] https://127.0.0.1:55051/healthz returned 200:
	* ok
	* I0310 19:54:55.225610    3088 api_server.go:137] control plane version: v1.20.2
	* I0310 19:54:55.225787    3088 api_server.go:127] duration metric: took 35.5091ms to wait for apiserver health ...
	* I0310 19:54:55.226476    3088 cni.go:74] Creating CNI manager for ""
	* I0310 19:54:55.226760    3088 cni.go:136] 2 nodes found, recommending kindnet
	* I0310 19:54:55.230581    3088 out.go:129] * Configuring CNI (Container Networking Interface) ...
	* I0310 19:54:55.242870    3088 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	* I0310 19:54:55.265448    3088 command_runner.go:124] >   File: /opt/cni/bin/portmap
	* I0310 19:54:55.265448    3088 command_runner.go:124] >   Size: 2738488   	Blocks: 5352       IO Block: 4096   regular file
	* I0310 19:54:55.265448    3088 command_runner.go:124] > Device: 72h/114d	Inode: 527034      Links: 1
	* I0310 19:54:55.265448    3088 command_runner.go:124] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	* I0310 19:54:55.265448    3088 command_runner.go:124] > Access: 2021-02-10 15:18:15.000000000 +0000
	* I0310 19:54:55.265448    3088 command_runner.go:124] > Modify: 2021-02-10 15:18:15.000000000 +0000
	* I0310 19:54:55.265448    3088 command_runner.go:124] > Change: 2021-03-01 19:44:53.130616000 +0000
	* I0310 19:54:55.265448    3088 command_runner.go:124] >  Birth: -
	* I0310 19:54:55.265448    3088 cni.go:160] applying CNI manifest using /var/lib/minikube/binaries/v1.20.2/kubectl ...
	* I0310 19:54:55.265868    3088 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2298 bytes)
	* I0310 19:54:55.325572    3088 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	* I0310 19:54:56.005566    3088 command_runner.go:124] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	* I0310 19:54:56.005865    3088 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	* I0310 19:54:56.005865    3088 command_runner.go:124] > serviceaccount/kindnet unchanged
	* I0310 19:54:56.005865    3088 command_runner.go:124] > daemonset.apps/kindnet configured
	* I0310 19:54:56.006149    3088 system_pods.go:41] waiting for kube-system pods to appear ...
	* I0310 19:54:56.035927    3088 system_pods.go:57] 12 kube-system pods found
	* I0310 19:54:56.036074    3088 system_pods.go:59] "coredns-74ff55c5b-jq4n9" [59fcc5d5-1d12-409a-88d8-46674adeb0e7] Running
	* I0310 19:54:56.036074    3088 system_pods.go:59] "etcd-multinode-20210310194323-6496" [7355a92f-158f-4d8e-888d-9fe97a766922] Running
	* I0310 19:54:56.036223    3088 system_pods.go:59] "kindnet-pdlkw" [bdcc23df-7069-4a7a-8cdc-89b12e006bf6] Running
	* I0310 19:54:56.036223    3088 system_pods.go:59] "kindnet-vvk6s" [dba33385-2929-47cf-a14a-869967740392] Running
	* I0310 19:54:56.036223    3088 system_pods.go:59] "kindnet-xn5hd" [41dfeb11-7af6-449b-999c-04fb65d2ba9d] Running
	* I0310 19:54:56.036223    3088 system_pods.go:59] "kube-apiserver-multinode-20210310194323-6496" [9c82174a-7835-4268-832f-b5d33ee4ed77] Running
	* I0310 19:54:56.036223    3088 system_pods.go:59] "kube-controller-manager-multinode-20210310194323-6496" [052eef6a-337b-4476-9681-5695f0e3ee90] Running
	* I0310 19:54:56.036223    3088 system_pods.go:59] "kube-proxy-7rchm" [6247bab9-80ef-438a-806a-0c19ed9c39a2] Running
	* I0310 19:54:56.036223    3088 system_pods.go:59] "kube-proxy-gjbjj" [af273b96-644c-4e71-82d0-b375b373a1df] Running
	* I0310 19:54:56.036223    3088 system_pods.go:59] "kube-proxy-tdzlb" [d613357b-ba23-4106-8b5e-a32483597686] Running
	* I0310 19:54:56.036223    3088 system_pods.go:59] "kube-scheduler-multinode-20210310194323-6496" [adc66c6d-e5b0-4c6b-b548-febdfb7a55fb] Running
	* I0310 19:54:56.036223    3088 system_pods.go:59] "storage-provisioner" [75d9e0a4-c70e-445c-af14-4db9ef305719] Running
	* I0310 19:54:56.036223    3088 system_pods.go:72] duration metric: took 30.0741ms to wait for pod list to return data ...
	* I0310 19:54:56.036366    3088 node_conditions.go:101] verifying NodePressure condition ...
	* I0310 19:54:56.048570    3088 node_conditions.go:121] node storage ephemeral capacity is 65792556Ki
	* I0310 19:54:56.048570    3088 node_conditions.go:122] node cpu capacity is 4
	* I0310 19:54:56.048570    3088 node_conditions.go:121] node storage ephemeral capacity is 65792556Ki
	* I0310 19:54:56.048570    3088 node_conditions.go:122] node cpu capacity is 4
	* I0310 19:54:56.048570    3088 node_conditions.go:104] duration metric: took 12.204ms to run NodePressure ...
	* I0310 19:54:56.048570    3088 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	* I0310 19:54:57.137293    3088 command_runner.go:124] > [addons] Applied essential addon: CoreDNS
	* I0310 19:54:57.137293    3088 command_runner.go:124] > [addons] Applied essential addon: kube-proxy
	* I0310 19:54:57.137293    3088 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.0887239s)
	* I0310 19:54:57.137293    3088 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	* I0310 19:54:57.193600    3088 command_runner.go:124] > -16
	* I0310 19:54:57.193600    3088 ops.go:34] apiserver oom_adj: -16
	* I0310 19:54:57.193600    3088 kubeadm.go:598] restartCluster took 1m23.2479221s
	* I0310 19:54:57.193600    3088 kubeadm.go:387] StartCluster complete in 1m23.4075661s
	* I0310 19:54:57.194410    3088 settings.go:142] acquiring lock: {Name:mk153ab5d002fd4991700e22f3eda9a43ee295f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 19:54:57.194410    3088 settings.go:150] Updating kubeconfig:  C:\Users\jenkins/.kube/config
	* I0310 19:54:57.196797    3088 lock.go:36] WriteFile acquiring C:\Users\jenkins/.kube/config: {Name:mk815881434fcb75cb1a8bc7f2c24cf067660bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 19:54:57.223839    3088 kapi.go:59] client config for multinode-20210310194323-6496: &rest.Config{Host:"https://127.0.0.1:55051", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins\\.minikube\\profiles\\multinode-20210310194323-6496\\client.crt", KeyFile:"C:\\Users\\jenkins\\.minikube\\profiles\\multinode-20210310194323-6496\\client.key", CAFile:"C:\\Users\\jenkins\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2611020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)}
	* I0310 19:54:57.250899    3088 kapi.go:233] deployment "coredns" in namespace "kube-system" and context "multinode-20210310194323-6496" rescaled to 1
	* I0310 19:54:57.251266    3088 start.go:203] Will wait 6m0s for node up to 
	* I0310 19:54:57.251266    3088 addons.go:381] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
	* I0310 19:54:57.251266    3088 addons.go:58] Setting storage-provisioner=true in profile "multinode-20210310194323-6496"
	* I0310 19:54:57.251266    3088 addons.go:134] Setting addon storage-provisioner=true in "multinode-20210310194323-6496"
	* I0310 19:54:57.251266    3088 addons.go:58] Setting default-storageclass=true in profile "multinode-20210310194323-6496"
	* W0310 19:54:57.251266    3088 addons.go:143] addon storage-provisioner should already be in state true
	* I0310 19:54:57.251695    3088 addons.go:284] enableOrDisableStorageClasses default-storageclass=true on "multinode-20210310194323-6496"
	* I0310 19:54:57.259893    3088 out.go:129] * Verifying Kubernetes components...
	* I0310 19:54:57.254150    3088 host.go:66] Checking if "multinode-20210310194323-6496" exists ...
	* I0310 19:54:57.254587    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310191609-6496 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496
	* I0310 19:54:57.254587    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310083645-5040 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040
	* I0310 19:54:57.254587    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106215525-1984 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984
	* I0310 19:54:57.254587    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210123004019-5372 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372
	* I0310 19:54:57.254587    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210309234032-4944 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944
	* I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210224014800-800 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800
	* I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210303214129-4588 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588
	* I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210105233232-2512 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512
	* I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115023213-8464 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464
	* I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107190945-8748 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748
	* I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120175851-7432 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432
	* I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115191024-3516 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516
	* I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210212145109-352 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352
	* I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210213143925-7440 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440
	* I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106002159-6856 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856
	* I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120231122-7024 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024
	* I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210126212539-5172 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172
	* I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210225231842-5736 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736
	* I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210308233820-5396 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396
	* I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210119220838-6552 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552
	* I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304002630-1156 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156
	* I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304184021-4052 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052
	* I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210128021318-232 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232
	* I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120022529-1140 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140
	* I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210220004129-7452 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452
	* I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120214442-10992 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992
	* I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106011107-6492 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492
	* I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219220622-3920 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920
	* I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210306072141-12056 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056
	* I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219145454-9520 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520
	* I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107002220-9088 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088
	* I0310 19:54:57.254713    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210301195830-5700 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700
	* I0310 19:54:57.254881    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210114204234-6692 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692
	* I0310 19:54:57.254881    3088 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210112045103-7160 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160
	* I0310 19:54:57.341091    3088 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	* I0310 19:54:57.351534    3088 cli_runner.go:115] Run: docker container inspect multinode-20210310194323-6496 --format=
	* I0310 19:54:57.353978    3088 cli_runner.go:115] Run: docker container inspect multinode-20210310194323-6496 --format=
	* I0310 19:54:57.465919    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	* I0310 19:54:58.184147    3088 cache.go:93] acquiring lock: {Name:mk413751f23d1919a2f2162501025c6af3a2ad81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:54:58.185705    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 exists
	* I0310 19:54:58.185892    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210106002159-6856" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106002159-6856" took 897.8442ms
	* I0310 19:54:58.185892    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106002159-6856 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 succeeded
	* I0310 19:54:58.209804    3088 cache.go:93] acquiring lock: {Name:mk9829358ec5b615719a34ef2b4c8c5314131bbf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:54:58.210658    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 exists
	* I0310 19:54:58.211539    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210309234032-4944" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210309234032-4944" took 950.7734ms
	* I0310 19:54:58.211539    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210309234032-4944 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 succeeded
	* I0310 19:54:58.211945    3088 cache.go:93] acquiring lock: {Name:mk5de4935501776b790bd29801e913c817cce9cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:54:58.213152    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 exists
	* I0310 19:54:58.213613    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210123004019-5372" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210123004019-5372" took 952.847ms
	* I0310 19:54:58.214073    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210123004019-5372 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 succeeded
	* I0310 19:54:58.263016    3088 cache.go:93] acquiring lock: {Name:mkc9a1c11079e53fedb3439203deb8305be63b2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:54:58.266775    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 exists
	* I0310 19:54:58.267565    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210303214129-4588" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210303214129-4588" took 1.0059309s
	* I0310 19:54:58.267565    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210303214129-4588 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 succeeded
	* I0310 19:54:58.274245    3088 cache.go:93] acquiring lock: {Name:mkd8dd26dee4471c50a16459e3e56a843fbe7183 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:54:58.275359    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 exists
	* I0310 19:54:58.278236    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210120231122-7024" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120231122-7024" took 987.6861ms
	* I0310 19:54:58.278236    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120231122-7024 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 succeeded
	* I0310 19:54:58.287932    3088 cache.go:93] acquiring lock: {Name:mkad0f7b57f74c6c730129cb06800211b2e1dbab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:54:58.288620    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 exists
	* I0310 19:54:58.289427    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210120022529-1140" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120022529-1140" took 989.624ms
	* I0310 19:54:58.289556    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120022529-1140 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 succeeded
	* I0310 19:54:58.316074    3088 cache.go:93] acquiring lock: {Name:mkf96894dc732adcd1c856f98a56d65b2646f03e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:54:58.317201    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 exists
	* I0310 19:54:58.317657    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210115191024-3516" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210115191024-3516" took 1.046645s
	* I0310 19:54:58.317657    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210115191024-3516 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 succeeded
	* I0310 19:54:58.321621    3088 cache.go:93] acquiring lock: {Name:mkab31196e3bf71b9c1e6a1e38e57ec6fb030bbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:54:58.322919    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 exists
	* I0310 19:54:58.323421    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210220004129-7452" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210220004129-7452" took 1.0199259s
	* I0310 19:54:58.323421    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210220004129-7452 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 succeeded
	* I0310 19:54:58.333568    3088 cache.go:93] acquiring lock: {Name:mk5aaf725ee95074b60d5acdb56999da11d0d967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:54:58.334206    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 exists
	* I0310 19:54:58.334206    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210213143925-7440" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210213143925-7440" took 1.0510521s
	* I0310 19:54:58.334206    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210213143925-7440 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 succeeded
	* I0310 19:54:58.378444    3088 cache.go:93] acquiring lock: {Name:mk74beba772a17b6c0792b37e1f3c84b8ae19a48 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:54:58.378898    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 exists
	* I0310 19:54:58.380483    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210119220838-6552" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210119220838-6552" took 1.0856281s
	* I0310 19:54:58.380888    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210119220838-6552 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 succeeded
	* I0310 19:54:58.388250    3088 cache.go:93] acquiring lock: {Name:mk6cdb668632330066d74bea74662e26e6c7633f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:54:58.389329    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 exists
	* I0310 19:54:58.390200    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210106215525-1984" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106215525-1984" took 1.1294341s
	* I0310 19:54:58.390200    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106215525-1984 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 succeeded
	* I0310 19:54:58.408071    3088 cache.go:93] acquiring lock: {Name:mk6e311fb193a5d30b249afa7255673dd7fc56b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:54:58.408334    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 exists
	* I0310 19:54:58.408794    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210107002220-9088" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210107002220-9088" took 1.0902696s
	* I0310 19:54:58.408995    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210107002220-9088 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 succeeded
	* I0310 19:54:58.434499    3088 cache.go:93] acquiring lock: {Name:mk5d79a216b121a22277fa476959e69d0268a006 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:54:58.435399    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 exists
	* I0310 19:54:58.435600    3088 cache.go:93] acquiring lock: {Name:mkf6f90f079186654799fde8101b48612aa6f339 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:54:58.436020    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210224014800-800" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210224014800-800" took 1.1743736s
	* I0310 19:54:58.436020    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210224014800-800 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 succeeded
	* I0310 19:54:58.436392    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 exists
	* I0310 19:54:58.436615    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210212145109-352" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210212145109-352" took 1.1582992s
	* I0310 19:54:58.436615    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210212145109-352 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 succeeded
	* I0310 19:54:58.444160    3088 cache.go:93] acquiring lock: {Name:mk3f9eb5a6922e3da2b5e642fe1460b5c7a33453 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:54:58.444160    3088 cache.go:93] acquiring lock: {Name:mk30e0addf8d941e729fce2e9e6e58f4831fa9bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:54:58.444335    3088 cache.go:93] acquiring lock: {Name:mkd8c6f272dd5cb91af2d272705820baa75c5410 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:54:58.444706    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 exists
	* I0310 19:54:58.444706    3088 cache.go:93] acquiring lock: {Name:mkcc9db267470950a8bd1fd66660e4d7ce7fb11a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:54:58.444929    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 exists
	* I0310 19:54:58.444706    3088 cache.go:93] acquiring lock: {Name:mkbc5485bf0e792523a58cf470a7622695547966 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:54:58.445146    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 exists
	* I0310 19:54:58.445423    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210120214442-10992" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120214442-10992" took 1.1412791s
	* I0310 19:54:58.445423    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120214442-10992 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 succeeded
	* I0310 19:54:58.445423    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 exists
	* I0310 19:54:58.445618    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210115023213-8464" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210115023213-8464" took 1.1827136s
	* I0310 19:54:58.445618    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210115023213-8464 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 succeeded
	* I0310 19:54:58.445928    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 exists
	* I0310 19:54:58.445928    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210120175851-7432" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120175851-7432" took 1.179353s
	* I0310 19:54:58.445928    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120175851-7432 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 succeeded
	* I0310 19:54:58.445928    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210107190945-8748" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210107190945-8748" took 1.1814267s
	* I0310 19:54:58.445928    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210107190945-8748 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 succeeded
	* I0310 19:54:58.446547    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210304184021-4052" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210304184021-4052" took 1.1485712s
	* I0310 19:54:58.446547    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210304184021-4052 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 succeeded
	* I0310 19:54:58.459102    3088 cache.go:93] acquiring lock: {Name:mk0c64ba734a0cdbeae55b08bb0b1b6723a680c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:54:58.459719    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 exists
	* I0310 19:54:58.460203    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210310083645-5040" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210310083645-5040" took 1.199717s
	* I0310 19:54:58.460203    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210310083645-5040 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 succeeded
	* I0310 19:54:58.460203    3088 cache.go:93] acquiring lock: {Name:mkf74fc1bdd437dc31195924ffc024252ed6282c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:54:58.460674    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 exists
	* I0310 19:54:58.460947    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210304002630-1156" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210304002630-1156" took 1.1643722s
	* I0310 19:54:58.460947    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210304002630-1156 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 succeeded
	* I0310 19:54:58.464130    3088 cache.go:93] acquiring lock: {Name:mk67b81c694fa10d152b7bddece57d430edf9ebf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:54:58.464717    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 exists
	* I0310 19:54:58.464717    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210308233820-5396" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210308233820-5396" took 1.1705676s
	* I0310 19:54:58.465186    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210308233820-5396 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 succeeded
	* I0310 19:54:58.522095    3088 cache.go:93] acquiring lock: {Name:mk84b2a6095b735cf889c519b5874f080b2e195a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:54:58.523533    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 exists
	* I0310 19:54:58.524089    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210219220622-3920" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210219220622-3920" took 1.2157861s
	* I0310 19:54:58.524089    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210219220622-3920 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 succeeded
	* I0310 19:54:58.538440    3088 cache.go:93] acquiring lock: {Name:mka2d29141752ca0c15ce625b99d3e259a454634 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:54:58.538440    3088 cache.go:93] acquiring lock: {Name:mkfbc537176e4a7054a8ff78a35c4c45ad4889d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:54:58.538959    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 exists
	* I0310 19:54:58.539133    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 exists
	* I0310 19:54:58.539444    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210105233232-2512" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210105233232-2512" took 1.2778104s
	* I0310 19:54:58.539444    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210105233232-2512 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 succeeded
	* I0310 19:54:58.539444    3088 cache.go:93] acquiring lock: {Name:mk634154e9c95d6e5b156154f097cbabdedf9f3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:54:58.539728    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210310191609-6496" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210310191609-6496" took 1.2792419s
	* I0310 19:54:58.539728    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210310191609-6496 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 succeeded
	* I0310 19:54:58.539967    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 exists
	* I0310 19:54:58.540484    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210301195830-5700" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210301195830-5700" took 1.2215128s
	* I0310 19:54:58.540484    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210301195830-5700 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 succeeded
	* I0310 19:54:58.562632    3088 cache.go:93] acquiring lock: {Name:mkb0cb73f942a657cd3f168830d30cb3598567a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:54:58.563553    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 exists
	* I0310 19:54:58.563553    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210306072141-12056" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210306072141-12056" took 1.2508073s
	* I0310 19:54:58.563553    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210306072141-12056 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 succeeded
	* I0310 19:54:58.564209    3088 cache.go:93] acquiring lock: {Name:mk1b277a131d0149dc1f34c6a5df09591c284c3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:54:58.564705    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 exists
	* I0310 19:54:58.565767    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210128021318-232" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210128021318-232" took 1.2661886s
	* I0310 19:54:58.565767    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210128021318-232 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 succeeded
	* I0310 19:54:58.569885    3088 cache.go:93] acquiring lock: {Name:mkfe8ccab311cf6d2666a7508a8e979857b9770b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:54:58.570056    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 exists
	* I0310 19:54:58.570056    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210219145454-9520" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210219145454-9520" took 1.2542729s
	* I0310 19:54:58.570056    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210219145454-9520 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 succeeded
	* I0310 19:54:58.574084    3088 cache.go:93] acquiring lock: {Name:mk3b31b5d9c66e58bae5a84d594af5a71c06fef6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:54:58.574956    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 exists
	* I0310 19:54:58.574956    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210114204234-6692" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210114204234-6692" took 1.2541845s
	* I0310 19:54:58.574956    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210114204234-6692 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 succeeded
	* I0310 19:54:58.575739    3088 cache.go:93] acquiring lock: {Name:mk5795abf13cc8b7192a417aee0e32dee2b0467c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:54:58.575999    3088 cache.go:93] acquiring lock: {Name:mkb552f0ca2d9ea9965feba56885295e4020632a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:54:58.576177    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 exists
	* I0310 19:54:58.576555    3088 cache.go:93] acquiring lock: {Name:mk6a939d4adc5b1a82c643cd3a34748a52c3e47d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:54:58.576802    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 exists
	* I0310 19:54:58.576802    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210126212539-5172" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210126212539-5172" took 1.28451s
	* I0310 19:54:58.576802    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210126212539-5172 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 succeeded
	* I0310 19:54:58.576802    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210106011107-6492" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106011107-6492" took 1.2710939s
	* I0310 19:54:58.577031    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106011107-6492 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 succeeded
	* I0310 19:54:58.577031    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 exists
	* I0310 19:54:58.577893    3088 cache.go:93] acquiring lock: {Name:mk17b3617b8bc7c68f0fe3347037485ee44000e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 19:54:58.577893    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210112045103-7160" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210112045103-7160" took 1.2561315s
	* I0310 19:54:58.578110    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210112045103-7160 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 succeeded
	* I0310 19:54:58.578353    3088 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 exists
	* I0310 19:54:58.578966    3088 cache.go:82] cache image "minikube-local-cache-test:functional-20210225231842-5736" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210225231842-5736" took 1.2859624s
	* I0310 19:54:58.579182    3088 cache.go:66] save to tar file minikube-local-cache-test:functional-20210225231842-5736 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 succeeded
	* I0310 19:54:58.579182    3088 cache.go:73] Successfully saved all images to host disk.
	* I0310 19:54:58.609758    3088 cli_runner.go:115] Run: docker container inspect multinode-20210310194323-6496 --format=
	* I0310 19:54:58.858754    3088 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20210310194323-6496: (1.392837s)
	* I0310 19:54:58.859742    3088 pod_ready.go:36] extra waiting for kube-system core pods [kube-dns etcd kube-apiserver kube-controller-manager kube-proxy kube-scheduler] to be Ready ...
	* I0310 19:54:58.860283    3088 pod_ready.go:59] waiting 6m0s for pod with "kube-dns" label in "kube-system" namespace to be Ready ...
	* I0310 19:54:58.873728    3088 cli_runner.go:168] Completed: docker container inspect multinode-20210310194323-6496 --format=: (1.521417s)
	* I0310 19:54:58.877139    3088 kapi.go:59] client config for multinode-20210310194323-6496: &rest.Config{Host:"https://127.0.0.1:55051", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins\\.minikube\\profiles\\multinode-20210310194323-6496\\client.crt", KeyFile:"C:\\Users\\jenkins\\.minikube\\profiles\\multinode-20210310194323-6496\\client.key", CAFile:"C:\\Users\\jenkins\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2611020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)}
	* I0310 19:54:58.906110    3088 cli_runner.go:168] Completed: docker container inspect multinode-20210310194323-6496 --format=: (1.5517271s)
	* I0310 19:54:58.909116    3088 out.go:129]   - Using image gcr.io/k8s-minikube/storage-provisioner:v4
	* I0310 19:54:58.909116    3088 addons.go:253] installing /etc/kubernetes/addons/storage-provisioner.yaml
	* I0310 19:54:58.909116    3088 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	* I0310 19:54:58.917128    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	* I0310 19:54:58.930129    3088 pod_ready.go:97] pod "coredns-74ff55c5b-jq4n9" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 19:54:13 +0000 GMT Reason: Message:}
	* I0310 19:54:58.930129    3088 pod_ready.go:62] duration metric: took 69.8464ms to run WaitForPodReadyByLabel for pod with "kube-dns" label in "kube-system" namespace ...
	* I0310 19:54:58.930129    3088 pod_ready.go:59] waiting 6m0s for pod with "etcd" label in "kube-system" namespace to be Ready ...
	* I0310 19:54:58.966343    3088 addons.go:134] Setting addon default-storageclass=true in "multinode-20210310194323-6496"
	* W0310 19:54:58.966343    3088 addons.go:143] addon default-storageclass should already be in state true
	* I0310 19:54:58.966978    3088 host.go:66] Checking if "multinode-20210310194323-6496" exists ...
	* I0310 19:54:58.986781    3088 cli_runner.go:115] Run: docker container inspect multinode-20210310194323-6496 --format=
	* I0310 19:54:58.995788    3088 pod_ready.go:97] pod "etcd-multinode-20210310194323-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 19:54:07 +0000 GMT Reason: Message:}
	* I0310 19:54:58.995788    3088 pod_ready.go:62] duration metric: took 65.6588ms to run WaitForPodReadyByLabel for pod with "etcd" label in "kube-system" namespace ...
	* I0310 19:54:58.995788    3088 pod_ready.go:59] waiting 6m0s for pod with "kube-apiserver" label in "kube-system" namespace to be Ready ...
	* I0310 19:54:59.058686    3088 pod_ready.go:97] pod "kube-apiserver-multinode-20210310194323-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 19:54:08 +0000 GMT Reason: Message:}
	* I0310 19:54:59.059029    3088 pod_ready.go:62] duration metric: took 63.241ms to run WaitForPodReadyByLabel for pod with "kube-apiserver" label in "kube-system" namespace ...
	* I0310 19:54:59.059029    3088 pod_ready.go:59] waiting 6m0s for pod with "kube-controller-manager" label in "kube-system" namespace to be Ready ...
	* I0310 19:54:59.111541    3088 pod_ready.go:97] pod "kube-controller-manager-multinode-20210310194323-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 19:54:07 +0000 GMT Reason: Message:}
	* I0310 19:54:59.111985    3088 pod_ready.go:62] duration metric: took 52.956ms to run WaitForPodReadyByLabel for pod with "kube-controller-manager" label in "kube-system" namespace ...
	* I0310 19:54:59.111985    3088 pod_ready.go:59] waiting 6m0s for pod with "kube-proxy" label in "kube-system" namespace to be Ready ...
	* I0310 19:54:59.155983    3088 pod_ready.go:97] pod "kube-proxy-7rchm" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 19:54:12 +0000 GMT Reason: Message:}
	* I0310 19:54:59.155983    3088 pod_ready.go:62] duration metric: took 43.9981ms to run WaitForPodReadyByLabel for pod with "kube-proxy" label in "kube-system" namespace ...
	* I0310 19:54:59.155983    3088 pod_ready.go:59] waiting 6m0s for pod with "kube-scheduler" label in "kube-system" namespace to be Ready ...
	* I0310 19:54:59.196758    3088 pod_ready.go:97] pod "kube-scheduler-multinode-20210310194323-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 19:47:09 +0000 GMT Reason: Message:}
	* I0310 19:54:59.196758    3088 pod_ready.go:62] duration metric: took 40.7755ms to run WaitForPodReadyByLabel for pod with "kube-scheduler" label in "kube-system" namespace ...
	* I0310 19:54:59.196758    3088 pod_ready.go:39] duration metric: took 336.4758ms for extra waiting for kube-system core pods to be Ready ...
	* I0310 19:54:59.196758    3088 api_server.go:48] waiting for apiserver process to appear ...
	* I0310 19:54:59.207527    3088 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 19:54:59.243873    3088 ssh_runner.go:149] Run: docker images --format :
	* I0310 19:54:59.249880    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	* I0310 19:54:59.338927    3088 command_runner.go:124] > 2555
	* I0310 19:54:59.338927    3088 api_server.go:68] duration metric: took 2.0876643s to wait for apiserver process to appear ...
	* I0310 19:54:59.338927    3088 api_server.go:84] waiting for apiserver healthz status ...
	* I0310 19:54:59.338927    3088 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55051/healthz ...
	* I0310 19:54:59.404031    3088 api_server.go:241] https://127.0.0.1:55051/healthz returned 200:
	* ok
	* I0310 19:54:59.417272    3088 api_server.go:137] control plane version: v1.20.2
	* I0310 19:54:59.417272    3088 api_server.go:127] duration metric: took 78.3455ms to wait for apiserver health ...
	* I0310 19:54:59.417272    3088 system_pods.go:41] waiting for kube-system pods to appear ...
	* I0310 19:54:59.461955    3088 system_pods.go:57] 12 kube-system pods found
	* I0310 19:54:59.466340    3088 system_pods.go:59] "coredns-74ff55c5b-jq4n9" [59fcc5d5-1d12-409a-88d8-46674adeb0e7] Running
	* I0310 19:54:59.466469    3088 system_pods.go:59] "etcd-multinode-20210310194323-6496" [7355a92f-158f-4d8e-888d-9fe97a766922] Running
	* I0310 19:54:59.466469    3088 system_pods.go:59] "kindnet-pdlkw" [bdcc23df-7069-4a7a-8cdc-89b12e006bf6] Running
	* I0310 19:54:59.466469    3088 system_pods.go:59] "kindnet-vvk6s" [dba33385-2929-47cf-a14a-869967740392] Running
	* I0310 19:54:59.466469    3088 system_pods.go:59] "kindnet-xn5hd" [41dfeb11-7af6-449b-999c-04fb65d2ba9d] Running
	* I0310 19:54:59.466469    3088 system_pods.go:59] "kube-apiserver-multinode-20210310194323-6496" [9c82174a-7835-4268-832f-b5d33ee4ed77] Running
	* I0310 19:54:59.466469    3088 system_pods.go:59] "kube-controller-manager-multinode-20210310194323-6496" [052eef6a-337b-4476-9681-5695f0e3ee90] Running
	* I0310 19:54:59.466469    3088 system_pods.go:59] "kube-proxy-7rchm" [6247bab9-80ef-438a-806a-0c19ed9c39a2] Running
	* I0310 19:54:59.466469    3088 system_pods.go:59] "kube-proxy-gjbjj" [af273b96-644c-4e71-82d0-b375b373a1df] Running
	* I0310 19:54:59.466469    3088 system_pods.go:59] "kube-proxy-tdzlb" [d613357b-ba23-4106-8b5e-a32483597686] Running
	* I0310 19:54:59.466469    3088 system_pods.go:59] "kube-scheduler-multinode-20210310194323-6496" [adc66c6d-e5b0-4c6b-b548-febdfb7a55fb] Running
	* I0310 19:54:59.466469    3088 system_pods.go:59] "storage-provisioner" [75d9e0a4-c70e-445c-af14-4db9ef305719] Running
	* I0310 19:54:59.466469    3088 system_pods.go:72] duration metric: took 49.197ms to wait for pod list to return data ...
	* I0310 19:54:59.466469    3088 default_sa.go:33] waiting for default service account to be created ...
	* I0310 19:54:59.506417    3088 default_sa.go:44] found service account: "default"
	* I0310 19:54:59.506417    3088 default_sa.go:54] duration metric: took 39.948ms for default service account to be created ...
	* I0310 19:54:59.506704    3088 system_pods.go:114] waiting for k8s-apps to be running ...
	* I0310 19:54:59.539551    3088 system_pods.go:84] 12 kube-system pods found
	* I0310 19:54:59.539780    3088 system_pods.go:87] "coredns-74ff55c5b-jq4n9" [59fcc5d5-1d12-409a-88d8-46674adeb0e7] Running
	* I0310 19:54:59.539780    3088 system_pods.go:87] "etcd-multinode-20210310194323-6496" [7355a92f-158f-4d8e-888d-9fe97a766922] Running
	* I0310 19:54:59.539780    3088 system_pods.go:87] "kindnet-pdlkw" [bdcc23df-7069-4a7a-8cdc-89b12e006bf6] Running
	* I0310 19:54:59.539780    3088 system_pods.go:87] "kindnet-vvk6s" [dba33385-2929-47cf-a14a-869967740392] Running
	* I0310 19:54:59.539780    3088 system_pods.go:87] "kindnet-xn5hd" [41dfeb11-7af6-449b-999c-04fb65d2ba9d] Running
	* I0310 19:54:59.539780    3088 system_pods.go:87] "kube-apiserver-multinode-20210310194323-6496" [9c82174a-7835-4268-832f-b5d33ee4ed77] Running
	* I0310 19:54:59.539780    3088 system_pods.go:87] "kube-controller-manager-multinode-20210310194323-6496" [052eef6a-337b-4476-9681-5695f0e3ee90] Running
	* I0310 19:54:59.539780    3088 system_pods.go:87] "kube-proxy-7rchm" [6247bab9-80ef-438a-806a-0c19ed9c39a2] Running
	* I0310 19:54:59.539780    3088 system_pods.go:87] "kube-proxy-gjbjj" [af273b96-644c-4e71-82d0-b375b373a1df] Running
	* I0310 19:54:59.539780    3088 system_pods.go:87] "kube-proxy-tdzlb" [d613357b-ba23-4106-8b5e-a32483597686] Running
	* I0310 19:54:59.539780    3088 system_pods.go:87] "kube-scheduler-multinode-20210310194323-6496" [adc66c6d-e5b0-4c6b-b548-febdfb7a55fb] Running
	* I0310 19:54:59.539780    3088 system_pods.go:87] "storage-provisioner" [75d9e0a4-c70e-445c-af14-4db9ef305719] Running
	* I0310 19:54:59.539780    3088 system_pods.go:124] duration metric: took 33.0755ms to wait for k8s-apps to be running ...
	* I0310 19:54:59.539780    3088 system_svc.go:44] waiting for kubelet service to be running ....
	* I0310 19:54:59.560075    3088 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	* I0310 19:54:59.568653    3088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55054 SSHKeyPath:C:\Users\jenkins\.minikube\machines\multinode-20210310194323-6496\id_rsa Username:docker}
	* I0310 19:54:59.608964    3088 addons.go:253] installing /etc/kubernetes/addons/storageclass.yaml
	* I0310 19:54:59.608964    3088 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	* I0310 19:54:59.620954    3088 system_svc.go:56] duration metric: took 81.1748ms WaitForService to wait for kubelet.
	* I0310 19:54:59.620954    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	* I0310 19:54:59.620954    3088 node_ready.go:35] waiting 6m0s for node status to be ready ...
	* I0310 19:54:59.642742    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:54:59.854178    3088 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	* I0310 19:54:59.908586    3088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55054 SSHKeyPath:C:\Users\jenkins\.minikube\machines\multinode-20210310194323-6496\id_rsa Username:docker}
	* I0310 19:55:00.169046    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:55:00.203404    3088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55054 SSHKeyPath:C:\Users\jenkins\.minikube\machines\multinode-20210310194323-6496\id_rsa Username:docker}
	* I0310 19:55:00.650896    3088 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	* I0310 19:55:00.660519    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:55:00.877115    3088 command_runner.go:124] > serviceaccount/storage-provisioner unchanged
	* I0310 19:55:00.877299    3088 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	* I0310 19:55:00.877299    3088 command_runner.go:124] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	* I0310 19:55:00.877299    3088 command_runner.go:124] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	* I0310 19:55:00.877299    3088 command_runner.go:124] > endpoints/k8s.io-minikube-hostpath unchanged
	* I0310 19:55:00.877299    3088 command_runner.go:124] > pod/storage-provisioner configured
	* I0310 19:55:00.877447    3088 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.0232704s)
	* I0310 19:55:00.877588    3088 command_runner.go:124] > kindest/kindnetd:v20210220-5b7e6d01
	* I0310 19:55:00.878129    3088 command_runner.go:124] > k8s.gcr.io/kube-proxy:v1.20.2
	* I0310 19:55:00.878129    3088 command_runner.go:124] > k8s.gcr.io/kube-controller-manager:v1.20.2
	* I0310 19:55:00.878129    3088 command_runner.go:124] > k8s.gcr.io/kube-apiserver:v1.20.2
	* I0310 19:55:00.878129    3088 command_runner.go:124] > k8s.gcr.io/kube-scheduler:v1.20.2
	* I0310 19:55:00.878129    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210105233232-2512
	* I0310 19:55:00.878129    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210106002159-6856
	* I0310 19:55:00.878129    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210106011107-6492
	* I0310 19:55:00.878297    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210106215525-1984
	* I0310 19:55:00.878297    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210107002220-9088
	* I0310 19:55:00.878407    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210107190945-8748
	* I0310 19:55:00.878407    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210112045103-7160
	* I0310 19:55:00.878407    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210114204234-6692
	* I0310 19:55:00.878407    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210115023213-8464
	* I0310 19:55:00.878407    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210115191024-3516
	* I0310 19:55:00.878407    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210119220838-6552
	* I0310 19:55:00.878407    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210120022529-1140
	* I0310 19:55:00.878407    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210120175851-7432
	* I0310 19:55:00.878407    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210120214442-10992
	* I0310 19:55:00.878563    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210120231122-7024
	* I0310 19:55:00.878563    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210123004019-5372
	* I0310 19:55:00.878563    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210126212539-5172
	* I0310 19:55:00.878694    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210128021318-232
	* I0310 19:55:00.878694    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210212145109-352
	* I0310 19:55:00.878694    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210213143925-7440
	* I0310 19:55:00.878694    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210219145454-9520
	* I0310 19:55:00.878831    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210219220622-3920
	* I0310 19:55:00.878831    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210220004129-7452
	* I0310 19:55:00.878831    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210224014800-800
	* I0310 19:55:00.878831    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210225231842-5736
	* I0310 19:55:00.878831    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210301195830-5700
	* I0310 19:55:00.878831    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210303214129-4588
	* I0310 19:55:00.878964    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210304002630-1156
	* I0310 19:55:00.878964    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210304184021-4052
	* I0310 19:55:00.878964    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210306072141-12056
	* I0310 19:55:00.878964    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210308233820-5396
	* I0310 19:55:00.878964    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210309234032-4944
	* I0310 19:55:00.878964    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210310083645-5040
	* I0310 19:55:00.878964    3088 command_runner.go:124] > minikube-local-cache-test:functional-20210310191609-6496
	* I0310 19:55:00.878964    3088 command_runner.go:124] > kubernetesui/dashboard:v2.1.0
	* I0310 19:55:00.878964    3088 command_runner.go:124] > gcr.io/k8s-minikube/storage-provisioner:v4
	* I0310 19:55:00.879095    3088 command_runner.go:124] > k8s.gcr.io/etcd:3.4.13-0
	* I0310 19:55:00.879229    3088 command_runner.go:124] > k8s.gcr.io/coredns:1.7.0
	* I0310 19:55:00.879229    3088 command_runner.go:124] > kubernetesui/metrics-scraper:v1.0.4
	* I0310 19:55:00.879229    3088 command_runner.go:124] > k8s.gcr.io/pause:3.2
	* I0310 19:55:00.879364    3088 ssh_runner.go:189] Completed: docker images --format :: (1.6354929s)
	* I0310 19:55:00.879524    3088 docker.go:423] Got preloaded images: -- stdout --
	* kindest/kindnetd:v20210220-5b7e6d01
	* k8s.gcr.io/kube-proxy:v1.20.2
	* k8s.gcr.io/kube-controller-manager:v1.20.2
	* k8s.gcr.io/kube-apiserver:v1.20.2
	* k8s.gcr.io/kube-scheduler:v1.20.2
	* minikube-local-cache-test:functional-20210105233232-2512
	* minikube-local-cache-test:functional-20210106002159-6856
	* minikube-local-cache-test:functional-20210106011107-6492
	* minikube-local-cache-test:functional-20210106215525-1984
	* minikube-local-cache-test:functional-20210107002220-9088
	* minikube-local-cache-test:functional-20210107190945-8748
	* minikube-local-cache-test:functional-20210112045103-7160
	* minikube-local-cache-test:functional-20210114204234-6692
	* minikube-local-cache-test:functional-20210115023213-8464
	* minikube-local-cache-test:functional-20210115191024-3516
	* minikube-local-cache-test:functional-20210119220838-6552
	* minikube-local-cache-test:functional-20210120022529-1140
	* minikube-local-cache-test:functional-20210120175851-7432
	* minikube-local-cache-test:functional-20210120214442-10992
	* minikube-local-cache-test:functional-20210120231122-7024
	* minikube-local-cache-test:functional-20210123004019-5372
	* minikube-local-cache-test:functional-20210126212539-5172
	* minikube-local-cache-test:functional-20210128021318-232
	* minikube-local-cache-test:functional-20210212145109-352
	* minikube-local-cache-test:functional-20210213143925-7440
	* minikube-local-cache-test:functional-20210219145454-9520
	* minikube-local-cache-test:functional-20210219220622-3920
	* minikube-local-cache-test:functional-20210220004129-7452
	* minikube-local-cache-test:functional-20210224014800-800
	* minikube-local-cache-test:functional-20210225231842-5736
	* minikube-local-cache-test:functional-20210301195830-5700
	* minikube-local-cache-test:functional-20210303214129-4588
	* minikube-local-cache-test:functional-20210304002630-1156
	* minikube-local-cache-test:functional-20210304184021-4052
	* minikube-local-cache-test:functional-20210306072141-12056
	* minikube-local-cache-test:functional-20210308233820-5396
	* minikube-local-cache-test:functional-20210309234032-4944
	* minikube-local-cache-test:functional-20210310083645-5040
	* minikube-local-cache-test:functional-20210310191609-6496
	* kubernetesui/dashboard:v2.1.0
	* gcr.io/k8s-minikube/storage-provisioner:v4
	* k8s.gcr.io/etcd:3.4.13-0
	* k8s.gcr.io/coredns:1.7.0
	* kubernetesui/metrics-scraper:v1.0.4
	* k8s.gcr.io/pause:3.2
	* 
	* -- /stdout --
	* I0310 19:55:00.879656    3088 cache_images.go:73] Images are preloaded, skipping loading
	* I0310 19:55:00.895461    3088 cli_runner.go:115] Run: docker container inspect multinode-20210310194323-6496-m02 --format=
	* I0310 19:55:01.187118    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:55:01.362649    3088 command_runner.go:124] > storageclass.storage.k8s.io/standard unchanged
	* I0310 19:55:01.376100    3088 out.go:129] * Enabled addons: storage-provisioner, default-storageclass
	* I0310 19:55:01.376796    3088 addons.go:383] enableAddons completed in 4.1255366s
	* I0310 19:55:01.446555    3088 cache_images.go:223] succeeded pushing to: multinode-20210310194323-6496
	* I0310 19:55:01.446728    3088 cache_images.go:224] failed pushing to: 
	* I0310 19:55:01.658956    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:55:02.157211    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:55:02.653984    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:55:03.158920    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:55:03.657505    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:55:04.157433    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:55:04.653087    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:55:05.157187    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:55:05.656224    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:55:06.159413    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:55:06.655559    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:55:07.165437    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:55:07.656295    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:55:08.155232    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:55:08.656917    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:55:09.161488    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:55:09.657957    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:55:10.156400    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:55:10.657853    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:55:11.156964    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:55:11.656354    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:55:12.154718    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:55:12.658308    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:55:13.157836    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:55:13.657363    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:55:14.154740    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:55:14.662129    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:55:15.160004    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:55:15.654585    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:55:16.156021    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:55:16.656480    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:55:17.160704    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:55:17.656547    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:55:18.156754    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* [... identical node_ready.go:53 entry repeated roughly every 500ms from 19:55:18 through 19:56:43, while waiting for node "multinode-20210310194323-6496-m02" to report a healthy "Ready" condition ...]
	* I0310 19:56:43.156868    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:56:43.654773    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:56:44.155838    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:56:44.656910    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:56:45.156977    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:56:45.656955    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:56:46.162519    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:56:46.657318    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:56:47.157439    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:56:47.657193    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:56:48.157026    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:56:48.679250    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:56:49.156701    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:56:49.656311    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:56:50.155066    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:56:50.657238    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:56:51.158828    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:56:51.655236    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:56:52.156817    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:56:52.656865    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:56:53.159632    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:56:53.655440    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:56:54.155312    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:56:54.656768    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:56:55.154611    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:56:55.660652    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:56:56.160234    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:56:56.665781    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:56:57.161653    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:56:57.657644    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:56:58.158743    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:56:58.653965    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:56:59.159868    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:56:59.661061    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:00.154404    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:00.656908    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:01.159672    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:01.655863    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:02.155350    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:02.657057    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:03.156961    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:03.655482    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:04.156736    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:04.654757    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:05.157925    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:05.656511    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:06.157967    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:06.653917    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:07.160561    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:07.657101    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:08.157730    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:08.656908    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:09.155858    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:09.660586    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:10.155369    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:10.659914    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:11.157912    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:11.665628    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:12.157681    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:12.656357    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:13.156842    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:13.656124    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:14.156031    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:14.655199    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:15.162245    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:15.658341    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:16.156941    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:16.652987    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:17.153709    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:17.657683    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:18.156432    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:18.656255    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:19.155714    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:19.661396    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:20.163592    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:20.656670    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:21.158851    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:21.654955    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:22.154920    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:22.657363    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:23.156008    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:23.654510    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:24.154717    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:24.656004    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:25.156921    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:25.656866    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:26.158274    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:26.657601    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:27.157371    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:27.657123    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:28.156575    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:28.654696    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:29.160636    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:29.657824    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:30.158129    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:30.662185    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:31.162157    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:31.656265    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:32.156842    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:32.659045    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:33.157942    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:33.667018    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:34.155406    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:34.660255    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:35.163181    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:35.656340    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:36.156012    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:36.656306    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:37.157627    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:37.656956    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:38.159163    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:38.654105    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:39.158308    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:39.658748    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:40.156719    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:40.659436    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:41.155789    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:41.655570    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:42.161602    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:42.655017    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:43.159460    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:43.657112    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:44.160833    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:44.657607    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:45.155416    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:45.659588    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:46.157889    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:46.663795    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:47.154253    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:47.655957    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:48.157635    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:48.659811    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:57:49.162141    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* ... (the node_ready.go:53 line above repeated every ~0.5s from 19:57:49 through 19:58:59; 141 near-identical retries elided) ...
	* I0310 19:58:59.674369    3088 node_ready.go:53] node "multinode-20210310194323-6496-m02" has unwanted condition "Ready" : Reason "NodeStatusUnknown" Message: "Kubelet stopped posting node status.". will try. 
	* I0310 19:58:59.674771    3088 node_ready.go:38] duration metric: took 4m0.0541379s to wait for WaitForNodeReady...
	* I0310 19:58:59.680515    3088 out.go:129] 
	* W0310 19:58:59.680879    3088 out.go:191] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: wait node ready: timed out waiting for the condition
	* W0310 19:58:59.681349    3088 out.go:191] * 
	* W0310 19:58:59.681773    3088 out.go:191] * If the above advice does not help, please let us know: 
	* W0310 19:58:59.681773    3088 out.go:191]   - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
-- /stdout --
** stderr ** 
	E0310 19:59:11.066083    1664 out.go:340] unable to execute * 2021-03-10 19:51:12.134093 W | etcdserver: request "header:<ID:10490704450842146914 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:828 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1039 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>" with result "size:16" took too long (345.2272ms) to execute
	: html/template:* 2021-03-10 19:51:12.134093 W | etcdserver: request "header:<ID:10490704450842146914 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:828 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1039 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>" with result "size:16" took too long (345.2272ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 19:59:13.516746    1664 out.go:335] unable to parse "* I0310 19:53:06.909443    3088 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 19:53:06.909443    3088 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 19:59:13.539569    1664 out.go:335] unable to parse "* I0310 19:53:07.722578    3088 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 19:53:07.722578    3088 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
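	The two `function "json" not defined` errors above are parse-time failures in Go's text/template: minikube's out package re-parses captured log lines as templates, and a line that quotes a docker `--format` expression like `{{json .}}` trips it, because `json` is a docker CLI template function, not a Go template built-in. A minimal sketch reproducing the error (the template name "log" is illustrative, not minikube's):

	```go
	package main

	import (
		"fmt"
		"text/template"
	)

	func main() {
		// Parsing a log line that contains a literal docker --format
		// expression fails, since "json" is not a registered function.
		_, err := template.New("log").Parse(`Run: docker system info --format "{{json .}}"`)
		fmt.Println(err) // reports: function "json" not defined
	}
	```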
	E0310 19:59:13.646957    1664 out.go:340] unable to execute * I0310 19:53:14.065366    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	: template: * I0310 19:53:14.065366    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	:1:96: executing "* I0310 19:53:14.065366    3088 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" multinode-20210310194323-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
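	The `error calling index: index of untyped nil` failures are the execution-time variant of the same problem: the quoted `docker container inspect -f` template does parse (its `index` calls are valid Go template syntax), but when out.go executes it, `.NetworkSettings.Ports` resolves to a nil interface value, and the `index` built-in rejects that. A minimal stand-in under that assumption (the single `Ports` key is a simplification of the real docker inspect data):

	```go
	package main

	import (
		"fmt"
		"os"
		"text/template"
	)

	func main() {
		// Executing index against a nil interface value reproduces
		// the "index of untyped nil" error seen in the stderr lines.
		tmpl := template.Must(template.New("log").Parse(`{{index .Ports "22/tcp"}}`))
		err := tmpl.Execute(os.Stdout, map[string]interface{}{"Ports": nil})
		fmt.Println(err) // reports: error calling index: index of untyped nil
	}
	```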
	E0310 19:59:13.656797    1664 out.go:335] unable to parse "* I0310 19:53:14.575546    3088 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55054 <nil> <nil>}\n": template: * I0310 19:53:14.575546    3088 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55054 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 19:59:13.675747    1664 out.go:340] unable to execute * I0310 19:53:14.863867    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	: template: * I0310 19:53:14.863867    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	:1:96: executing "* I0310 19:53:14.863867    3088 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" multinode-20210310194323-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 19:59:13.688585    1664 out.go:335] unable to parse "* I0310 19:53:15.366492    3088 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55054 <nil> <nil>}\n": template: * I0310 19:53:15.366492    3088 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55054 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 19:59:13.790250    1664 out.go:340] unable to execute * I0310 19:53:16.584994    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	: template: * I0310 19:53:16.584994    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	:1:96: executing "* I0310 19:53:16.584994    3088 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" multinode-20210310194323-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 19:59:13.824430    1664 out.go:340] unable to execute * I0310 19:53:17.353945    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	: template: * I0310 19:53:17.353945    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	:1:96: executing "* I0310 19:53:17.353945    3088 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" multinode-20210310194323-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 19:59:13.833377    1664 out.go:335] unable to parse "* I0310 19:53:17.809160    3088 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55054 <nil> <nil>}\n": template: * I0310 19:53:17.809160    3088 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55054 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 19:59:13.859966    1664 out.go:340] unable to execute * I0310 19:53:18.038079    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	: template: * I0310 19:53:18.038079    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	:1:96: executing "* I0310 19:53:18.038079    3088 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" multinode-20210310194323-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 19:59:13.871644    1664 out.go:335] unable to parse "* I0310 19:53:18.533204    3088 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55054 <nil> <nil>}\n": template: * I0310 19:53:18.533204    3088 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55054 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 19:59:14.227547    1664 out.go:340] unable to execute * I0310 19:53:18.778054    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	: template: * I0310 19:53:18.778054    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	:1:96: executing "* I0310 19:53:18.778054    3088 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" multinode-20210310194323-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 19:59:14.238672    1664 out.go:335] unable to parse "* I0310 19:53:19.267977    3088 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55054 <nil> <nil>}\n": template: * I0310 19:53:19.267977    3088 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55054 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 19:59:14.273241    1664 out.go:340] unable to execute * I0310 19:53:19.522702    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	: template: * I0310 19:53:19.522702    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	:1:96: executing "* I0310 19:53:19.522702    3088 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" multinode-20210310194323-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 19:59:14.370412    1664 out.go:340] unable to execute * I0310 19:53:20.333096    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	: template: * I0310 19:53:20.333096    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	:1:96: executing "* I0310 19:53:20.333096    3088 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" multinode-20210310194323-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 19:59:14.404520    1664 out.go:340] unable to execute * I0310 19:53:21.458347    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	: template: * I0310 19:53:21.458347    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	:1:96: executing "* I0310 19:53:21.458347    3088 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" multinode-20210310194323-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 19:59:14.415731    1664 out.go:340] unable to execute * I0310 19:53:21.465494    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	: template: * I0310 19:53:21.465494    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	:1:96: executing "* I0310 19:53:21.465494    3088 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" multinode-20210310194323-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 19:59:14.791474    1664 out.go:340] unable to execute * I0310 19:53:23.600727    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	: template: * I0310 19:53:23.600727    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	:1:96: executing "* I0310 19:53:23.600727    3088 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"8443/tcp\") 0).HostPort}}'\" multinode-20210310194323-6496\n" at <index .NetworkSettings.Ports "8443/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 19:59:17.356647    1664 out.go:340] unable to execute * I0310 19:53:33.996052    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	: template: * I0310 19:53:33.996052    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	:1:96: executing "* I0310 19:53:33.996052    3088 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"8443/tcp\") 0).HostPort}}'\" multinode-20210310194323-6496\n" at <index .NetworkSettings.Ports "8443/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 19:59:18.207081    1664 out.go:340] unable to execute * I0310 19:54:57.465919    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	: template: * I0310 19:54:57.465919    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	:1:96: executing "* I0310 19:54:57.465919    3088 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"8443/tcp\") 0).HostPort}}'\" multinode-20210310194323-6496\n" at <index .NetworkSettings.Ports "8443/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 19:59:18.669034    1664 out.go:340] unable to execute * I0310 19:54:58.858754    3088 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20210310194323-6496: (1.392837s)
	: template: * I0310 19:54:58.858754    3088 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20210310194323-6496: (1.392837s)
	:1:102: executing "* I0310 19:54:58.858754    3088 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"8443/tcp\") 0).HostPort}}'\" multinode-20210310194323-6496: (1.392837s)\n" at <index .NetworkSettings.Ports "8443/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 19:59:18.699476    1664 out.go:340] unable to execute * I0310 19:54:58.917128    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	: template: * I0310 19:54:58.917128    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	:1:96: executing "* I0310 19:54:58.917128    3088 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" multinode-20210310194323-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 19:59:18.778159    1664 out.go:340] unable to execute * I0310 19:54:59.249880    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	: template: * I0310 19:54:59.249880    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	:1:96: executing "* I0310 19:54:59.249880    3088 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" multinode-20210310194323-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 19:59:18.958583    1664 out.go:340] unable to execute * I0310 19:54:59.620954    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	: template: * I0310 19:54:59.620954    3088 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	:1:96: executing "* I0310 19:54:59.620954    3088 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" multinode-20210310194323-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.

                                                
                                                
** /stderr **
helpers_test.go:250: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-20210310194323-6496 -n multinode-20210310194323-6496
helpers_test.go:250: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-20210310194323-6496 -n multinode-20210310194323-6496: (2.9629351s)
helpers_test.go:257: (dbg) Run:  kubectl --context multinode-20210310194323-6496 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:263: non-running pods: 
helpers_test.go:265: ======> post-mortem[TestMultiNode/serial/RestartMultiNode]: describe non-running pods <======
helpers_test.go:268: (dbg) Run:  kubectl --context multinode-20210310194323-6496 describe pod 
helpers_test.go:268: (dbg) Non-zero exit: kubectl --context multinode-20210310194323-6496 describe pod : exit status 1 (200.0585ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:270: kubectl --context multinode-20210310194323-6496 describe pod : exit status 1
--- FAIL: TestMultiNode/serial/RestartMultiNode (380.41s)

                                                
                                    
x
+
TestSkaffold (201.88s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:56: (dbg) Run:  C:\Users\jenkins\AppData\Local\Temp\skaffold.exe171585068 version
skaffold_test.go:60: skaffold version: v1.20.0
skaffold_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p skaffold-20210310201235-6496 --memory=2600 --driver=docker
skaffold_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p skaffold-20210310201235-6496 --memory=2600 --driver=docker: (2m47.4522975s)
skaffold_test.go:76: copying out/minikube-windows-amd64.exe to C:\jenkins\workspace\Docker_Windows_integration\out\minikube
skaffold_test.go:89: "minikube" is not in path
panic.go:617: *** TestSkaffold FAILED at 2021-03-10 20:15:24.775872 +0000 GMT m=+4264.473559701
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:226: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:227: (dbg) Run:  docker inspect skaffold-20210310201235-6496
helpers_test.go:231: (dbg) docker inspect skaffold-20210310201235-6496:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "565f1f7f5ccf59065784883991443be29f4152de189914d1e2bf04a1309a84d1",
	        "Created": "2021-03-10T20:12:48.6381502Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 115419,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-03-10T20:12:49.8825087Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a776c544501ab7f8d55c0f9d8df39bc284df5e744ef1ab4fa59bbd753c98d5f6",
	        "ResolvConfPath": "/var/lib/docker/containers/565f1f7f5ccf59065784883991443be29f4152de189914d1e2bf04a1309a84d1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/565f1f7f5ccf59065784883991443be29f4152de189914d1e2bf04a1309a84d1/hostname",
	        "HostsPath": "/var/lib/docker/containers/565f1f7f5ccf59065784883991443be29f4152de189914d1e2bf04a1309a84d1/hosts",
	        "LogPath": "/var/lib/docker/containers/565f1f7f5ccf59065784883991443be29f4152de189914d1e2bf04a1309a84d1/565f1f7f5ccf59065784883991443be29f4152de189914d1e2bf04a1309a84d1-json.log",
	        "Name": "/skaffold-20210310201235-6496",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "skaffold-20210310201235-6496:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "skaffold-20210310201235-6496",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2726297600,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2726297600,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4c788f83ba2574c2a490d8e5678a45f9a0a24c6dc1a2f271a572559436ad75dd-init/diff:/var/lib/docker/overlay2/e5312d15a8a915e93a04215cc746ab0f3ee777399eec34d39c62e1159ded6475/diff:/var/lib/docker/overlay2/7a0a882052451678339ecb9d7440ec6d5af2189761df37cbde268f01c77df460/diff:/var/lib/docker/overlay2/746bde091f748e661ba6c95c88447941058b3ed556ae8093cbbb4e946b8cc8fc/diff:/var/lib/docker/overlay2/fc816087ea38e2594891ac87d2dada91401ed5005ffde006568049c9be7a9cb3/diff:/var/lib/docker/overlay2/c938fe5d14accfe7fb1cfd038f5f0c36e7c4e36df3f270be6928a759dfc0318c/diff:/var/lib/docker/overlay2/79f1d61f3af3e525b3f9a25c7bdaddec8275b7d0abab23c045e084119ae755dc/diff:/var/lib/docker/overlay2/90cf1530750ebebdcb04a136dc1eff0ae9c5f7d0f826d2942e3bc67cc0d42206/diff:/var/lib/docker/overlay2/c4edb0a62bb52174a4a17530aa58139aaca1b2f67bef36b9ac59af93c93e2c94/diff:/var/lib/docker/overlay2/684c18bb05910b09352e8708238b2fa6512de91e323e59bd78eacb12954baeb1/diff:/var/lib/docker/overlay2/2e3b944d36fea0e106f5f8885b981d040914c224a656b6480e4ccf00c624527c/diff:/var/lib/docker/overlay2/87489103e2d5dffa2fc32cae802418fe57adcbe87d86e2828e51be8620ef72d6/diff:/var/lib/docker/overlay2/1ceb557df677600f55a42e739f85c2aaa6ce2a1311fb93d5cc655d3e5c25ad62/diff:/var/lib/docker/overlay2/a49e38adf04466e822fa1b2fef7a5de41bb43b706f64d6d440f7e9f95f86634d/diff:/var/lib/docker/overlay2/9dc51530b3628075c33d53a96d8db831d206f65190ca62a4730d525c9d2433bc/diff:/var/lib/docker/overlay2/53f43d443cc953bcbb3b9fe305be45d40de2233dd6a1e94a6e0120c143df01b9/diff:/var/lib/docker/overlay2/5449e801ebacbe613538db4de658f2419f742a0327f7c0f5079ee525f64c2f45/diff:/var/lib/docker/overlay2/ea06a62429770eade2e9df8b04495201586fe6a867b2d3f827db06031f237171/diff:/var/lib/docker/overlay2/d6c5e6e0bb04112b08a45e5a42f364eaa6a1d4d0384b8833c6f268a32b51f9fd/diff:/var/lib/docker/overlay2/6ccb699e5aa7954f17536e35273361a64a06f22e3951b5adad1acd3645302d27/diff:/var/lib/docker/overlay2/004fc7885cd3ddf4f532a6047a393b87947996f738f36ee3e048130b82676264/diff:/var/lib/docker/overlay2/28ff0102771d8fb3c2957c482cc5dde1a6d041144f1be78d385fd4a305b89048/diff:/var/lib/docker/overlay2/7cc8b00a8717390d3cf217bc23ebf0a54386c4dda751f8efe67a113c5f724000/diff:/var/lib/docker/overlay2/4f20f5888dc26fc4b177fad4ffbb2c9e5094bbe36dffc15530b13ee98b5ee0de/diff:/var/lib/docker/overlay2/5070ba0e8a3df0c0cb4c34bb528c4e1aa4cf13d689f2bbbb0a4033d021c3be55/diff:/var/lib/docker/overlay2/85fb29f9d3603bf7f2eb144a096a9fa432e57ecf575b0c26cc8e06a79e791202/diff:/var/lib/docker/overlay2/27786674974d44dc21a18719c8c334da44a78f825923f5e86233f5acb0fb9a6b/diff:/var/lib/docker/overlay2/6161fd89f27732c46f547c0e38ff9f16ab0dded81cef66938df3935f21afad40/diff:/var/lib/docker/overlay2/222f85b8439a41d62b9939a4060303543930a89e81bc2fbb3f5211b3bcc73501/diff:/var/lib/docker/overlay2/0ab769dc143c949b9367c48721351c8571d87199847b23acedce0134a8cf4d0d/diff:/var/lib/docker/overlay2/a3d1b5f3c1ccf55415e08e306d69739f2d6a4346a4b20ff9273ac1417c146a72/diff:/var/lib/docker/overlay2/a6111408a4deb25c6259bdabf49b0b77a4ae38a1c9c76c9dd238ab00e4377edd/diff:/var/lib/docker/overlay2/beb22abaee6a1cc2ce6935aa31a8ea6aa14ba428d35b5c2423d1d9e4b5b1e88e/diff:/var/lib/docker/overlay2/0c228b1d0cd4cc1d3df10a7397af2a91846bfbad745d623822dc200ff7ddd4f7/diff:/var/lib/docker/overlay2/3c7f265116e8b14ac4c171583da8e68ad42c63e9faa735c10d5b01c2889c03d8/diff:/var/lib/docker/overlay2/85c9b77072d5575bcf4bc334d0e85cccec247e2ff21bc8a181ee7c1411481a92/diff:/var/lib/docker/overlay2/1b4797f63f1bf0acad8cd18715c737239cbce844810baf1378b65224c01957ff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4c788f83ba2574c2a490d8e5678a45f9a0a24c6dc1a2f271a572559436ad75dd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4c788f83ba2574c2a490d8e5678a45f9a0a24c6dc1a2f271a572559436ad75dd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4c788f83ba2574c2a490d8e5678a45f9a0a24c6dc1a2f271a572559436ad75dd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "skaffold-20210310201235-6496",
	                "Source": "/var/lib/docker/volumes/skaffold-20210310201235-6496/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "skaffold-20210310201235-6496",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "skaffold-20210310201235-6496",
	                "name.minikube.sigs.k8s.io": "skaffold-20210310201235-6496",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "98db87f7573b2ef4cd7f5bf1bbd30ab90552e67772b4791b8504a36c805c287a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55074"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55073"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55070"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55072"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55071"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/98db87f7573b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "skaffold-20210310201235-6496": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.97"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "565f1f7f5ccf",
	                        "skaffold-20210310201235-6496"
	                    ],
	                    "NetworkID": "50775021db71b50fa9ea8b9e3086d4629d1ae7aba6fc19e3f8464d1c44c21ca7",
	                    "EndpointID": "abc578e8093611573824c458e84a610538ffe47f209ccd1d5a4eff57a77f7c38",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.97",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:61",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p skaffold-20210310201235-6496 -n skaffold-20210310201235-6496
helpers_test.go:235: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p skaffold-20210310201235-6496 -n skaffold-20210310201235-6496: (2.9191899s)
helpers_test.go:240: <<< TestSkaffold FAILED: start of post-mortem logs <<<
helpers_test.go:241: ======>  post-mortem[TestSkaffold]: minikube logs <======
helpers_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe -p skaffold-20210310201235-6496 logs -n 25
helpers_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe -p skaffold-20210310201235-6496 logs -n 25: (13.0718315s)
helpers_test.go:248: TestSkaffold logs: 
-- stdout --
	* ==> Docker <==
	* -- Logs begin at Wed 2021-03-10 20:12:50 UTC, end at Wed 2021-03-10 20:15:31 UTC. --
	* Mar 10 20:14:01 skaffold-20210310201235-6496 systemd[1]: Stopping Docker Application Container Engine...
	* Mar 10 20:14:01 skaffold-20210310201235-6496 dockerd[471]: time="2021-03-10T20:14:01.665726900Z" level=info msg="Processing signal 'terminated'"
	* Mar 10 20:14:01 skaffold-20210310201235-6496 dockerd[471]: time="2021-03-10T20:14:01.670316800Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	* Mar 10 20:14:01 skaffold-20210310201235-6496 dockerd[471]: time="2021-03-10T20:14:01.672513000Z" level=info msg="Daemon shutdown complete"
	* Mar 10 20:14:01 skaffold-20210310201235-6496 systemd[1]: docker.service: Succeeded.
	* Mar 10 20:14:01 skaffold-20210310201235-6496 systemd[1]: Stopped Docker Application Container Engine.
	* Mar 10 20:14:01 skaffold-20210310201235-6496 systemd[1]: Starting Docker Application Container Engine...
	* Mar 10 20:14:01 skaffold-20210310201235-6496 dockerd[748]: time="2021-03-10T20:14:01.885278800Z" level=info msg="Starting up"
	* Mar 10 20:14:01 skaffold-20210310201235-6496 dockerd[748]: time="2021-03-10T20:14:01.892680700Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Mar 10 20:14:01 skaffold-20210310201235-6496 dockerd[748]: time="2021-03-10T20:14:01.892785600Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Mar 10 20:14:01 skaffold-20210310201235-6496 dockerd[748]: time="2021-03-10T20:14:01.892835900Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	* Mar 10 20:14:01 skaffold-20210310201235-6496 dockerd[748]: time="2021-03-10T20:14:01.893416700Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Mar 10 20:14:01 skaffold-20210310201235-6496 dockerd[748]: time="2021-03-10T20:14:01.897935800Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Mar 10 20:14:01 skaffold-20210310201235-6496 dockerd[748]: time="2021-03-10T20:14:01.898041300Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Mar 10 20:14:01 skaffold-20210310201235-6496 dockerd[748]: time="2021-03-10T20:14:01.898272200Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	* Mar 10 20:14:01 skaffold-20210310201235-6496 dockerd[748]: time="2021-03-10T20:14:01.898299900Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Mar 10 20:14:06 skaffold-20210310201235-6496 dockerd[748]: time="2021-03-10T20:14:06.669265800Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	* Mar 10 20:14:06 skaffold-20210310201235-6496 dockerd[748]: time="2021-03-10T20:14:06.698869900Z" level=info msg="Loading containers: start."
	* Mar 10 20:14:07 skaffold-20210310201235-6496 dockerd[748]: time="2021-03-10T20:14:07.136362300Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	* Mar 10 20:14:07 skaffold-20210310201235-6496 dockerd[748]: time="2021-03-10T20:14:07.318533000Z" level=info msg="Loading containers: done."
	* Mar 10 20:14:07 skaffold-20210310201235-6496 dockerd[748]: time="2021-03-10T20:14:07.385698900Z" level=info msg="Docker daemon" commit=46229ca graphdriver(s)=overlay2 version=20.10.3
	* Mar 10 20:14:07 skaffold-20210310201235-6496 dockerd[748]: time="2021-03-10T20:14:07.385850300Z" level=info msg="Daemon has completed initialization"
	* Mar 10 20:14:07 skaffold-20210310201235-6496 systemd[1]: Started Docker Application Container Engine.
	* Mar 10 20:14:07 skaffold-20210310201235-6496 dockerd[748]: time="2021-03-10T20:14:07.500029800Z" level=info msg="API listen on [::]:2376"
	* Mar 10 20:14:07 skaffold-20210310201235-6496 dockerd[748]: time="2021-03-10T20:14:07.507611500Z" level=info msg="API listen on /var/run/docker.sock"
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	* 2c656fbde1428       bfe3a36ebd252       19 seconds ago       Running             coredns                   0                   bd30b8a2ae385
	* 789fc6aaabe7b       43154ddb57a83       21 seconds ago       Running             kube-proxy                0                   e5d464cf8018e
	* 27bd520fd3190       85069258b98ac       21 seconds ago       Running             storage-provisioner       0                   a74312df8bc76
	* 9aa84ea5337ed       ed2c44fbdd78b       About a minute ago   Running             kube-scheduler            0                   8e904f12749bd
	* 5b63133de2a4c       a8c2fdb8bf76e       About a minute ago   Running             kube-apiserver            0                   761eaa588d51f
	* 97808afe7a7b7       a27166429d98e       About a minute ago   Running             kube-controller-manager   0                   fb4877a840fb9
	* eb42b8f9460e8       0369cf4303ffd       About a minute ago   Running             etcd                      0                   e3375a184cdcb
	* 
	* ==> coredns [2c656fbde142] <==
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	* CoreDNS-1.7.0
	* linux/amd64, go1.14.4, f59c03d
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* 
	* ==> describe nodes <==
	* Name:               skaffold-20210310201235-6496
	* Roles:              control-plane,master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=skaffold-20210310201235-6496
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=4d52d607da107e3e4541e40b81376520ee87d4c2
	*                     minikube.k8s.io/name=skaffold-20210310201235-6496
	*                     minikube.k8s.io/updated_at=2021_03_10T20_14_46_0700
	*                     minikube.k8s.io/version=v1.18.1
	*                     node-role.kubernetes.io/control-plane=
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Wed, 10 Mar 2021 20:14:42 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  skaffold-20210310201235-6496
	*   AcquireTime:     <unset>
	*   RenewTime:       Wed, 10 Mar 2021 20:15:28 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Wed, 10 Mar 2021 20:15:22 +0000   Wed, 10 Mar 2021 20:14:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Wed, 10 Mar 2021 20:15:22 +0000   Wed, 10 Mar 2021 20:14:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Wed, 10 Mar 2021 20:15:22 +0000   Wed, 10 Mar 2021 20:14:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Wed, 10 Mar 2021 20:15:22 +0000   Wed, 10 Mar 2021 20:15:01 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  192.168.49.97
	*   Hostname:    skaffold-20210310201235-6496
	* Capacity:
	*   cpu:                4
	*   ephemeral-storage:  65792556Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             20481980Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                4
	*   ephemeral-storage:  65792556Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             20481980Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 84fb46bd39d2483a97ab4430ee4a5e3a
	*   System UUID:                e709317d-eee8-49b9-8f62-c12cd1575e62
	*   Boot ID:                    1e43cb90-c73a-415b-9855-33dabbdc5a83
	*   Kernel Version:             4.19.121-linuxkit
	*   OS Image:                   Ubuntu 20.04.1 LTS
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://20.10.3
	*   Kubelet Version:            v1.20.2
	*   Kube-Proxy Version:         v1.20.2
	* PodCIDR:                      10.244.0.0/24
	* PodCIDRs:                     10.244.0.0/24
	* Non-terminated Pods:          (7 in total)
	*   Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	*   kube-system                 coredns-74ff55c5b-wg9rq                                 100m (2%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	*   kube-system                 etcd-skaffold-20210310201235-6496                       100m (2%)     0 (0%)      100Mi (0%)       0 (0%)         41s
	*   kube-system                 kube-apiserver-skaffold-20210310201235-6496             250m (6%)     0 (0%)      0 (0%)           0 (0%)         41s
	*   kube-system                 kube-controller-manager-skaffold-20210310201235-6496    200m (5%)     0 (0%)      0 (0%)           0 (0%)         41s
	*   kube-system                 kube-proxy-qvctf                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	*   kube-system                 kube-scheduler-skaffold-20210310201235-6496             100m (2%)     0 (0%)      0 (0%)           0 (0%)         41s
	*   kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests    Limits
	*   --------           --------    ------
	*   cpu                750m (18%)  0 (0%)
	*   memory             170Mi (0%)  170Mi (0%)
	*   ephemeral-storage  100Mi (0%)  0 (0%)
	*   hugepages-1Gi      0 (0%)      0 (0%)
	*   hugepages-2Mi      0 (0%)      0 (0%)
	* Events:
	*   Type    Reason                   Age                From        Message
	*   ----    ------                   ----               ----        -------
	*   Normal  NodeHasSufficientMemory  65s (x7 over 66s)  kubelet     Node skaffold-20210310201235-6496 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    65s (x6 over 66s)  kubelet     Node skaffold-20210310201235-6496 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     65s (x6 over 66s)  kubelet     Node skaffold-20210310201235-6496 status is now: NodeHasSufficientPID
	*   Normal  Starting                 44s                kubelet     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  43s                kubelet     Node skaffold-20210310201235-6496 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    43s                kubelet     Node skaffold-20210310201235-6496 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     43s                kubelet     Node skaffold-20210310201235-6496 status is now: NodeHasSufficientPID
	*   Normal  NodeNotReady             43s                kubelet     Node skaffold-20210310201235-6496 status is now: NodeNotReady
	*   Normal  NodeAllocatableEnforced  42s                kubelet     Updated Node Allocatable limit across pods
	*   Normal  NodeReady                31s                kubelet     Node skaffold-20210310201235-6496 status is now: NodeReady
	*   Normal  Starting                 19s                kube-proxy  Starting kube-proxy.
	* 
	* ==> dmesg <==
	* [  +0.000006]  __hrtimer_run_queues+0x117/0x1c4
	* [  +0.000004]  ? ktime_get_update_offsets_now+0x36/0x95
	* [  +0.000002]  hrtimer_interrupt+0x92/0x165
	* [  +0.000004]  hv_stimer0_isr+0x20/0x2d
	* [  +0.000008]  hv_stimer0_vector_handler+0x3b/0x57
	* [  +0.000010]  hv_stimer0_callback_vector+0xf/0x20
	* [  +0.000001]  </IRQ>
	* [  +0.000002] RIP: 0010:native_safe_halt+0x7/0x8
	* [  +0.000002] Code: 60 02 df f0 83 44 24 fc 00 48 8b 00 a8 08 74 0b 65 81 25 dd ce 6f 71 ff ff ff 7f c3 e8 ce e6 72 ff f4 c3 e8 c7 e6 72 ff fb f4 <c3> 0f 1f 44 00 00 53 e8 69 0e 82 ff 65 8b 35 83 64 6f 71 31 ff e8
	* [  +0.000001] RSP: 0018:ffffffff8f203eb0 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff12
	* [  +0.000002] RAX: ffffffff8e918b30 RBX: 0000000000000000 RCX: ffffffff8f253150
	* [  +0.000001] RDX: 000000000012167e RSI: 0000000000000000 RDI: 0000000000000001
	* [  +0.000001] RBP: 0000000000000000 R08: 00000066a1710248 R09: 0000006be2541d3e
	* [  +0.000001] R10: ffff9130ad802288 R11: 0000000000000000 R12: 0000000000000000
	* [  +0.000001] R13: ffffffff8f215780 R14: 00000000f6d76244 R15: 0000000000000000
	* [  +0.000002]  ? __sched_text_end+0x1/0x1
	* [  +0.000011]  default_idle+0x1b/0x2c
	* [  +0.000001]  do_idle+0xe5/0x216
	* [  +0.000003]  cpu_startup_entry+0x6f/0x71
	* [  +0.000003]  start_kernel+0x4f6/0x514
	* [  +0.000006]  secondary_startup_64+0xa4/0xb0
	* [  +0.000006] ---[ end trace 8aa9ce4b885e8e86 ]---
	* [ +25.977799] hrtimer: interrupt took 3356400 ns
	* [Mar10 19:08] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	* [Mar10 19:49] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	* 
	* ==> etcd [eb42b8f9460e] <==
	* 2021-03-10 20:14:42.240650 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:0 size:4" took too long (105.6731ms) to execute
	* 2021-03-10 20:14:42.241240 W | etcdserver: read-only range request "key:\"/registry/csinodes/skaffold-20210310201235-6496\" " with result "range_response_count:0 size:4" took too long (166.8956ms) to execute
	* 2021-03-10 20:14:49.481746 W | etcdserver: read-only range request "key:\"/registry/minions/skaffold-20210310201235-6496\" " with result "range_response_count:1 size:5245" took too long (250.1025ms) to execute
	* 2021-03-10 20:14:49.481988 W | etcdserver: request "header:<ID:10490704451290621844 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/skaffold-20210310201235-6496.166b145ee477dbb0\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/skaffold-20210310201235-6496.166b145ee477dbb0\" value_size:636 lease:1267332414435845947 >> failure:<>>" with result "size:16" took too long (211.8104ms) to execute
	* 2021-03-10 20:14:59.548462 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 20:15:04.930464 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (288.1545ms) to execute
	* 2021-03-10 20:15:04.931269 W | etcdserver: request "header:<ID:10490704451290622131 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/skaffold-20210310201235-6496\" mod_revision:378 > success:<request_put:<key:\"/registry/minions/skaffold-20210310201235-6496\" value_size:5298 >> failure:<request_range:<key:\"/registry/minions/skaffold-20210310201235-6496\" > >>" with result "size:5529" took too long (167.5234ms) to execute
	* 2021-03-10 20:15:04.931499 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (303.8542ms) to execute
	* 2021-03-10 20:15:04.950665 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-public/default\" " with result "range_response_count:1 size:181" took too long (266.9329ms) to execute
	* 2021-03-10 20:15:04.951354 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:729" took too long (289.8881ms) to execute
	* 2021-03-10 20:15:04.988416 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/default\" " with result "range_response_count:1 size:181" took too long (329.4295ms) to execute
	* 2021-03-10 20:15:05.153288 W | etcdserver: request "header:<ID:10490704451290622140 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:257 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:3511 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >>" with result "size:16" took too long (121.1325ms) to execute
	* 2021-03-10 20:15:05.328485 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-node-lease/default\" " with result "range_response_count:1 size:189" took too long (198.1324ms) to execute
	* 2021-03-10 20:15:05.398738 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:644" took too long (265.2461ms) to execute
	* 2021-03-10 20:15:05.634743 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:1 size:173" took too long (241.8925ms) to execute
	* 2021-03-10 20:15:05.647314 W | etcdserver: read-only range request "key:\"/registry/services/specs/kube-system/kube-dns\" " with result "range_response_count:1 size:1172" took too long (254.7641ms) to execute
	* 2021-03-10 20:15:05.791238 W | etcdserver: request "header:<ID:10490704451290622154 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:279 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:2668 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >>" with result "size:16" took too long (317.1241ms) to execute
	* 2021-03-10 20:15:05.846419 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (502.3368ms) to execute
	* 2021-03-10 20:15:06.035775 W | etcdserver: read-only range request "key:\"/registry/daemonsets/kube-system/kube-proxy\" " with result "range_response_count:1 size:2887" took too long (635.6985ms) to execute
	* 2021-03-10 20:15:06.038958 W | etcdserver: read-only range request "key:\"/registry/clusterroles/edit\" " with result "range_response_count:1 size:3252" took too long (696.3428ms) to execute
	* 2021-03-10 20:15:06.067554 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:0 size:5" took too long (137.1998ms) to execute
	* 2021-03-10 20:15:06.135651 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 20:15:06.140412 W | etcdserver: read-only range request "key:\"/registry/deployments/kube-system/coredns\" " with result "range_response_count:1 size:3575" took too long (102.3476ms) to execute
	* 2021-03-10 20:15:16.050371 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 20:15:26.049557 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 
	* ==> kernel <==
	*  20:15:32 up  1:15,  0 users,  load average: 9.89, 6.26, 4.96
	* Linux skaffold-20210310201235-6496 4.19.121-linuxkit #1 SMP Tue Dec 1 17:50:32 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [5b63133de2a4] <==
	* I0310 20:15:06.080045       1 trace.go:205] Trace[1604863634]: "Update" url:/apis/discovery.k8s.io/v1beta1/namespaces/kube-system/endpointslices/kube-dns-b9d4s,user-agent:kube-controller-manager/v1.20.2 (linux/amd64) kubernetes/faecb19/system:serviceaccount:kube-system:endpointslice-controller,client:192.168.49.97 (10-Mar-2021 20:15:05.543) (total time: 536ms):
	* Trace[1604863634]: ---"Object stored in database" 536ms (20:15:00.079)
	* Trace[1604863634]: [536.4211ms] [536.4211ms] END
	* I0310 20:15:06.085783       1 trace.go:205] Trace[2053730834]: "Create" url:/api/v1/namespaces/kube-system/secrets,user-agent:kube-controller-manager/v1.20.2 (linux/amd64) kubernetes/faecb19/tokens-controller,client:192.168.49.97 (10-Mar-2021 20:15:05.461) (total time: 624ms):
	* Trace[2053730834]: ---"Object stored in database" 622ms (20:15:00.083)
	* Trace[2053730834]: [624.2581ms] [624.2581ms] END
	* I0310 20:15:06.139634       1 trace.go:205] Trace[1187708964]: "GuaranteedUpdate etcd3" type:*core.Pod (10-Mar-2021 20:15:05.561) (total time: 577ms):
	* Trace[1187708964]: ---"Transaction committed" 576ms (20:15:00.138)
	* Trace[1187708964]: [577.1229ms] [577.1229ms] END
	* I0310 20:15:06.139834       1 trace.go:205] Trace[995552976]: "Create" url:/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-wg9rq/binding,user-agent:kube-scheduler/v1.20.2 (linux/amd64) kubernetes/faecb19/scheduler,client:192.168.49.97 (10-Mar-2021 20:15:05.560) (total time: 579ms):
	* Trace[995552976]: ---"Object stored in database" 579ms (20:15:00.139)
	* Trace[995552976]: [579.4817ms] [579.4817ms] END
	* I0310 20:15:06.143968       1 trace.go:205] Trace[68794498]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kube-scheduler/v1.20.2 (linux/amd64) kubernetes/faecb19/scheduler,client:192.168.49.97 (10-Mar-2021 20:15:05.542) (total time: 601ms):
	* Trace[68794498]: ---"Object stored in database" 593ms (20:15:00.143)
	* Trace[68794498]: [601.8783ms] [601.8783ms] END
	* I0310 20:15:06.154712       1 trace.go:205] Trace[1940115971]: "GuaranteedUpdate etcd3" type:*apps.Deployment (10-Mar-2021 20:15:05.455) (total time: 699ms):
	* Trace[1940115971]: [699.0948ms] [699.0948ms] END
	* I0310 20:15:06.154807       1 trace.go:205] Trace[1053165256]: "Update" url:/apis/apps/v1/namespaces/kube-system/deployments/coredns/status,user-agent:kube-controller-manager/v1.20.2 (linux/amd64) kubernetes/faecb19/system:serviceaccount:kube-system:deployment-controller,client:192.168.49.97 (10-Mar-2021 20:15:05.455) (total time: 699ms):
	* Trace[1053165256]: [699.4964ms] [699.4964ms] END
	* I0310 20:15:06.164719       1 trace.go:205] Trace[510983194]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (10-Mar-2021 20:15:05.557) (total time: 606ms):
	* Trace[510983194]: ---"initial value restored" 594ms (20:15:00.152)
	* Trace[510983194]: [606.792ms] [606.792ms] END
	* I0310 20:15:12.725714       1 client.go:360] parsed scheme: "passthrough"
	* I0310 20:15:12.725848       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 20:15:12.725875       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* 
	* ==> kube-controller-manager [97808afe7a7b] <==
	* I0310 20:15:04.182203       1 shared_informer.go:247] Caches are synced for disruption 
	* I0310 20:15:04.182219       1 disruption.go:339] Sending events to api server.
	* I0310 20:15:04.188857       1 shared_informer.go:247] Caches are synced for taint 
	* I0310 20:15:04.189050       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	* W0310 20:15:04.189303       1 node_lifecycle_controller.go:1044] Missing timestamp for Node skaffold-20210310201235-6496. Assuming now as a timestamp.
	* I0310 20:15:04.189424       1 node_lifecycle_controller.go:1245] Controller detected that zone  is now in state Normal.
	* I0310 20:15:04.189862       1 taint_manager.go:187] Starting NoExecuteTaintManager
	* I0310 20:15:04.190596       1 event.go:291] "Event occurred" object="skaffold-20210310201235-6496" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node skaffold-20210310201235-6496 event: Registered Node skaffold-20210310201235-6496 in Controller"
	* I0310 20:15:04.190923       1 shared_informer.go:247] Caches are synced for persistent volume 
	* I0310 20:15:04.230027       1 shared_informer.go:247] Caches are synced for attach detach 
	* I0310 20:15:04.239642       1 shared_informer.go:247] Caches are synced for deployment 
	* I0310 20:15:04.274013       1 shared_informer.go:247] Caches are synced for resource quota 
	* I0310 20:15:04.325786       1 shared_informer.go:247] Caches are synced for endpoint 
	* I0310 20:15:04.326393       1 shared_informer.go:247] Caches are synced for resource quota 
	* I0310 20:15:04.332522       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	* I0310 20:15:04.346362       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	* I0310 20:15:04.627459       1 range_allocator.go:373] Set node skaffold-20210310201235-6496 PodCIDR to [10.244.0.0/24]
	* I0310 20:15:04.787588       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 1"
	* I0310 20:15:04.828496       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	* I0310 20:15:04.980320       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-qvctf"
	* I0310 20:15:05.333041       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I0310 20:15:05.333350       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I0310 20:15:05.333365       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* I0310 20:15:05.735018       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-wg9rq"
	* E0310 20:15:06.052772       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	* 
	* ==> kube-proxy [789fc6aaabe7] <==
	* I0310 20:15:13.039032       1 node.go:172] Successfully retrieved node IP: 192.168.49.97
	* I0310 20:15:13.039170       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.49.97), assume IPv4 operation
	* W0310 20:15:13.184346       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	* I0310 20:15:13.185011       1 server_others.go:185] Using iptables Proxier.
	* I0310 20:15:13.186094       1 server.go:650] Version: v1.20.2
	* I0310 20:15:13.188057       1 conntrack.go:52] Setting nf_conntrack_max to 131072
	* I0310 20:15:13.188198       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I0310 20:15:13.188275       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I0310 20:15:13.190136       1 config.go:224] Starting endpoint slice config controller
	* I0310 20:15:13.190164       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	* I0310 20:15:13.190224       1 config.go:315] Starting service config controller
	* I0310 20:15:13.190233       1 shared_informer.go:240] Waiting for caches to sync for service config
	* I0310 20:15:13.325684       1 shared_informer.go:247] Caches are synced for service config 
	* I0310 20:15:13.326145       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	* 
	* ==> kube-scheduler [9aa84ea5337e] <==
	* E0310 20:14:42.151723       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E0310 20:14:42.152306       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E0310 20:14:42.152739       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E0310 20:14:42.156120       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E0310 20:14:42.158778       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E0310 20:14:42.161026       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E0310 20:14:42.162536       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E0310 20:14:42.164523       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E0310 20:14:42.167581       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E0310 20:14:42.170685       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E0310 20:14:42.174712       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E0310 20:14:42.229257       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E0310 20:14:43.029681       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E0310 20:14:43.228916       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E0310 20:14:43.275426       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E0310 20:14:43.295634       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E0310 20:14:43.342323       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E0310 20:14:43.347407       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E0310 20:14:43.375419       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E0310 20:14:43.400225       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E0310 20:14:43.458310       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E0310 20:14:43.489992       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E0310 20:14:43.533010       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E0310 20:14:43.596579       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* I0310 20:14:45.438225       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-03-10 20:12:50 UTC, end at Wed 2021-03-10 20:15:33 UTC. --
	* Mar 10 20:14:51 skaffold-20210310201235-6496 kubelet[3075]: I0310 20:14:51.880043    3075 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/e5f85b33f486f78731f230f0057d5eed-etcd-certs") pod "etcd-skaffold-20210310201235-6496" (UID: "e5f85b33f486f78731f230f0057d5eed")
	* Mar 10 20:14:51 skaffold-20210310201235-6496 kubelet[3075]: I0310 20:14:51.883030    3075 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/906db1d630d3e27d87ec5bf8a9967c21-k8s-certs") pod "kube-apiserver-skaffold-20210310201235-6496" (UID: "906db1d630d3e27d87ec5bf8a9967c21")
	* Mar 10 20:14:51 skaffold-20210310201235-6496 kubelet[3075]: I0310 20:14:51.883410    3075 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/57b8c22dbe6410e4bd36cf14b0f8bdc7-flexvolume-dir") pod "kube-controller-manager-skaffold-20210310201235-6496" (UID: "57b8c22dbe6410e4bd36cf14b0f8bdc7")
	* Mar 10 20:14:51 skaffold-20210310201235-6496 kubelet[3075]: I0310 20:14:51.883437    3075 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/906db1d630d3e27d87ec5bf8a9967c21-ca-certs") pod "kube-apiserver-skaffold-20210310201235-6496" (UID: "906db1d630d3e27d87ec5bf8a9967c21")
	* Mar 10 20:14:51 skaffold-20210310201235-6496 kubelet[3075]: I0310 20:14:51.883454    3075 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/906db1d630d3e27d87ec5bf8a9967c21-usr-share-ca-certificates") pod "kube-apiserver-skaffold-20210310201235-6496" (UID: "906db1d630d3e27d87ec5bf8a9967c21")
	* Mar 10 20:14:51 skaffold-20210310201235-6496 kubelet[3075]: I0310 20:14:51.883464    3075 reconciler.go:157] Reconciler: start to sync state
	* Mar 10 20:15:04 skaffold-20210310201235-6496 kubelet[3075]: I0310 20:15:04.460035    3075 kuberuntime_manager.go:1006] updating runtime config through cri with podcidr 10.244.0.0/24
	* Mar 10 20:15:04 skaffold-20210310201235-6496 kubelet[3075]: I0310 20:15:04.470493    3075 docker_service.go:353] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}
	* Mar 10 20:15:04 skaffold-20210310201235-6496 kubelet[3075]: I0310 20:15:04.470964    3075 kubelet_network.go:77] Setting Pod CIDR:  -> 10.244.0.0/24
	* Mar 10 20:15:05 skaffold-20210310201235-6496 kubelet[3075]: I0310 20:15:05.942568    3075 topology_manager.go:187] [topologymanager] Topology Admit Handler
	* Mar 10 20:15:06 skaffold-20210310201235-6496 kubelet[3075]: I0310 20:15:06.026947    3075 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/e1deb0b4-5833-462e-a638-e547b5015ed0-lib-modules") pod "kube-proxy-qvctf" (UID: "e1deb0b4-5833-462e-a638-e547b5015ed0")
	* Mar 10 20:15:06 skaffold-20210310201235-6496 kubelet[3075]: I0310 20:15:06.028051    3075 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/e1deb0b4-5833-462e-a638-e547b5015ed0-kube-proxy") pod "kube-proxy-qvctf" (UID: "e1deb0b4-5833-462e-a638-e547b5015ed0")
	* Mar 10 20:15:06 skaffold-20210310201235-6496 kubelet[3075]: I0310 20:15:06.028787    3075 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/e1deb0b4-5833-462e-a638-e547b5015ed0-xtables-lock") pod "kube-proxy-qvctf" (UID: "e1deb0b4-5833-462e-a638-e547b5015ed0")
	* Mar 10 20:15:06 skaffold-20210310201235-6496 kubelet[3075]: I0310 20:15:06.029986    3075 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-vwbgp" (UniqueName: "kubernetes.io/secret/e1deb0b4-5833-462e-a638-e547b5015ed0-kube-proxy-token-vwbgp") pod "kube-proxy-qvctf" (UID: "e1deb0b4-5833-462e-a638-e547b5015ed0")
	* Mar 10 20:15:06 skaffold-20210310201235-6496 kubelet[3075]: I0310 20:15:06.274619    3075 topology_manager.go:187] [topologymanager] Topology Admit Handler
	* Mar 10 20:15:06 skaffold-20210310201235-6496 kubelet[3075]: I0310 20:15:06.362804    3075 topology_manager.go:187] [topologymanager] Topology Admit Handler
	* Mar 10 20:15:06 skaffold-20210310201235-6496 kubelet[3075]: I0310 20:15:06.450958    3075 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/d2bb8a00-af67-4503-8b44-ef8a7b77933b-tmp") pod "storage-provisioner" (UID: "d2bb8a00-af67-4503-8b44-ef8a7b77933b")
	* Mar 10 20:15:06 skaffold-20210310201235-6496 kubelet[3075]: I0310 20:15:06.468477    3075 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-tgjds" (UniqueName: "kubernetes.io/secret/d2bb8a00-af67-4503-8b44-ef8a7b77933b-storage-provisioner-token-tgjds") pod "storage-provisioner" (UID: "d2bb8a00-af67-4503-8b44-ef8a7b77933b")
	* Mar 10 20:15:06 skaffold-20210310201235-6496 kubelet[3075]: I0310 20:15:06.469372    3075 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5ba097b-f8a0-4a58-9f5b-9f4648645c3d-config-volume") pod "coredns-74ff55c5b-wg9rq" (UID: "c5ba097b-f8a0-4a58-9f5b-9f4648645c3d")
	* Mar 10 20:15:06 skaffold-20210310201235-6496 kubelet[3075]: I0310 20:15:06.469637    3075 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-n9zcs" (UniqueName: "kubernetes.io/secret/c5ba097b-f8a0-4a58-9f5b-9f4648645c3d-coredns-token-n9zcs") pod "coredns-74ff55c5b-wg9rq" (UID: "c5ba097b-f8a0-4a58-9f5b-9f4648645c3d")
	* Mar 10 20:15:12 skaffold-20210310201235-6496 kubelet[3075]: W0310 20:15:12.476222    3075 pod_container_deletor.go:79] Container "bd30b8a2ae38563256a942a1761e915ae72f8a8c1a5e008542e91fde65e6a273" not found in pod's containers
	* Mar 10 20:15:12 skaffold-20210310201235-6496 kubelet[3075]: W0310 20:15:12.540824    3075 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-74ff55c5b-wg9rq through plugin: invalid network status for
	* Mar 10 20:15:12 skaffold-20210310201235-6496 kubelet[3075]: W0310 20:15:12.568340    3075 pod_container_deletor.go:79] Container "a74312df8bc76101bf0be5465bbbee698a053ed814e22b64d8d383a189e05729" not found in pod's containers
	* Mar 10 20:15:12 skaffold-20210310201235-6496 kubelet[3075]: W0310 20:15:12.665743    3075 pod_container_deletor.go:79] Container "e5d464cf8018e638f47f268cd014a7dc0f220b1242a271cdfec6c47ad092a294" not found in pod's containers
	* Mar 10 20:15:13 skaffold-20210310201235-6496 kubelet[3075]: W0310 20:15:13.746699    3075 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-74ff55c5b-wg9rq through plugin: invalid network status for
	* 
	* ==> storage-provisioner [27bd520fd319] <==
	* I0310 20:15:12.169192       1 storage_provisioner.go:115] Initializing the minikube storage provisioner...
	* F0310 20:15:33.182149       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	* 
	* ==> Audit <==
	* |---------|--------------------------------------|--------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| Command |                 Args                 |               Profile                |          User           | Version |          Start Time           |           End Time            |
	|---------|--------------------------------------|--------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| start   | -p                                   | existing-network-20210310194026-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:40:27 GMT | Wed, 10 Mar 2021 19:43:11 GMT |
	|         | existing-network-20210310194026-6496 |                                      |                         |         |                               |                               |
	|         | --network=existing-network           |                                      |                         |         |                               |                               |
	| delete  | -p                                   | existing-network-20210310194026-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:43:11 GMT | Wed, 10 Mar 2021 19:43:22 GMT |
	|         | existing-network-20210310194026-6496 |                                      |                         |         |                               |                               |
	| start   | -p                                   | multinode-20210310194323-6496        | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:43:23 GMT | Wed, 10 Mar 2021 19:49:13 GMT |
	|         | multinode-20210310194323-6496        |                                      |                         |         |                               |                               |
	|         | --wait=true --memory=2200            |                                      |                         |         |                               |                               |
	|         | --nodes=2 -v=8                       |                                      |                         |         |                               |                               |
	|         | --alsologtostderr                    |                                      |                         |         |                               |                               |
	|         | --driver=docker                      |                                      |                         |         |                               |                               |
	| node    | add -p                               | multinode-20210310194323-6496        | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:49:19 GMT | Wed, 10 Mar 2021 19:51:08 GMT |
	|         | multinode-20210310194323-6496        |                                      |                         |         |                               |                               |
	|         | -v 3 --alsologtostderr               |                                      |                         |         |                               |                               |
	| profile | list --output json                   | minikube                             | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:51:15 GMT | Wed, 10 Mar 2021 19:51:17 GMT |
	| -p      | multinode-20210310194323-6496        | multinode-20210310194323-6496        | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:51:18 GMT | Wed, 10 Mar 2021 19:51:21 GMT |
	|         | node stop m03                        |                                      |                         |         |                               |                               |
	| -p      | multinode-20210310194323-6496        | multinode-20210310194323-6496        | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:51:33 GMT | Wed, 10 Mar 2021 19:52:13 GMT |
	|         | node start m03                       |                                      |                         |         |                               |                               |
	|         | --alsologtostderr                    |                                      |                         |         |                               |                               |
	| -p      | multinode-20210310194323-6496        | multinode-20210310194323-6496        | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:52:21 GMT | Wed, 10 Mar 2021 19:52:39 GMT |
	|         | node delete m03                      |                                      |                         |         |                               |                               |
	| -p      | multinode-20210310194323-6496        | multinode-20210310194323-6496        | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:52:44 GMT | Wed, 10 Mar 2021 19:53:02 GMT |
	|         | stop                                 |                                      |                         |         |                               |                               |
	| -p      | multinode-20210310194323-6496        | multinode-20210310194323-6496        | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:59:04 GMT | Wed, 10 Mar 2021 19:59:20 GMT |
	|         | logs -n 25                           |                                      |                         |         |                               |                               |
	| start   | -p                                   | multinode-20210310194323-6496-m03    | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 19:59:27 GMT | Wed, 10 Mar 2021 20:02:27 GMT |
	|         | multinode-20210310194323-6496-m03    |                                      |                         |         |                               |                               |
	|         | --driver=docker                      |                                      |                         |         |                               |                               |
	| delete  | -p                                   | multinode-20210310194323-6496-m03    | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:02:30 GMT | Wed, 10 Mar 2021 20:02:41 GMT |
	|         | multinode-20210310194323-6496-m03    |                                      |                         |         |                               |                               |
	| -p      | multinode-20210310194323-6496        | multinode-20210310194323-6496        | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:02:45 GMT | Wed, 10 Mar 2021 20:02:59 GMT |
	|         | logs -n 25                           |                                      |                         |         |                               |                               |
	| delete  | -p                                   | multinode-20210310194323-6496        | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:03:05 GMT | Wed, 10 Mar 2021 20:03:22 GMT |
	|         | multinode-20210310194323-6496        |                                      |                         |         |                               |                               |
	| start   | -p                                   | test-preload-20210310200323-6496     | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:03:23 GMT | Wed, 10 Mar 2021 20:06:49 GMT |
	|         | test-preload-20210310200323-6496     |                                      |                         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr      |                                      |                         |         |                               |                               |
	|         | --wait=true --preload=false          |                                      |                         |         |                               |                               |
	|         | --driver=docker                      |                                      |                         |         |                               |                               |
	|         | --kubernetes-version=v1.17.0         |                                      |                         |         |                               |                               |
	| ssh     | -p                                   | test-preload-20210310200323-6496     | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:06:50 GMT | Wed, 10 Mar 2021 20:06:54 GMT |
	|         | test-preload-20210310200323-6496     |                                      |                         |         |                               |                               |
	|         | -- docker pull busybox               |                                      |                         |         |                               |                               |
	| start   | -p                                   | test-preload-20210310200323-6496     | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:06:54 GMT | Wed, 10 Mar 2021 20:08:51 GMT |
	|         | test-preload-20210310200323-6496     |                                      |                         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr      |                                      |                         |         |                               |                               |
	|         | -v=1 --wait=true --driver=docker     |                                      |                         |         |                               |                               |
	|         | --kubernetes-version=v1.17.3         |                                      |                         |         |                               |                               |
	| ssh     | -p                                   | test-preload-20210310200323-6496     | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:08:51 GMT | Wed, 10 Mar 2021 20:08:54 GMT |
	|         | test-preload-20210310200323-6496     |                                      |                         |         |                               |                               |
	|         | -- docker images                     |                                      |                         |         |                               |                               |
	| delete  | -p                                   | test-preload-20210310200323-6496     | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:08:54 GMT | Wed, 10 Mar 2021 20:09:05 GMT |
	|         | test-preload-20210310200323-6496     |                                      |                         |         |                               |                               |
	| start   | -p                                   | scheduled-stop-20210310200905-6496   | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:09:06 GMT | Wed, 10 Mar 2021 20:11:51 GMT |
	|         | scheduled-stop-20210310200905-6496   |                                      |                         |         |                               |                               |
	|         | --memory=1900 --driver=docker        |                                      |                         |         |                               |                               |
	| stop    | -p                                   | scheduled-stop-20210310200905-6496   | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:11:52 GMT | Wed, 10 Mar 2021 20:11:54 GMT |
	|         | scheduled-stop-20210310200905-6496   |                                      |                         |         |                               |                               |
	|         | --schedule 5m                        |                                      |                         |         |                               |                               |
	| ssh     | -p                                   | scheduled-stop-20210310200905-6496   | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:11:57 GMT | Wed, 10 Mar 2021 20:11:59 GMT |
	|         | scheduled-stop-20210310200905-6496   |                                      |                         |         |                               |                               |
	|         | -- sudo systemctl show               |                                      |                         |         |                               |                               |
	|         | minikube-scheduled-stop --no-page    |                                      |                         |         |                               |                               |
	| stop    | -p                                   | scheduled-stop-20210310200905-6496   | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:12:00 GMT | Wed, 10 Mar 2021 20:12:02 GMT |
	|         | scheduled-stop-20210310200905-6496   |                                      |                         |         |                               |                               |
	|         | --schedule 5s                        |                                      |                         |         |                               |                               |
	| delete  | -p                                   | scheduled-stop-20210310200905-6496   | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:12:26 GMT | Wed, 10 Mar 2021 20:12:35 GMT |
	|         | scheduled-stop-20210310200905-6496   |                                      |                         |         |                               |                               |
	| start   | -p                                   | skaffold-20210310201235-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:12:37 GMT | Wed, 10 Mar 2021 20:15:24 GMT |
	|         | skaffold-20210310201235-6496         |                                      |                         |         |                               |                               |
	|         | --memory=2600 --driver=docker        |                                      |                         |         |                               |                               |
	|---------|--------------------------------------|--------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/03/10 20:12:37
	* Running on machine: windows-server-1
	* Binary: Built with gc go1.16 for windows/amd64
	* Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	* I0310 20:12:37.454408    7892 out.go:239] Setting OutFile to fd 2532 ...
	* I0310 20:12:37.456409    7892 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 20:12:37.456409    7892 out.go:252] Setting ErrFile to fd 2600...
	* I0310 20:12:37.456409    7892 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 20:12:37.468405    7892 out.go:246] Setting JSON to false
	* I0310 20:12:37.470390    7892 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":32623,"bootTime":1615374534,"procs":106,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	* W0310 20:12:37.470390    7892 start.go:116] gopshost.Virtualization returned error: not implemented yet
	* I0310 20:12:37.479065    7892 out.go:129] * [skaffold-20210310201235-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	* I0310 20:12:37.483051    7892 out.go:129]   - MINIKUBE_LOCATION=10722
	* I0310 20:12:37.484562    7892 driver.go:323] Setting default libvirt URI to qemu:///system
	* I0310 20:12:37.911381    7892 docker.go:119] docker version: linux-20.10.2
	* I0310 20:12:37.920558    7892 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 20:12:38.700354    7892 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:46 SystemTime:2021-03-10 20:12:38.34381 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 20:12:38.703807    7892 out.go:129] * Using the docker driver based on user configuration
	* I0310 20:12:38.703807    7892 start.go:276] selected driver: docker
	* I0310 20:12:38.703807    7892 start.go:718] validating driver "docker" against <nil>
	* I0310 20:12:38.704129    7892 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	* I0310 20:12:40.546379    7892 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 20:12:41.274650    7892 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:46 SystemTime:2021-03-10 20:12:40.9524704 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 20:12:41.275477    7892 start_flags.go:253] no existing cluster config was found, will generate one from the flags 
	* I0310 20:12:41.275804    7892 start_flags.go:699] Wait components to verify : map[apiserver:true system_pods:true]
	* I0310 20:12:41.275804    7892 cni.go:74] Creating CNI manager for ""
	* I0310 20:12:41.276097    7892 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	* I0310 20:12:41.276097    7892 start_flags.go:398] config:
	* {Name:skaffold-20210310201235-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2600 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:skaffold-20210310201235-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 20:12:41.280107    7892 out.go:129] * Starting control plane node skaffold-20210310201235-6496 in cluster skaffold-20210310201235-6496
	* I0310 20:12:41.759120    7892 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	* I0310 20:12:41.759120    7892 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	* I0310 20:12:41.759509    7892 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 20:12:41.759509    7892 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 20:12:41.759509    7892 cache.go:54] Caching tarball of preloaded images
	* I0310 20:12:41.759873    7892 preload.go:131] Found C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	* I0310 20:12:41.759873    7892 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on docker
	* I0310 20:12:41.760870    7892 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\skaffold-20210310201235-6496\config.json ...
	* I0310 20:12:41.761160    7892 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\skaffold-20210310201235-6496\config.json: {Name:mk02c7b865fb6c8a81c8a5a8b4d6f277456e6902 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 20:12:41.776906    7892 cache.go:185] Successfully downloaded all kic artifacts
	* I0310 20:12:41.777493    7892 start.go:313] acquiring machines lock for skaffold-20210310201235-6496: {Name:mk8bf7de831f6e6ddbe2341194b01bcc2164fbaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:12:41.777820    7892 start.go:317] acquired machines lock for "skaffold-20210310201235-6496" in 327.5µs
	* I0310 20:12:41.777820    7892 start.go:89] Provisioning new machine with config: &{Name:skaffold-20210310201235-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2600 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:skaffold-20210310201235-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
	* I0310 20:12:41.778225    7892 start.go:126] createHost starting for "" (driver="docker")
	* I0310 20:12:41.781571    7892 out.go:150] * Creating docker container (CPUs=2, Memory=2600MB) ...
	* I0310 20:12:41.782126    7892 start.go:160] libmachine.API.Create for "skaffold-20210310201235-6496" (driver="docker")
	* I0310 20:12:41.782510    7892 client.go:168] LocalClient.Create starting
	* I0310 20:12:41.783225    7892 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\ca.pem
	* I0310 20:12:41.783527    7892 main.go:121] libmachine: Decoding PEM data...
	* I0310 20:12:41.783794    7892 main.go:121] libmachine: Parsing certificate...
	* I0310 20:12:41.784156    7892 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\cert.pem
	* I0310 20:12:41.784464    7892 main.go:121] libmachine: Decoding PEM data...
	* I0310 20:12:41.784464    7892 main.go:121] libmachine: Parsing certificate...
	* I0310 20:12:41.805498    7892 cli_runner.go:115] Run: docker network inspect skaffold-20210310201235-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	* W0310 20:12:42.275262    7892 cli_runner.go:162] docker network inspect skaffold-20210310201235-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	* I0310 20:12:42.286688    7892 network_create.go:240] running [docker network inspect skaffold-20210310201235-6496] to gather additional debugging logs...
	* I0310 20:12:42.286688    7892 cli_runner.go:115] Run: docker network inspect skaffold-20210310201235-6496
	* W0310 20:12:42.741858    7892 cli_runner.go:162] docker network inspect skaffold-20210310201235-6496 returned with exit code 1
	* I0310 20:12:42.742263    7892 network_create.go:243] error running [docker network inspect skaffold-20210310201235-6496]: docker network inspect skaffold-20210310201235-6496: exit status 1
	* stdout:
	* []
	* 
	* stderr:
	* Error: No such network: skaffold-20210310201235-6496
	* I0310 20:12:42.742263    7892 network_create.go:245] output of [docker network inspect skaffold-20210310201235-6496]: -- stdout --
	* []
	* 
	* -- /stdout --
	* ** stderr ** 
	* Error: No such network: skaffold-20210310201235-6496
	* 
	* ** /stderr **
	* I0310 20:12:42.758602    7892 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	* I0310 20:12:43.230467    7892 network.go:193] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	* I0310 20:12:43.231625    7892 network_create.go:91] attempt to create network 192.168.49.0/24 with subnet: skaffold-20210310201235-6496 and gateway 192.168.49.1 and MTU of 1500 ...
	* I0310 20:12:43.240255    7892 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true skaffold-20210310201235-6496
	* I0310 20:12:43.849602    7892 kic.go:102] calculated static IP "192.168.49.97" for the "skaffold-20210310201235-6496" container
	* I0310 20:12:43.866007    7892 cli_runner.go:115] Run: docker ps -a --format 
	* I0310 20:12:44.327866    7892 cli_runner.go:115] Run: docker volume create skaffold-20210310201235-6496 --label name.minikube.sigs.k8s.io=skaffold-20210310201235-6496 --label created_by.minikube.sigs.k8s.io=true
	* I0310 20:12:44.787951    7892 oci.go:102] Successfully created a docker volume skaffold-20210310201235-6496
	* I0310 20:12:44.796477    7892 cli_runner.go:115] Run: docker run --rm --name skaffold-20210310201235-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=skaffold-20210310201235-6496 --entrypoint /usr/bin/test -v skaffold-20210310201235-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib
	* I0310 20:12:46.664073    7892 cli_runner.go:168] Completed: docker run --rm --name skaffold-20210310201235-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=skaffold-20210310201235-6496 --entrypoint /usr/bin/test -v skaffold-20210310201235-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib: (1.8676093s)
	* I0310 20:12:46.664073    7892 oci.go:106] Successfully prepared a docker volume skaffold-20210310201235-6496
	* I0310 20:12:46.664073    7892 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 20:12:46.664799    7892 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 20:12:46.664799    7892 kic.go:175] Starting extracting preloaded images to volume ...
	* I0310 20:12:46.675191    7892 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 20:12:46.683589    7892 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v skaffold-20210310201235-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir
	* W0310 20:12:47.236379    7892 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v skaffold-20210310201235-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	* I0310 20:12:47.236379    7892 kic.go:182] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v skaffold-20210310201235-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	* stdout:
	* 
	* stderr:
	* docker: Error response from daemon: status code not OK but 500: System.Exception: The notification platform is unavailable.
	* 
	* The notification platform is unavailable.
	* 
	*    at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)
	*    at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.WPF\PromptShareDirectory.cs:line 53
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__8.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 95
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 55
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.HttpApi\Controllers\FilesharingController.cs:line 21
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()
	* CreateToastNotifier
	* Windows.UI, Version=255.255.255.255, Culture=neutral, PublicKeyToken=null, ContentType=WindowsRuntime
	* Windows.UI.Notifications.ToastNotificationManager
	* Windows.UI.Notifications.ToastNotifier CreateToastNotifier(System.String)
	* RestrictedDescription: The notification platform is unavailable.
	* See 'docker run --help'.
	* I0310 20:12:47.454050    7892 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-03-10 20:12:47.1065778 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 20:12:47.463494    7892 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	* I0310 20:12:48.208125    7892 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname skaffold-20210310201235-6496 --name skaffold-20210310201235-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=skaffold-20210310201235-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=skaffold-20210310201235-6496 --network skaffold-20210310201235-6496 --ip 192.168.49.97 --volume skaffold-20210310201235-6496:/var --security-opt apparmor=unconfined --memory=2600mb --memory-swap=2600mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
	* I0310 20:12:49.958962    7892 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname skaffold-20210310201235-6496 --name skaffold-20210310201235-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=skaffold-20210310201235-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=skaffold-20210310201235-6496 --network skaffold-20210310201235-6496 --ip 192.168.49.97 --volume skaffold-20210310201235-6496:/var --security-opt apparmor=unconfined --memory=2600mb --memory-swap=2600mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e: (1.7508485s)
	* I0310 20:12:49.969301    7892 cli_runner.go:115] Run: docker container inspect skaffold-20210310201235-6496 --format=
	* I0310 20:12:50.485516    7892 cli_runner.go:115] Run: docker container inspect skaffold-20210310201235-6496 --format=
	* I0310 20:12:51.002857    7892 cli_runner.go:115] Run: docker exec skaffold-20210310201235-6496 stat /var/lib/dpkg/alternatives/iptables
	* I0310 20:12:51.782069    7892 oci.go:278] the created container "skaffold-20210310201235-6496" has a running status.
	* I0310 20:12:51.782069    7892 kic.go:206] Creating ssh key for kic: C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa...
	* I0310 20:12:52.008184    7892 kic_runner.go:188] docker (temp): C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	* I0310 20:12:52.852371    7892 cli_runner.go:115] Run: docker container inspect skaffold-20210310201235-6496 --format=
	* I0310 20:12:53.338262    7892 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	* I0310 20:12:53.338262    7892 kic_runner.go:115] Args: [docker exec --privileged skaffold-20210310201235-6496 chown docker:docker /home/docker/.ssh/authorized_keys]
	* I0310 20:12:53.989606    7892 kic.go:240] ensuring only current user has permissions to key file located at : C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa...
	* I0310 20:12:54.677053    7892 cli_runner.go:115] Run: docker container inspect skaffold-20210310201235-6496 --format=
	* I0310 20:12:55.138035    7892 machine.go:88] provisioning docker machine ...
	* I0310 20:12:55.138661    7892 ubuntu.go:169] provisioning hostname "skaffold-20210310201235-6496"
	* I0310 20:12:55.148633    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:12:55.622014    7892 main.go:121] libmachine: Using SSH client type: native
	* I0310 20:12:55.632345    7892 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55074 <nil> <nil>}
	* I0310 20:12:55.632345    7892 main.go:121] libmachine: About to run SSH command:
	* sudo hostname skaffold-20210310201235-6496 && echo "skaffold-20210310201235-6496" | sudo tee /etc/hostname
	* I0310 20:12:55.907387    7892 main.go:121] libmachine: SSH cmd err, output: <nil>: skaffold-20210310201235-6496
	* 
	* I0310 20:12:55.920350    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:12:56.382066    7892 main.go:121] libmachine: Using SSH client type: native
	* I0310 20:12:56.382756    7892 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55074 <nil> <nil>}
	* I0310 20:12:56.382756    7892 main.go:121] libmachine: About to run SSH command:
	* 
	* 		if ! grep -xq '.*\sskaffold-20210310201235-6496' /etc/hosts; then
	* 			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	* 				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 skaffold-20210310201235-6496/g' /etc/hosts;
	* 			else 
	* 				echo '127.0.1.1 skaffold-20210310201235-6496' | sudo tee -a /etc/hosts; 
	* 			fi
	* 		fi
	* I0310 20:12:56.598190    7892 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	* I0310 20:12:56.598553    7892 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	* I0310 20:12:56.598553    7892 ubuntu.go:177] setting up certificates
	* I0310 20:12:56.598553    7892 provision.go:83] configureAuth start
	* I0310 20:12:56.612194    7892 cli_runner.go:115] Run: docker container inspect -f "" skaffold-20210310201235-6496
	* I0310 20:12:57.067794    7892 provision.go:137] copyHostCerts
	* I0310 20:12:57.068616    7892 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	* I0310 20:12:57.068616    7892 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	* I0310 20:12:57.069347    7892 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	* I0310 20:12:57.072615    7892 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	* I0310 20:12:57.072615    7892 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	* I0310 20:12:57.073143    7892 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	* I0310 20:12:57.076071    7892 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	* I0310 20:12:57.076071    7892 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	* I0310 20:12:57.076898    7892 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	* I0310 20:12:57.079525    7892 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.skaffold-20210310201235-6496 san=[192.168.49.97 127.0.0.1 localhost 127.0.0.1 minikube skaffold-20210310201235-6496]
	* I0310 20:12:57.288739    7892 provision.go:165] copyRemoteCerts
	* I0310 20:12:57.299030    7892 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	* I0310 20:12:57.306853    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:12:57.751907    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:12:57.895088    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	* I0310 20:12:57.949860    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1257 bytes)
	* I0310 20:12:58.012460    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	* I0310 20:12:58.067626    7892 provision.go:86] duration metric: configureAuth took 1.4690828s
	* I0310 20:12:58.067626    7892 ubuntu.go:193] setting minikube options for container-runtime
	* I0310 20:12:58.076509    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:12:58.546506    7892 main.go:121] libmachine: Using SSH client type: native
	* I0310 20:12:58.546506    7892 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55074 <nil> <nil>}
	* I0310 20:12:58.546506    7892 main.go:121] libmachine: About to run SSH command:
	* df --output=fstype / | tail -n 1
	* I0310 20:12:58.775363    7892 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	* 
	* I0310 20:12:58.775363    7892 ubuntu.go:71] root file system type: overlay
	* I0310 20:12:58.775858    7892 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	* I0310 20:12:58.784425    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:12:59.244437    7892 main.go:121] libmachine: Using SSH client type: native
	* I0310 20:12:59.245006    7892 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55074 <nil> <nil>}
	* I0310 20:12:59.245006    7892 main.go:121] libmachine: About to run SSH command:
	* sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP \$MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* " | sudo tee /lib/systemd/system/docker.service.new
	* I0310 20:12:59.482032    7892 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP $MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* 
	* I0310 20:12:59.490271    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:12:59.953170    7892 main.go:121] libmachine: Using SSH client type: native
	* I0310 20:12:59.953652    7892 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55074 <nil> <nil>}
	* I0310 20:12:59.953652    7892 main.go:121] libmachine: About to run SSH command:
	* sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	* I0310 20:13:01.565199    7892 main.go:121] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-01-29 14:31:32.000000000 +0000
	* +++ /lib/systemd/system/docker.service.new	2021-03-10 20:12:59.476151000 +0000
	* @@ -1,30 +1,32 @@
	*  [Unit]
	*  Description=Docker Application Container Engine
	*  Documentation=https://docs.docker.com
	* +BindsTo=containerd.service
	*  After=network-online.target firewalld.service containerd.service
	*  Wants=network-online.target
	* -Requires=docker.socket containerd.service
	* +Requires=docker.socket
	* +StartLimitBurst=3
	* +StartLimitIntervalSec=60
	*  
	*  [Service]
	*  Type=notify
	* -# the default is not to use systemd for cgroups because the delegate issues still
	* -# exists and systemd currently does not support the cgroup feature set required
	* -# for containers run by docker
	* -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	* -ExecReload=/bin/kill -s HUP $MAINPID
	* -TimeoutSec=0
	* -RestartSec=2
	* -Restart=always
	* -
	* -# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	* -# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	* -# to make them work for either version of systemd.
	* -StartLimitBurst=3
	* +Restart=on-failure
	*  
	* -# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	* -# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	* -# this option work for either version of systemd.
	* -StartLimitInterval=60s
	* +
	* +
	* +# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* +# The base configuration already specifies an 'ExecStart=...' command. The first directive
	* +# here is to clear out that command inherited from the base configuration. Without this,
	* +# the command from the base configuration and the command specified here are treated as
	* +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* +# will catch this invalid input and refuse to start the service with an error like:
	* +#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* +
	* +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* +# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* +ExecStart=
	* +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* +ExecReload=/bin/kill -s HUP $MAINPID
	*  
	*  # Having non-zero Limit*s causes performance problems due to accounting overhead
	*  # in the kernel. We recommend using cgroups to do container-local accounting.
	* @@ -32,16 +34,16 @@
	*  LimitNPROC=infinity
	*  LimitCORE=infinity
	*  
	* -# Comment TasksMax if your systemd version does not support it.
	* -# Only systemd 226 and above support this option.
	* +# Uncomment TasksMax if your systemd version supports it.
	* +# Only systemd 226 and above support this version.
	*  TasksMax=infinity
	* +TimeoutStartSec=0
	*  
	*  # set delegate yes so that systemd does not reset the cgroups of docker containers
	*  Delegate=yes
	*  
	*  # kill only the docker process, not all processes in the cgroup
	*  KillMode=process
	* -OOMScoreAdjust=-500
	*  
	*  [Install]
	*  WantedBy=multi-user.target
	* Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	* Executing: /lib/systemd/systemd-sysv-install enable docker
	* 
	* I0310 20:13:01.565199    7892 machine.go:91] provisioned docker machine in 6.4272059s
	* I0310 20:13:01.565199    7892 client.go:171] LocalClient.Create took 19.7828208s
	* I0310 20:13:01.565199    7892 start.go:168] duration metric: libmachine.API.Create for "skaffold-20210310201235-6496" took 19.7832041s
	* I0310 20:13:01.565199    7892 start.go:267] post-start starting for "skaffold-20210310201235-6496" (driver="docker")
	* I0310 20:13:01.565199    7892 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	* I0310 20:13:01.570037    7892 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	* I0310 20:13:01.582670    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:13:02.056749    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:13:02.225974    7892 ssh_runner.go:149] Run: cat /etc/os-release
	* I0310 20:13:02.243256    7892 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	* I0310 20:13:02.243256    7892 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	* I0310 20:13:02.243256    7892 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	* I0310 20:13:02.243256    7892 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	* I0310 20:13:02.243658    7892 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	* I0310 20:13:02.243658    7892 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	* I0310 20:13:02.246211    7892 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	* I0310 20:13:02.247344    7892 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	* I0310 20:13:02.260741    7892 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	* I0310 20:13:02.289689    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	* I0310 20:13:02.345944    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	* I0310 20:13:02.402490    7892 start.go:270] post-start completed in 837.2964ms
	* I0310 20:13:02.428181    7892 cli_runner.go:115] Run: docker container inspect -f "" skaffold-20210310201235-6496
	* I0310 20:13:02.902304    7892 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\skaffold-20210310201235-6496\config.json ...
	* I0310 20:13:02.929147    7892 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	* I0310 20:13:02.937412    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:13:03.399700    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:13:03.554243    7892 start.go:129] duration metric: createHost completed in 21.7761629s
	* I0310 20:13:03.554243    7892 start.go:80] releasing machines lock for "skaffold-20210310201235-6496", held for 21.7765672s
	* I0310 20:13:03.570802    7892 cli_runner.go:115] Run: docker container inspect -f "" skaffold-20210310201235-6496
	* I0310 20:13:04.059761    7892 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	* I0310 20:13:04.066831    7892 ssh_runner.go:149] Run: systemctl --version
	* I0310 20:13:04.073262    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:13:04.074269    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:13:04.559841    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:13:04.590143    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:13:04.810855    7892 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	* I0310 20:13:04.866788    7892 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	* I0310 20:13:04.906182    7892 cruntime.go:206] skipping containerd shutdown because we are bound to it
	* I0310 20:13:04.915586    7892 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	* I0310 20:13:04.954076    7892 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	* image-endpoint: unix:///var/run/dockershim.sock
	* " | sudo tee /etc/crictl.yaml"
	* I0310 20:13:05.022270    7892 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	* I0310 20:13:05.073769    7892 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	* I0310 20:13:05.217137    7892 ssh_runner.go:149] Run: sudo systemctl start docker
	* I0310 20:13:05.260680    7892 ssh_runner.go:149] Run: docker version --format 
	* I0310 20:13:05.445048    7892 out.go:150] * Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	* I0310 20:13:05.454281    7892 cli_runner.go:115] Run: docker exec -t skaffold-20210310201235-6496 dig +short host.docker.internal
	* I0310 20:13:06.157639    7892 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	* I0310 20:13:06.168203    7892 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	* I0310 20:13:06.190414    7892 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	* I0310 20:13:06.233425    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:13:06.701153    7892 localpath.go:92] copying C:\Users\jenkins\.minikube\client.crt -> C:\Users\jenkins\.minikube\profiles\skaffold-20210310201235-6496\client.crt
	* I0310 20:13:06.704163    7892 localpath.go:117] copying C:\Users\jenkins\.minikube\client.key -> C:\Users\jenkins\.minikube\profiles\skaffold-20210310201235-6496\client.key
	* I0310 20:13:06.707547    7892 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 20:13:06.707547    7892 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 20:13:06.716084    7892 ssh_runner.go:149] Run: docker images --format :
	* I0310 20:13:06.811444    7892 docker.go:423] Got preloaded images: 
	* I0310 20:13:06.811444    7892 docker.go:429] k8s.gcr.io/kube-proxy:v1.20.2 wasn't preloaded
	* I0310 20:13:06.825905    7892 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	* I0310 20:13:06.867999    7892 ssh_runner.go:149] Run: which lz4
	* I0310 20:13:06.896335    7892 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	* I0310 20:13:06.912548    7892 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	* I0310 20:13:06.912548    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (515083977 bytes)
	* I0310 20:13:54.630667    7892 docker.go:388] Took 47.745844 seconds to copy over tarball
	* I0310 20:13:54.646626    7892 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	* I0310 20:14:01.067063    7892 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (6.4204716s)
	* I0310 20:14:01.067063    7892 ssh_runner.go:100] rm: /preloaded.tar.lz4
	* I0310 20:14:01.360822    7892 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	* I0310 20:14:01.397264    7892 ssh_runner.go:316] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3125 bytes)
	* I0310 20:14:01.460193    7892 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	* I0310 20:14:01.636558    7892 ssh_runner.go:149] Run: sudo systemctl restart docker
	* I0310 20:14:07.456969    7892 ssh_runner.go:189] Completed: sudo systemctl restart docker: (5.8193956s)
	* I0310 20:14:07.473487    7892 ssh_runner.go:149] Run: docker images --format :
	* I0310 20:14:07.626812    7892 docker.go:423] Got preloaded images: -- stdout --
	* k8s.gcr.io/kube-proxy:v1.20.2
	* k8s.gcr.io/kube-controller-manager:v1.20.2
	* k8s.gcr.io/kube-apiserver:v1.20.2
	* k8s.gcr.io/kube-scheduler:v1.20.2
	* kubernetesui/dashboard:v2.1.0
	* gcr.io/k8s-minikube/storage-provisioner:v4
	* k8s.gcr.io/etcd:3.4.13-0
	* k8s.gcr.io/coredns:1.7.0
	* kubernetesui/metrics-scraper:v1.0.4
	* k8s.gcr.io/pause:3.2
	* 
	* -- /stdout --
	* I0310 20:14:07.626812    7892 cache_images.go:73] Images are preloaded, skipping loading
	* I0310 20:14:07.639005    7892 ssh_runner.go:149] Run: docker info --format 
	* I0310 20:14:07.903012    7892 cni.go:74] Creating CNI manager for ""
	* I0310 20:14:07.903012    7892 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	* I0310 20:14:07.903309    7892 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	* I0310 20:14:07.903545    7892 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.97 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:skaffold-20210310201235-6496 NodeName:skaffold-20210310201235-6496 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	* I0310 20:14:07.904911    7892 kubeadm.go:154] kubeadm config:
	* apiVersion: kubeadm.k8s.io/v1beta2
	* kind: InitConfiguration
	* localAPIEndpoint:
	*   advertiseAddress: 192.168.49.97
	*   bindPort: 8443
	* bootstrapTokens:
	*   - groups:
	*       - system:bootstrappers:kubeadm:default-node-token
	*     ttl: 24h0m0s
	*     usages:
	*       - signing
	*       - authentication
	* nodeRegistration:
	*   criSocket: /var/run/dockershim.sock
	*   name: "skaffold-20210310201235-6496"
	*   kubeletExtraArgs:
	*     node-ip: 192.168.49.97
	*   taints: []
	* ---
	* apiVersion: kubeadm.k8s.io/v1beta2
	* kind: ClusterConfiguration
	* apiServer:
	*   certSANs: ["127.0.0.1", "localhost", "192.168.49.97"]
	*   extraArgs:
	*     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	* controllerManager:
	*   extraArgs:
	*     allocate-node-cidrs: "true"
	*     leader-elect: "false"
	* scheduler:
	*   extraArgs:
	*     leader-elect: "false"
	* certificatesDir: /var/lib/minikube/certs
	* clusterName: mk
	* controlPlaneEndpoint: control-plane.minikube.internal:8443
	* dns:
	*   type: CoreDNS
	* etcd:
	*   local:
	*     dataDir: /var/lib/minikube/etcd
	*     extraArgs:
	*       proxy-refresh-interval: "70000"
	* kubernetesVersion: v1.20.2
	* networking:
	*   dnsDomain: cluster.local
	*   podSubnet: "10.244.0.0/16"
	*   serviceSubnet: 10.96.0.0/12
	* ---
	* apiVersion: kubelet.config.k8s.io/v1beta1
	* kind: KubeletConfiguration
	* authentication:
	*   x509:
	*     clientCAFile: /var/lib/minikube/certs/ca.crt
	* cgroupDriver: cgroupfs
	* clusterDomain: "cluster.local"
	* # disable disk resource management by default
	* imageGCHighThresholdPercent: 100
	* evictionHard:
	*   nodefs.available: "0%"
	*   nodefs.inodesFree: "0%"
	*   imagefs.available: "0%"
	* failSwapOn: false
	* staticPodPath: /etc/kubernetes/manifests
	* ---
	* apiVersion: kubeproxy.config.k8s.io/v1alpha1
	* kind: KubeProxyConfiguration
	* clusterCIDR: "10.244.0.0/16"
	* metricsBindAddress: 0.0.0.0:10249
	* 
	* I0310 20:14:07.906889    7892 kubeadm.go:919] kubelet [Unit]
	* Wants=docker.socket
	* 
	* [Service]
	* ExecStart=
	* ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=skaffold-20210310201235-6496 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.97
	* 
	* [Install]
	*  config:
	* {KubernetesVersion:v1.20.2 ClusterName:skaffold-20210310201235-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	* I0310 20:14:07.922432    7892 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
	* I0310 20:14:07.954042    7892 binaries.go:44] Found k8s binaries, skipping transfer
	* I0310 20:14:07.964579    7892 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	* I0310 20:14:07.990584    7892 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (355 bytes)
	* I0310 20:14:08.036058    7892 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	* I0310 20:14:08.079924    7892 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1863 bytes)
	* I0310 20:14:08.146559    7892 ssh_runner.go:149] Run: grep 192.168.49.97	control-plane.minikube.internal$ /etc/hosts
	* I0310 20:14:08.167588    7892 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "192.168.49.97	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	* I0310 20:14:08.200635    7892 certs.go:52] Setting up C:\Users\jenkins\.minikube\profiles\skaffold-20210310201235-6496 for IP: 192.168.49.97
	* I0310 20:14:08.200894    7892 certs.go:171] skipping minikubeCA CA generation: C:\Users\jenkins\.minikube\ca.key
	* I0310 20:14:08.201337    7892 certs.go:171] skipping proxyClientCA CA generation: C:\Users\jenkins\.minikube\proxy-client-ca.key
	* I0310 20:14:08.202072    7892 certs.go:275] skipping minikube-user signed cert generation: C:\Users\jenkins\.minikube\profiles\skaffold-20210310201235-6496\client.key
	* I0310 20:14:08.202072    7892 certs.go:279] generating minikube signed cert: C:\Users\jenkins\.minikube\profiles\skaffold-20210310201235-6496\apiserver.key.b6188fac
	* I0310 20:14:08.202072    7892 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\skaffold-20210310201235-6496\apiserver.crt.b6188fac with IP's: [192.168.49.97 10.96.0.1 127.0.0.1 10.0.0.1]
	* I0310 20:14:08.380034    7892 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\skaffold-20210310201235-6496\apiserver.crt.b6188fac ...
	* I0310 20:14:08.381037    7892 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\skaffold-20210310201235-6496\apiserver.crt.b6188fac: {Name:mkf353156c3287a6340f04f5dc72089022aa33d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 20:14:08.405958    7892 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\skaffold-20210310201235-6496\apiserver.key.b6188fac ...
	* I0310 20:14:08.405958    7892 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\skaffold-20210310201235-6496\apiserver.key.b6188fac: {Name:mkbe509633bfe80baef60a0b47c767a7ecf8fbb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 20:14:08.424947    7892 certs.go:290] copying C:\Users\jenkins\.minikube\profiles\skaffold-20210310201235-6496\apiserver.crt.b6188fac -> C:\Users\jenkins\.minikube\profiles\skaffold-20210310201235-6496\apiserver.crt
	* I0310 20:14:08.430641    7892 certs.go:294] copying C:\Users\jenkins\.minikube\profiles\skaffold-20210310201235-6496\apiserver.key.b6188fac -> C:\Users\jenkins\.minikube\profiles\skaffold-20210310201235-6496\apiserver.key
	* I0310 20:14:08.434846    7892 certs.go:279] generating aggregator signed cert: C:\Users\jenkins\.minikube\profiles\skaffold-20210310201235-6496\proxy-client.key
	* I0310 20:14:08.435067    7892 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\skaffold-20210310201235-6496\proxy-client.crt with IP's: []
	* I0310 20:14:08.825294    7892 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\skaffold-20210310201235-6496\proxy-client.crt ...
	* I0310 20:14:08.825294    7892 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\skaffold-20210310201235-6496\proxy-client.crt: {Name:mkc9476a12cea58caa9b47550258b78159939279 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 20:14:08.841723    7892 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\skaffold-20210310201235-6496\proxy-client.key ...
	* I0310 20:14:08.841935    7892 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\skaffold-20210310201235-6496\proxy-client.key: {Name:mk0ed4775ea2f84108d48805cce59c109419bbf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 20:14:08.854799    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992.pem (1338 bytes)
	* W0310 20:14:08.854799    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.854799    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140.pem (1338 bytes)
	* W0310 20:14:08.855425    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.855425    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156.pem (1338 bytes)
	* W0310 20:14:08.855729    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.855729    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056.pem (1338 bytes)
	* W0310 20:14:08.855729    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.855729    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476.pem (1338 bytes)
	* W0310 20:14:08.855729    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.855729    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728.pem (1338 bytes)
	* W0310 20:14:08.856564    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.856564    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984.pem (1338 bytes)
	* W0310 20:14:08.856564    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.856564    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232.pem (1338 bytes)
	* W0310 20:14:08.856564    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.856564    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512.pem (1338 bytes)
	* W0310 20:14:08.856564    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.856564    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056.pem (1338 bytes)
	* W0310 20:14:08.857591    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.857591    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516.pem (1338 bytes)
	* W0310 20:14:08.857591    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.857591    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352.pem (1338 bytes)
	* W0310 20:14:08.857591    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.857591    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920.pem (1338 bytes)
	* W0310 20:14:08.857591    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.858609    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052.pem (1338 bytes)
	* W0310 20:14:08.858609    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.858609    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452.pem (1338 bytes)
	* W0310 20:14:08.858609    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.858609    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588.pem (1338 bytes)
	* W0310 20:14:08.858609    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.858609    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944.pem (1338 bytes)
	* W0310 20:14:08.859567    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.859567    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040.pem (1338 bytes)
	* W0310 20:14:08.859567    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.859567    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172.pem (1338 bytes)
	* W0310 20:14:08.859567    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.859567    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372.pem (1338 bytes)
	* W0310 20:14:08.859567    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.859567    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396.pem (1338 bytes)
	* W0310 20:14:08.860590    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.860590    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700.pem (1338 bytes)
	* W0310 20:14:08.860590    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.860590    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736.pem (1338 bytes)
	* W0310 20:14:08.860590    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.860590    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368.pem (1338 bytes)
	* W0310 20:14:08.860590    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.861617    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492.pem (1338 bytes)
	* W0310 20:14:08.861617    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.861617    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496.pem (1338 bytes)
	* W0310 20:14:08.861617    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.861617    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552.pem (1338 bytes)
	* W0310 20:14:08.861617    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.861617    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692.pem (1338 bytes)
	* W0310 20:14:08.862566    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.862566    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856.pem (1338 bytes)
	* W0310 20:14:08.862566    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.862566    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024.pem (1338 bytes)
	* W0310 20:14:08.862566    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.862566    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160.pem (1338 bytes)
	* W0310 20:14:08.862566    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.862566    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432.pem (1338 bytes)
	* W0310 20:14:08.863621    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.863621    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440.pem (1338 bytes)
	* W0310 20:14:08.863621    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.863621    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452.pem (1338 bytes)
	* W0310 20:14:08.863621    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.863621    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800.pem (1338 bytes)
	* W0310 20:14:08.863621    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.863621    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464.pem (1338 bytes)
	* W0310 20:14:08.864578    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.864578    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748.pem (1338 bytes)
	* W0310 20:14:08.864578    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.864578    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088.pem (1338 bytes)
	* W0310 20:14:08.864578    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.864578    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520.pem (1338 bytes)
	* W0310 20:14:08.864578    7892 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520_empty.pem, impossibly tiny 0 bytes
	* I0310 20:14:08.865643    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca-key.pem (1679 bytes)
	* I0310 20:14:08.865643    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca.pem (1078 bytes)
	* I0310 20:14:08.865643    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\cert.pem (1123 bytes)
	* I0310 20:14:08.865643    7892 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\key.pem (1679 bytes)
	* I0310 20:14:08.871568    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\skaffold-20210310201235-6496\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	* I0310 20:14:08.937472    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\skaffold-20210310201235-6496\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	* I0310 20:14:08.996038    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\skaffold-20210310201235-6496\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	* I0310 20:14:09.056495    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\skaffold-20210310201235-6496\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	* I0310 20:14:09.116608    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	* I0310 20:14:09.178339    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	* I0310 20:14:09.234632    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	* I0310 20:14:09.288768    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	* I0310 20:14:09.350238    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4452.pem --> /usr/share/ca-certificates/4452.pem (1338 bytes)
	* I0310 20:14:09.406415    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1156.pem --> /usr/share/ca-certificates/1156.pem (1338 bytes)
	* I0310 20:14:09.465724    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7024.pem --> /usr/share/ca-certificates/7024.pem (1338 bytes)
	* I0310 20:14:09.519128    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5172.pem --> /usr/share/ca-certificates/5172.pem (1338 bytes)
	* I0310 20:14:09.573957    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5736.pem --> /usr/share/ca-certificates/5736.pem (1338 bytes)
	* I0310 20:14:09.632041    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9088.pem --> /usr/share/ca-certificates/9088.pem (1338 bytes)
	* I0310 20:14:09.691443    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5700.pem --> /usr/share/ca-certificates/5700.pem (1338 bytes)
	* I0310 20:14:09.746281    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7440.pem --> /usr/share/ca-certificates/7440.pem (1338 bytes)
	* I0310 20:14:09.801440    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4588.pem --> /usr/share/ca-certificates/4588.pem (1338 bytes)
	* I0310 20:14:09.856662    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6692.pem --> /usr/share/ca-certificates/6692.pem (1338 bytes)
	* I0310 20:14:09.916168    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\2512.pem --> /usr/share/ca-certificates/2512.pem (1338 bytes)
	* I0310 20:14:10.013943    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\800.pem --> /usr/share/ca-certificates/800.pem (1338 bytes)
	* I0310 20:14:10.078242    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6492.pem --> /usr/share/ca-certificates/6492.pem (1338 bytes)
	* I0310 20:14:10.135482    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6856.pem --> /usr/share/ca-certificates/6856.pem (1338 bytes)
	* I0310 20:14:10.190774    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1728.pem --> /usr/share/ca-certificates/1728.pem (1338 bytes)
	* I0310 20:14:10.248574    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6552.pem --> /usr/share/ca-certificates/6552.pem (1338 bytes)
	* I0310 20:14:10.310896    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	* I0310 20:14:10.365833    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\10992.pem --> /usr/share/ca-certificates/10992.pem (1338 bytes)
	* I0310 20:14:10.422257    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3516.pem --> /usr/share/ca-certificates/3516.pem (1338 bytes)
	* I0310 20:14:10.479055    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\352.pem --> /usr/share/ca-certificates/352.pem (1338 bytes)
	* I0310 20:14:10.543163    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	* I0310 20:14:10.595246    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8464.pem --> /usr/share/ca-certificates/8464.pem (1338 bytes)
	* I0310 20:14:10.651854    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1140.pem --> /usr/share/ca-certificates/1140.pem (1338 bytes)
	* I0310 20:14:10.708715    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5396.pem --> /usr/share/ca-certificates/5396.pem (1338 bytes)
	* I0310 20:14:10.773856    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1476.pem --> /usr/share/ca-certificates/1476.pem (1338 bytes)
	* I0310 20:14:10.833237    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\12056.pem --> /usr/share/ca-certificates/12056.pem (1338 bytes)
	* I0310 20:14:10.891959    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7432.pem --> /usr/share/ca-certificates/7432.pem (1338 bytes)
	* I0310 20:14:10.953111    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8748.pem --> /usr/share/ca-certificates/8748.pem (1338 bytes)
	* I0310 20:14:11.008173    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9520.pem --> /usr/share/ca-certificates/9520.pem (1338 bytes)
	* I0310 20:14:11.069185    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5040.pem --> /usr/share/ca-certificates/5040.pem (1338 bytes)
	* I0310 20:14:11.125008    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4944.pem --> /usr/share/ca-certificates/4944.pem (1338 bytes)
	* I0310 20:14:11.179210    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5372.pem --> /usr/share/ca-certificates/5372.pem (1338 bytes)
	* I0310 20:14:11.237027    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7160.pem --> /usr/share/ca-certificates/7160.pem (1338 bytes)
	* I0310 20:14:11.295785    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6496.pem --> /usr/share/ca-certificates/6496.pem (1338 bytes)
	* I0310 20:14:11.355753    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6368.pem --> /usr/share/ca-certificates/6368.pem (1338 bytes)
	* I0310 20:14:11.417248    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3056.pem --> /usr/share/ca-certificates/3056.pem (1338 bytes)
	* I0310 20:14:11.475319    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3920.pem --> /usr/share/ca-certificates/3920.pem (1338 bytes)
	* I0310 20:14:11.530011    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4052.pem --> /usr/share/ca-certificates/4052.pem (1338 bytes)
	* I0310 20:14:11.586941    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1984.pem --> /usr/share/ca-certificates/1984.pem (1338 bytes)
	* I0310 20:14:11.647412    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\232.pem --> /usr/share/ca-certificates/232.pem (1338 bytes)
	* I0310 20:14:11.705158    7892 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	* I0310 20:14:11.755651    7892 ssh_runner.go:149] Run: openssl version
	* I0310 20:14:11.794459    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5700.pem && ln -fs /usr/share/ca-certificates/5700.pem /etc/ssl/certs/5700.pem"
	* I0310 20:14:11.836679    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5700.pem
	* I0310 20:14:11.853110    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  1 19:58 /usr/share/ca-certificates/5700.pem
	* I0310 20:14:11.864911    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5700.pem
	* I0310 20:14:11.899276    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5700.pem /etc/ssl/certs/51391683.0"
	* I0310 20:14:11.955584    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7440.pem && ln -fs /usr/share/ca-certificates/7440.pem /etc/ssl/certs/7440.pem"
	* I0310 20:14:12.004928    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7440.pem
	* I0310 20:14:12.021542    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 13 14:39 /usr/share/ca-certificates/7440.pem
	* I0310 20:14:12.043642    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7440.pem
	* I0310 20:14:12.079564    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7440.pem /etc/ssl/certs/51391683.0"
	* I0310 20:14:12.119074    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4588.pem && ln -fs /usr/share/ca-certificates/4588.pem /etc/ssl/certs/4588.pem"
	* I0310 20:14:12.167866    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4588.pem
	* I0310 20:14:12.187789    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  3 21:41 /usr/share/ca-certificates/4588.pem
	* I0310 20:14:12.203593    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4588.pem
	* I0310 20:14:12.237171    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4588.pem /etc/ssl/certs/51391683.0"
	* I0310 20:14:12.276423    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6692.pem && ln -fs /usr/share/ca-certificates/6692.pem /etc/ssl/certs/6692.pem"
	* I0310 20:14:12.321123    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6692.pem
	* I0310 20:14:12.339485    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 20:42 /usr/share/ca-certificates/6692.pem
	* I0310 20:14:12.362001    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6692.pem
	* I0310 20:14:12.398730    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6692.pem /etc/ssl/certs/51391683.0"
	* I0310 20:14:12.436152    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2512.pem && ln -fs /usr/share/ca-certificates/2512.pem /etc/ssl/certs/2512.pem"
	* I0310 20:14:12.492291    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2512.pem
	* I0310 20:14:12.508167    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:32 /usr/share/ca-certificates/2512.pem
	* I0310 20:14:12.529110    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2512.pem
	* I0310 20:14:12.563676    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2512.pem /etc/ssl/certs/51391683.0"
	* I0310 20:14:12.599822    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/800.pem && ln -fs /usr/share/ca-certificates/800.pem /etc/ssl/certs/800.pem"
	* I0310 20:14:12.635122    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/800.pem
	* I0310 20:14:12.653182    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 24 01:48 /usr/share/ca-certificates/800.pem
	* I0310 20:14:12.666615    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/800.pem
	* I0310 20:14:12.706568    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/800.pem /etc/ssl/certs/51391683.0"
	* I0310 20:14:12.747442    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/352.pem && ln -fs /usr/share/ca-certificates/352.pem /etc/ssl/certs/352.pem"
	* I0310 20:14:12.787870    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/352.pem
	* I0310 20:14:12.805673    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 12 14:51 /usr/share/ca-certificates/352.pem
	* I0310 20:14:12.824031    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/352.pem
	* I0310 20:14:12.857361    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/352.pem /etc/ssl/certs/51391683.0"
	* I0310 20:14:12.898359    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6492.pem && ln -fs /usr/share/ca-certificates/6492.pem /etc/ssl/certs/6492.pem"
	* I0310 20:14:12.935851    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6492.pem
	* I0310 20:14:12.953132    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 01:11 /usr/share/ca-certificates/6492.pem
	* I0310 20:14:12.971105    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6492.pem
	* I0310 20:14:13.006113    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6492.pem /etc/ssl/certs/51391683.0"
	* I0310 20:14:13.057358    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6856.pem && ln -fs /usr/share/ca-certificates/6856.pem /etc/ssl/certs/6856.pem"
	* I0310 20:14:13.094118    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6856.pem
	* I0310 20:14:13.112332    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 00:21 /usr/share/ca-certificates/6856.pem
	* I0310 20:14:13.125605    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6856.pem
	* I0310 20:14:13.161669    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6856.pem /etc/ssl/certs/51391683.0"
	* I0310 20:14:13.204212    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1728.pem && ln -fs /usr/share/ca-certificates/1728.pem /etc/ssl/certs/1728.pem"
	* I0310 20:14:13.255196    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1728.pem
	* I0310 20:14:13.271225    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 20:57 /usr/share/ca-certificates/1728.pem
	* I0310 20:14:13.280196    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1728.pem
	* I0310 20:14:13.312183    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0"
	* I0310 20:14:13.349260    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6552.pem && ln -fs /usr/share/ca-certificates/6552.pem /etc/ssl/certs/6552.pem"
	* I0310 20:14:13.388874    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6552.pem
	* I0310 20:14:13.410157    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 19 22:08 /usr/share/ca-certificates/6552.pem
	* I0310 20:14:13.422738    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6552.pem
	* I0310 20:14:13.469321    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6552.pem /etc/ssl/certs/51391683.0"
	* I0310 20:14:13.505386    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7452.pem && ln -fs /usr/share/ca-certificates/7452.pem /etc/ssl/certs/7452.pem"
	* I0310 20:14:13.548819    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7452.pem
	* I0310 20:14:13.570816    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 20 00:41 /usr/share/ca-certificates/7452.pem
	* I0310 20:14:13.585340    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7452.pem
	* I0310 20:14:13.622081    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7452.pem /etc/ssl/certs/51391683.0"
	* I0310 20:14:13.666726    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10992.pem && ln -fs /usr/share/ca-certificates/10992.pem /etc/ssl/certs/10992.pem"
	* I0310 20:14:13.706245    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/10992.pem
	* I0310 20:14:13.723075    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 21:44 /usr/share/ca-certificates/10992.pem
	* I0310 20:14:13.734606    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10992.pem
	* I0310 20:14:13.765780    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10992.pem /etc/ssl/certs/51391683.0"
	* I0310 20:14:13.804117    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3516.pem && ln -fs /usr/share/ca-certificates/3516.pem /etc/ssl/certs/3516.pem"
	* I0310 20:14:13.842627    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3516.pem
	* I0310 20:14:13.858877    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 19:10 /usr/share/ca-certificates/3516.pem
	* I0310 20:14:13.868880    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3516.pem
	* I0310 20:14:13.910173    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3516.pem /etc/ssl/certs/51391683.0"
	* I0310 20:14:13.953012    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	* I0310 20:14:13.993891    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	* I0310 20:14:14.014195    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Jan  5 23:33 /usr/share/ca-certificates/minikubeCA.pem
	* I0310 20:14:14.025713    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	* I0310 20:14:14.066024    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	* I0310 20:14:14.116461    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8464.pem && ln -fs /usr/share/ca-certificates/8464.pem /etc/ssl/certs/8464.pem"
	* I0310 20:14:14.153199    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8464.pem
	* I0310 20:14:14.171409    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:32 /usr/share/ca-certificates/8464.pem
	* I0310 20:14:14.181674    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8464.pem
	* I0310 20:14:14.213401    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8464.pem /etc/ssl/certs/51391683.0"
	* I0310 20:14:14.255842    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1140.pem && ln -fs /usr/share/ca-certificates/1140.pem /etc/ssl/certs/1140.pem"
	* I0310 20:14:14.305121    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1140.pem
	* I0310 20:14:14.324426    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 02:25 /usr/share/ca-certificates/1140.pem
	* I0310 20:14:14.338804    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1140.pem
	* I0310 20:14:14.375336    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1140.pem /etc/ssl/certs/51391683.0"
	* I0310 20:14:14.415219    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5396.pem && ln -fs /usr/share/ca-certificates/5396.pem /etc/ssl/certs/5396.pem"
	* I0310 20:14:14.456281    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5396.pem
	* I0310 20:14:14.470562    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  8 23:38 /usr/share/ca-certificates/5396.pem
	* I0310 20:14:14.480171    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5396.pem
	* I0310 20:14:14.525656    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5396.pem /etc/ssl/certs/51391683.0"
	* I0310 20:14:14.566530    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1476.pem && ln -fs /usr/share/ca-certificates/1476.pem /etc/ssl/certs/1476.pem"
	* I0310 20:14:14.609245    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1476.pem
	* I0310 20:14:14.626890    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:50 /usr/share/ca-certificates/1476.pem
	* I0310 20:14:14.647013    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1476.pem
	* I0310 20:14:14.680795    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1476.pem /etc/ssl/certs/51391683.0"
	* I0310 20:14:14.716824    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12056.pem && ln -fs /usr/share/ca-certificates/12056.pem /etc/ssl/certs/12056.pem"
	* I0310 20:14:14.766682    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/12056.pem
	* I0310 20:14:14.782082    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  6 07:21 /usr/share/ca-certificates/12056.pem
	* I0310 20:14:14.793186    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12056.pem
	* I0310 20:14:14.825595    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12056.pem /etc/ssl/certs/51391683.0"
	* I0310 20:14:14.863210    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7432.pem && ln -fs /usr/share/ca-certificates/7432.pem /etc/ssl/certs/7432.pem"
	* I0310 20:14:14.899010    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7432.pem
	* I0310 20:14:14.914954    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 17:58 /usr/share/ca-certificates/7432.pem
	* I0310 20:14:14.932719    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7432.pem
	* I0310 20:14:14.971019    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7432.pem /etc/ssl/certs/51391683.0"
	* I0310 20:14:15.015142    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8748.pem && ln -fs /usr/share/ca-certificates/8748.pem /etc/ssl/certs/8748.pem"
	* I0310 20:14:15.056138    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8748.pem
	* I0310 20:14:15.071141    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 19:09 /usr/share/ca-certificates/8748.pem
	* I0310 20:14:15.083100    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8748.pem
	* I0310 20:14:15.127232    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8748.pem /etc/ssl/certs/51391683.0"
	* I0310 20:14:15.171151    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9520.pem && ln -fs /usr/share/ca-certificates/9520.pem /etc/ssl/certs/9520.pem"
	* I0310 20:14:15.218572    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9520.pem
	* I0310 20:14:15.237419    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 14:54 /usr/share/ca-certificates/9520.pem
	* I0310 20:14:15.250398    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9520.pem
	* I0310 20:14:15.285265    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9520.pem /etc/ssl/certs/51391683.0"
	* I0310 20:14:15.337887    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5040.pem && ln -fs /usr/share/ca-certificates/5040.pem /etc/ssl/certs/5040.pem"
	* I0310 20:14:15.388176    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5040.pem
	* I0310 20:14:15.411199    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 08:36 /usr/share/ca-certificates/5040.pem
	* I0310 20:14:15.420206    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5040.pem
	* I0310 20:14:15.462929    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5040.pem /etc/ssl/certs/51391683.0"
	* I0310 20:14:15.504541    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4944.pem && ln -fs /usr/share/ca-certificates/4944.pem /etc/ssl/certs/4944.pem"
	* I0310 20:14:15.543376    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4944.pem
	* I0310 20:14:15.562408    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  9 23:40 /usr/share/ca-certificates/4944.pem
	* I0310 20:14:15.574063    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4944.pem
	* I0310 20:14:15.613457    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4944.pem /etc/ssl/certs/51391683.0"
	* I0310 20:14:15.649170    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5372.pem && ln -fs /usr/share/ca-certificates/5372.pem /etc/ssl/certs/5372.pem"
	* I0310 20:14:15.685165    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5372.pem
	* I0310 20:14:15.705598    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 23 00:40 /usr/share/ca-certificates/5372.pem
	* I0310 20:14:15.715003    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5372.pem
	* I0310 20:14:15.750768    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5372.pem /etc/ssl/certs/51391683.0"
	* I0310 20:14:15.792129    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7160.pem && ln -fs /usr/share/ca-certificates/7160.pem /etc/ssl/certs/7160.pem"
	* I0310 20:14:15.832358    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7160.pem
	* I0310 20:14:15.848906    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 12 04:51 /usr/share/ca-certificates/7160.pem
	* I0310 20:14:15.860824    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7160.pem
	* I0310 20:14:15.892862    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7160.pem /etc/ssl/certs/51391683.0"
	* I0310 20:14:15.945698    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6496.pem && ln -fs /usr/share/ca-certificates/6496.pem /etc/ssl/certs/6496.pem"
	* I0310 20:14:15.989462    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6496.pem
	* I0310 20:14:16.010197    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 19:16 /usr/share/ca-certificates/6496.pem
	* I0310 20:14:16.020001    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6496.pem
	* I0310 20:14:16.055269    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6496.pem /etc/ssl/certs/51391683.0"
	* I0310 20:14:16.091514    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6368.pem && ln -fs /usr/share/ca-certificates/6368.pem /etc/ssl/certs/6368.pem"
	* I0310 20:14:16.126193    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6368.pem
	* I0310 20:14:16.144674    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 19:11 /usr/share/ca-certificates/6368.pem
	* I0310 20:14:16.154832    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6368.pem
	* I0310 20:14:16.188121    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6368.pem /etc/ssl/certs/51391683.0"
	* I0310 20:14:16.225969    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3056.pem && ln -fs /usr/share/ca-certificates/3056.pem /etc/ssl/certs/3056.pem"
	* I0310 20:14:16.268671    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3056.pem
	* I0310 20:14:16.287288    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:11 /usr/share/ca-certificates/3056.pem
	* I0310 20:14:16.298714    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3056.pem
	* I0310 20:14:16.330581    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3056.pem /etc/ssl/certs/51391683.0"
	* I0310 20:14:16.370958    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3920.pem && ln -fs /usr/share/ca-certificates/3920.pem /etc/ssl/certs/3920.pem"
	* I0310 20:14:16.408909    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3920.pem
	* I0310 20:14:16.426295    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 22:06 /usr/share/ca-certificates/3920.pem
	* I0310 20:14:16.437441    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3920.pem
	* I0310 20:14:16.469704    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3920.pem /etc/ssl/certs/51391683.0"
	* I0310 20:14:16.508642    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4052.pem && ln -fs /usr/share/ca-certificates/4052.pem /etc/ssl/certs/4052.pem"
	* I0310 20:14:16.550477    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4052.pem
	* I0310 20:14:16.570891    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 18:40 /usr/share/ca-certificates/4052.pem
	* I0310 20:14:16.582048    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4052.pem
	* I0310 20:14:16.612756    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4052.pem /etc/ssl/certs/51391683.0"
	* I0310 20:14:16.660920    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1984.pem && ln -fs /usr/share/ca-certificates/1984.pem /etc/ssl/certs/1984.pem"
	* I0310 20:14:16.701604    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1984.pem
	* I0310 20:14:16.722393    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 21:55 /usr/share/ca-certificates/1984.pem
	* I0310 20:14:16.743691    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1984.pem
	* I0310 20:14:16.784626    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1984.pem /etc/ssl/certs/51391683.0"
	* I0310 20:14:16.836746    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/232.pem && ln -fs /usr/share/ca-certificates/232.pem /etc/ssl/certs/232.pem"
	* I0310 20:14:16.874703    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/232.pem
	* I0310 20:14:16.891594    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 28 02:13 /usr/share/ca-certificates/232.pem
	* I0310 20:14:16.901130    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/232.pem
	* I0310 20:14:16.933771    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/232.pem /etc/ssl/certs/51391683.0"
	* I0310 20:14:16.973565    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4452.pem && ln -fs /usr/share/ca-certificates/4452.pem /etc/ssl/certs/4452.pem"
	* I0310 20:14:17.010025    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4452.pem
	* I0310 20:14:17.029031    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 22 23:48 /usr/share/ca-certificates/4452.pem
	* I0310 20:14:17.041729    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4452.pem
	* I0310 20:14:17.073090    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4452.pem /etc/ssl/certs/51391683.0"
	* I0310 20:14:17.113389    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1156.pem && ln -fs /usr/share/ca-certificates/1156.pem /etc/ssl/certs/1156.pem"
	* I0310 20:14:17.156663    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1156.pem
	* I0310 20:14:17.172504    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 00:26 /usr/share/ca-certificates/1156.pem
	* I0310 20:14:17.183223    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1156.pem
	* I0310 20:14:17.217048    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1156.pem /etc/ssl/certs/51391683.0"
	* I0310 20:14:17.266041    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7024.pem && ln -fs /usr/share/ca-certificates/7024.pem /etc/ssl/certs/7024.pem"
	* I0310 20:14:17.304835    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7024.pem
	* I0310 20:14:17.322549    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 23:11 /usr/share/ca-certificates/7024.pem
	* I0310 20:14:17.335431    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7024.pem
	* I0310 20:14:17.366604    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7024.pem /etc/ssl/certs/51391683.0"
	* I0310 20:14:17.405707    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5172.pem && ln -fs /usr/share/ca-certificates/5172.pem /etc/ssl/certs/5172.pem"
	* I0310 20:14:17.443612    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5172.pem
	* I0310 20:14:17.459107    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 26 21:25 /usr/share/ca-certificates/5172.pem
	* I0310 20:14:17.468967    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5172.pem
	* I0310 20:14:17.506004    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5172.pem /etc/ssl/certs/51391683.0"
	* I0310 20:14:17.544933    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5736.pem && ln -fs /usr/share/ca-certificates/5736.pem /etc/ssl/certs/5736.pem"
	* I0310 20:14:17.585264    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5736.pem
	* I0310 20:14:17.604029    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 25 23:18 /usr/share/ca-certificates/5736.pem
	* I0310 20:14:17.619447    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5736.pem
	* I0310 20:14:17.653682    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5736.pem /etc/ssl/certs/51391683.0"
	* I0310 20:14:17.703643    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9088.pem && ln -fs /usr/share/ca-certificates/9088.pem /etc/ssl/certs/9088.pem"
	* I0310 20:14:17.743520    7892 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9088.pem
	* I0310 20:14:17.761756    7892 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 00:22 /usr/share/ca-certificates/9088.pem
	* I0310 20:14:17.782639    7892 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9088.pem
	* I0310 20:14:17.830618    7892 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9088.pem /etc/ssl/certs/51391683.0"
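The block above repeats one routine per extra CA file: verify the `.pem`, hash it with `openssl x509 -hash`, then symlink it into `/etc/ssl/certs` under the subject-hash name (the `51391683.0` pattern, guarded by `test -L … ||` so an existing link is kept). A minimal sketch of that convention, assuming `openssl` is on PATH and using a throwaway self-signed certificate in a temp dir rather than minikube's test certs:

```shell
#!/usr/bin/env sh
# Sketch of the hash-and-symlink routine the log repeats per .pem file.
# Assumes `openssl` is installed; the temp dir stands in for /etc/ssl/certs.
set -e
dir=$(mktemp -d)
# A throwaway self-signed certificate in place of a real test cert.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example" \
  -keyout "$dir/key.pem" -out "$dir/cert.pem" -days 1 2>/dev/null
# OpenSSL looks CA certs up by subject-name hash, so the link is named
# "<hash>.0" -- the same naming scheme seen throughout the log.
hash=$(openssl x509 -hash -noout -in "$dir/cert.pem")
ln -fs "$dir/cert.pem" "$dir/$hash.0"
ls -l "$dir/$hash.0"
```

In the log every test cert maps to the same hash bucket, which is why the same `51391683.0` target recurs; the `test -L … ||` guard means only the first link for that bucket is created.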
	* I0310 20:14:17.857629    7892 kubeadm.go:385] StartCluster: {Name:skaffold-20210310201235-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2600 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:skaffold-20210310201235-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.97 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 20:14:17.874497    7892 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format=
	* I0310 20:14:18.004383    7892 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	* I0310 20:14:18.061411    7892 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	* I0310 20:14:18.094006    7892 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	* I0310 20:14:18.110131    7892 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	* I0310 20:14:18.142280    7892 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	* stdout:
	* 
	* stderr:
	* ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	* I0310 20:14:18.142435    7892 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	* I0310 20:14:46.933718    7892 out.go:150]   - Generating certificates and keys ...
	* I0310 20:14:46.941946    7892 out.go:150]   - Booting up control plane ...
	* I0310 20:14:46.947234    7892 out.go:150]   - Configuring RBAC rules ...
	* I0310 20:14:46.950933    7892 cni.go:74] Creating CNI manager for ""
	* I0310 20:14:46.950933    7892 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	* I0310 20:14:46.950933    7892 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	* I0310 20:14:46.961825    7892 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.18.1 minikube.k8s.io/commit=4d52d607da107e3e4541e40b81376520ee87d4c2 minikube.k8s.io/name=skaffold-20210310201235-6496 minikube.k8s.io/updated_at=2021_03_10T20_14_46_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	* I0310 20:14:46.966859    7892 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	* I0310 20:14:47.060717    7892 ops.go:34] apiserver oom_adj: -16
	* I0310 20:14:48.560467    7892 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.18.1 minikube.k8s.io/commit=4d52d607da107e3e4541e40b81376520ee87d4c2 minikube.k8s.io/name=skaffold-20210310201235-6496 minikube.k8s.io/updated_at=2021_03_10T20_14_46_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (1.5986492s)
	* I0310 20:14:48.560467    7892 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (1.5936153s)
	* I0310 20:14:48.560467    7892 kubeadm.go:995] duration metric: took 1.6095411s to wait for elevateKubeSystemPrivileges.
	* I0310 20:14:48.560467    7892 kubeadm.go:387] StartCluster complete in 30.7029874s
	* I0310 20:14:48.560725    7892 settings.go:142] acquiring lock: {Name:mk153ab5d002fd4991700e22f3eda9a43ee295f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 20:14:48.561038    7892 settings.go:150] Updating kubeconfig:  C:\Users\jenkins/.kube/config
	* I0310 20:14:48.566553    7892 lock.go:36] WriteFile acquiring C:\Users\jenkins/.kube/config: {Name:mk815881434fcb75cb1a8bc7f2c24cf067660bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 20:14:48.686290    7892 kapi.go:233] deployment "coredns" in namespace "kube-system" and context "skaffold-20210310201235-6496" rescaled to 1
	* I0310 20:14:48.686816    7892 addons.go:381] enableAddons start: toEnable=map[], additional=[]
	* I0310 20:14:48.686816    7892 start.go:203] Will wait 6m0s for node up to 
	* I0310 20:14:48.686816    7892 addons.go:58] Setting storage-provisioner=true in profile "skaffold-20210310201235-6496"
	* I0310 20:14:48.686816    7892 addons.go:134] Setting addon storage-provisioner=true in "skaffold-20210310201235-6496"
	* W0310 20:14:48.686816    7892 addons.go:143] addon storage-provisioner should already be in state true
	* I0310 20:14:48.687122    7892 addons.go:58] Setting default-storageclass=true in profile "skaffold-20210310201235-6496"
	* I0310 20:14:48.687122    7892 addons.go:284] enableOrDisableStorageClasses default-storageclass=true on "skaffold-20210310201235-6496"
	* I0310 20:14:48.695631    7892 out.go:129] * Verifying Kubernetes components...
	* I0310 20:14:48.687416    7892 host.go:66] Checking if "skaffold-20210310201235-6496" exists ...
	* I0310 20:14:48.688525    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304002630-1156 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156
	* I0310 20:14:48.688525    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115023213-8464 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464
	* I0310 20:14:48.688525    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210213143925-7440 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440
	* I0310 20:14:48.688525    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210225231842-5736 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736
	* I0310 20:14:48.688525    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304184021-4052 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052
	* I0310 20:14:48.688525    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107002220-9088 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088
	* I0310 20:14:48.688525    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120231122-7024 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024
	* I0310 20:14:48.688525    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310191609-6496 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496
	* I0310 20:14:48.688525    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219145454-9520 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520
	* I0310 20:14:48.688525    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210112045103-7160 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160
	* I0310 20:14:48.688525    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120022529-1140 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140
	* I0310 20:14:48.688525    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120175851-7432 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432
	* I0310 20:14:48.688525    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310083645-5040 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040
	* I0310 20:14:48.688525    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115191024-3516 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516
	* I0310 20:14:48.688525    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120214442-10992 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992
	* I0310 20:14:48.688525    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210303214129-4588 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588
	* I0310 20:14:48.688525    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210309234032-4944 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944
	* I0310 20:14:48.688813    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210126212539-5172 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172
	* I0310 20:14:48.688813    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210306072141-12056 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056
	* I0310 20:14:48.688813    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210105233232-2512 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512
	* I0310 20:14:48.688813    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107190945-8748 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748
	* I0310 20:14:48.688813    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210212145109-352 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352
	* I0310 20:14:48.688813    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210114204234-6692 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692
	* I0310 20:14:48.688813    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210119220838-6552 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552
	* I0310 20:14:48.688813    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210123004019-5372 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372
	* I0310 20:14:48.688813    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210224014800-800 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800
	* I0310 20:14:48.688813    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210308233820-5396 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396
	* I0310 20:14:48.688813    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210128021318-232 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232
	* I0310 20:14:48.688813    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106002159-6856 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856
	* I0310 20:14:48.688813    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106011107-6492 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492
	* I0310 20:14:48.688813    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210301195830-5700 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700
	* I0310 20:14:48.688813    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106215525-1984 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984
	* I0310 20:14:48.688813    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219220622-3920 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920
	* I0310 20:14:48.688525    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210220004129-7452 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452
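The run of `windows sanitize` lines above maps cached image tags onto file names: `:` is not a legal character in Windows file names, so the tag separator becomes `_`. A one-line sketch of that substitution (the rule is inferred from the log output, not taken from minikube's `localpath.go` source):

```shell
#!/usr/bin/env sh
# "windows sanitize" sketch: replace the ':' in an image tag with '_'
# so the name is usable as a Windows cache file name.
img="minikube-local-cache-test:functional-20210304002630-1156"
sanitized=$(printf '%s' "$img" | tr ':' '_')
echo "$sanitized"
# prints: minikube-local-cache-test_functional-20210304002630-1156
```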
	* I0310 20:14:48.742996    7892 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	* I0310 20:14:48.749303    7892 cli_runner.go:115] Run: docker container inspect skaffold-20210310201235-6496 --format=
	* I0310 20:14:48.797505    7892 cli_runner.go:115] Run: docker container inspect skaffold-20210310201235-6496 --format=
	* I0310 20:14:49.002155    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:14:49.795455    7892 cache.go:93] acquiring lock: {Name:mkd8c6f272dd5cb91af2d272705820baa75c5410 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:14:49.796660    7892 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 exists
	* I0310 20:14:49.797341    7892 cache.go:82] cache image "minikube-local-cache-test:functional-20210120214442-10992" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120214442-10992" took 1.0929475s
	* I0310 20:14:49.797696    7892 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120214442-10992 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 succeeded
	* I0310 20:14:49.821771    7892 cache.go:93] acquiring lock: {Name:mkab31196e3bf71b9c1e6a1e38e57ec6fb030bbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:14:49.822828    7892 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 exists
	* I0310 20:14:49.823706    7892 cache.go:82] cache image "minikube-local-cache-test:functional-20210220004129-7452" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210220004129-7452" took 1.0809322s
	* I0310 20:14:49.823706    7892 cache.go:66] save to tar file minikube-local-cache-test:functional-20210220004129-7452 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 succeeded
	* I0310 20:14:49.831148    7892 cache.go:93] acquiring lock: {Name:mkad0f7b57f74c6c730129cb06800211b2e1dbab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:14:49.831799    7892 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 exists
	* I0310 20:14:49.832269    7892 cache.go:82] cache image "minikube-local-cache-test:functional-20210120022529-1140" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120022529-1140" took 1.1288671s
	* I0310 20:14:49.832269    7892 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120022529-1140 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 succeeded
	* I0310 20:14:49.867045    7892 cache.go:93] acquiring lock: {Name:mk5de4935501776b790bd29801e913c817cce9cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:14:49.868649    7892 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 exists
	* I0310 20:14:49.869197    7892 cache.go:82] cache image "minikube-local-cache-test:functional-20210123004019-5372" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210123004019-5372" took 1.1377748s
	* I0310 20:14:49.869197    7892 cache.go:66] save to tar file minikube-local-cache-test:functional-20210123004019-5372 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 succeeded
	* I0310 20:14:49.880350    7892 cache.go:93] acquiring lock: {Name:mk74beba772a17b6c0792b37e1f3c84b8ae19a48 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:14:49.880705    7892 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 exists
	* I0310 20:14:49.881233    7892 cache.go:82] cache image "minikube-local-cache-test:functional-20210119220838-6552" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210119220838-6552" took 1.1634149s
	* I0310 20:14:49.881233    7892 cache.go:66] save to tar file minikube-local-cache-test:functional-20210119220838-6552 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 succeeded
	* I0310 20:14:49.885599    7892 cache.go:93] acquiring lock: {Name:mkf74fc1bdd437dc31195924ffc024252ed6282c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:14:49.886180    7892 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 exists
	* I0310 20:14:49.886332    7892 cache.go:82] cache image "minikube-local-cache-test:functional-20210304002630-1156" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210304002630-1156" took 1.1907069s
	* I0310 20:14:49.886332    7892 cache.go:66] save to tar file minikube-local-cache-test:functional-20210304002630-1156 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 succeeded
	* I0310 20:14:49.888650    7892 cache.go:93] acquiring lock: {Name:mk67b81c694fa10d152b7bddece57d430edf9ebf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:14:49.889291    7892 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 exists
	* I0310 20:14:49.889742    7892 cache.go:82] cache image "minikube-local-cache-test:functional-20210308233820-5396" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210308233820-5396" took 1.1577888s
	* I0310 20:14:49.889742    7892 cache.go:66] save to tar file minikube-local-cache-test:functional-20210308233820-5396 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 succeeded
	* I0310 20:14:49.943079    7892 cache.go:93] acquiring lock: {Name:mk30e0addf8d941e729fce2e9e6e58f4831fa9bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:14:49.943412    7892 cache.go:93] acquiring lock: {Name:mkfbc537176e4a7054a8ff78a35c4c45ad4889d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:14:49.943850    7892 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 exists
	* I0310 20:14:49.944067    7892 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 exists
	* I0310 20:14:49.944342    7892 cache.go:82] cache image "minikube-local-cache-test:functional-20210115023213-8464" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210115023213-8464" took 1.2478087s
	* I0310 20:14:49.944342    7892 cache.go:66] save to tar file minikube-local-cache-test:functional-20210115023213-8464 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 succeeded
	* I0310 20:14:49.945377    7892 cache.go:82] cache image "minikube-local-cache-test:functional-20210310191609-6496" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210310191609-6496" took 1.2442702s
	* I0310 20:14:49.945377    7892 cache.go:66] save to tar file minikube-local-cache-test:functional-20210310191609-6496 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 succeeded
	* I0310 20:14:49.959238    7892 cache.go:93] acquiring lock: {Name:mka2d29141752ca0c15ce625b99d3e259a454634 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:14:49.960018    7892 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 exists
	* I0310 20:14:49.960883    7892 cache.go:82] cache image "minikube-local-cache-test:functional-20210105233232-2512" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210105233232-2512" took 1.2466879s
	* I0310 20:14:49.960883    7892 cache.go:66] save to tar file minikube-local-cache-test:functional-20210105233232-2512 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 succeeded
	* I0310 20:14:49.973477    7892 cache.go:93] acquiring lock: {Name:mkb0cb73f942a657cd3f168830d30cb3598567a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:14:49.973477    7892 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 exists
	* I0310 20:14:49.973477    7892 cache.go:82] cache image "minikube-local-cache-test:functional-20210306072141-12056" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210306072141-12056" took 1.2600685s
	* I0310 20:14:49.973477    7892 cache.go:66] save to tar file minikube-local-cache-test:functional-20210306072141-12056 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 succeeded
	* I0310 20:14:49.978378    7892 cache.go:93] acquiring lock: {Name:mk413751f23d1919a2f2162501025c6af3a2ad81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:14:49.978873    7892 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 exists
	* I0310 20:14:49.979088    7892 cache.go:82] cache image "minikube-local-cache-test:functional-20210106002159-6856" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106002159-6856" took 1.2451141s
	* I0310 20:14:49.979088    7892 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106002159-6856 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 succeeded
	* I0310 20:14:50.000961    7892 cache.go:93] acquiring lock: {Name:mkf6f90f079186654799fde8101b48612aa6f339 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:14:50.001247    7892 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 exists
	* I0310 20:14:50.002222    7892 cache.go:82] cache image "minikube-local-cache-test:functional-20210212145109-352" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210212145109-352" took 1.2875006s
	* I0310 20:14:50.002222    7892 cache.go:66] save to tar file minikube-local-cache-test:functional-20210212145109-352 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 succeeded
	* I0310 20:14:50.009702    7892 cache.go:93] acquiring lock: {Name:mkcc9db267470950a8bd1fd66660e4d7ce7fb11a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:14:50.010356    7892 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 exists
	* I0310 20:14:50.010917    7892 cache.go:82] cache image "minikube-local-cache-test:functional-20210120175851-7432" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120175851-7432" took 1.3075154s
	* I0310 20:14:50.010917    7892 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120175851-7432 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 succeeded
	* I0310 20:14:50.033389    7892 cache.go:93] acquiring lock: {Name:mk5795abf13cc8b7192a417aee0e32dee2b0467c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:14:50.033902    7892 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 exists
	* I0310 20:14:50.034246    7892 cache.go:82] cache image "minikube-local-cache-test:functional-20210126212539-5172" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210126212539-5172" took 1.328035s
	* I0310 20:14:50.034246    7892 cache.go:66] save to tar file minikube-local-cache-test:functional-20210126212539-5172 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 succeeded
	* I0310 20:14:50.043525    7892 cache.go:93] acquiring lock: {Name:mkc9a1c11079e53fedb3439203deb8305be63b2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:14:50.043871    7892 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 exists
	* I0310 20:14:50.044558    7892 cache.go:82] cache image "minikube-local-cache-test:functional-20210303214129-4588" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210303214129-4588" took 1.3401661s
	* I0310 20:14:50.044731    7892 cache.go:66] save to tar file minikube-local-cache-test:functional-20210303214129-4588 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 succeeded
	* I0310 20:14:50.057126    7892 cache.go:93] acquiring lock: {Name:mk5d79a216b121a22277fa476959e69d0268a006 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:14:50.057815    7892 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 exists
	* I0310 20:14:50.058394    7892 cache.go:82] cache image "minikube-local-cache-test:functional-20210224014800-800" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210224014800-800" took 1.3269729s
	* I0310 20:14:50.058394    7892 cache.go:66] save to tar file minikube-local-cache-test:functional-20210224014800-800 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 succeeded
	* I0310 20:14:50.066893    7892 cache.go:93] acquiring lock: {Name:mk6a939d4adc5b1a82c643cd3a34748a52c3e47d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:14:50.067799    7892 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 exists
	* I0310 20:14:50.068632    7892 cache.go:82] cache image "minikube-local-cache-test:functional-20210112045103-7160" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210112045103-7160" took 1.366464s
	* I0310 20:14:50.068632    7892 cache.go:66] save to tar file minikube-local-cache-test:functional-20210112045103-7160 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 succeeded
	* I0310 20:14:50.070900    7892 cache.go:93] acquiring lock: {Name:mk3f9eb5a6922e3da2b5e642fe1460b5c7a33453 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:14:50.071997    7892 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 exists
	* I0310 20:14:50.072242    7892 cache.go:82] cache image "minikube-local-cache-test:functional-20210107190945-8748" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210107190945-8748" took 1.3575203s
	* I0310 20:14:50.072242    7892 cache.go:66] save to tar file minikube-local-cache-test:functional-20210107190945-8748 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 succeeded
	* I0310 20:14:50.073539    7892 cache.go:93] acquiring lock: {Name:mk1b277a131d0149dc1f34c6a5df09591c284c3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:14:50.073876    7892 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 exists
	* I0310 20:14:50.074574    7892 cache.go:82] cache image "minikube-local-cache-test:functional-20210128021318-232" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210128021318-232" took 1.3414258s
	* I0310 20:14:50.074574    7892 cache.go:66] save to tar file minikube-local-cache-test:functional-20210128021318-232 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 succeeded
	* I0310 20:14:50.085836    7892 cache.go:93] acquiring lock: {Name:mk0c64ba734a0cdbeae55b08bb0b1b6723a680c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:14:50.086403    7892 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 exists
	* I0310 20:14:50.087132    7892 cache.go:93] acquiring lock: {Name:mk6cdb668632330066d74bea74662e26e6c7633f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:14:50.087526    7892 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 exists
	* I0310 20:14:50.087526    7892 cache.go:82] cache image "minikube-local-cache-test:functional-20210310083645-5040" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210310083645-5040" took 1.383505s
	* I0310 20:14:50.087879    7892 cache.go:66] save to tar file minikube-local-cache-test:functional-20210310083645-5040 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 succeeded
	* I0310 20:14:50.088206    7892 cache.go:82] cache image "minikube-local-cache-test:functional-20210106215525-1984" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106215525-1984" took 1.3506558s
	* I0310 20:14:50.088421    7892 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106215525-1984 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 succeeded
	* I0310 20:14:50.097510    7892 cache.go:93] acquiring lock: {Name:mkb552f0ca2d9ea9965feba56885295e4020632a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:14:50.098832    7892 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 exists
	* I0310 20:14:50.099140    7892 cache.go:82] cache image "minikube-local-cache-test:functional-20210106011107-6492" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106011107-6492" took 1.3643043s
	* I0310 20:14:50.099140    7892 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106011107-6492 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 succeeded
	* I0310 20:14:50.108628    7892 cache.go:93] acquiring lock: {Name:mkfe8ccab311cf6d2666a7508a8e979857b9770b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:14:50.109083    7892 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 exists
	* I0310 20:14:50.109083    7892 cache.go:82] cache image "minikube-local-cache-test:functional-20210219145454-9520" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210219145454-9520" took 1.4075498s
	* I0310 20:14:50.109083    7892 cache.go:66] save to tar file minikube-local-cache-test:functional-20210219145454-9520 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 succeeded
	* I0310 20:14:50.120481    7892 cache.go:93] acquiring lock: {Name:mkf96894dc732adcd1c856f98a56d65b2646f03e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:14:50.120946    7892 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 exists
	* I0310 20:14:50.121868    7892 cache.go:82] cache image "minikube-local-cache-test:functional-20210115191024-3516" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210115191024-3516" took 1.4171533s
	* I0310 20:14:50.121868    7892 cache.go:66] save to tar file minikube-local-cache-test:functional-20210115191024-3516 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 succeeded
	* I0310 20:14:50.124222    7892 cache.go:93] acquiring lock: {Name:mk3b31b5d9c66e58bae5a84d594af5a71c06fef6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:14:50.124222    7892 cache.go:93] acquiring lock: {Name:mk9829358ec5b615719a34ef2b4c8c5314131bbf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:14:50.124472    7892 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 exists
	* I0310 20:14:50.124472    7892 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 exists
	* I0310 20:14:50.124763    7892 cache.go:82] cache image "minikube-local-cache-test:functional-20210309234032-4944" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210309234032-4944" took 1.4195545s
	* I0310 20:14:50.124763    7892 cache.go:66] save to tar file minikube-local-cache-test:functional-20210309234032-4944 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 succeeded
	* I0310 20:14:50.124763    7892 cache.go:82] cache image "minikube-local-cache-test:functional-20210114204234-6692" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210114204234-6692" took 1.4095201s
	* I0310 20:14:50.124763    7892 cache.go:66] save to tar file minikube-local-cache-test:functional-20210114204234-6692 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 succeeded
	* I0310 20:14:50.132168    7892 cache.go:93] acquiring lock: {Name:mk17b3617b8bc7c68f0fe3347037485ee44000e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:14:50.133365    7892 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 exists
	* I0310 20:14:50.133832    7892 cache.go:82] cache image "minikube-local-cache-test:functional-20210225231842-5736" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210225231842-5736" took 1.437268s
	* I0310 20:14:50.133832    7892 cache.go:66] save to tar file minikube-local-cache-test:functional-20210225231842-5736 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 succeeded
	* I0310 20:14:50.135170    7892 cache.go:93] acquiring lock: {Name:mk84b2a6095b735cf889c519b5874f080b2e195a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:14:50.135751    7892 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 exists
	* I0310 20:14:50.136112    7892 cache.go:82] cache image "minikube-local-cache-test:functional-20210219220622-3920" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210219220622-3920" took 1.3977138s
	* I0310 20:14:50.136112    7892 cache.go:66] save to tar file minikube-local-cache-test:functional-20210219220622-3920 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 succeeded
	* I0310 20:14:50.138834    7892 cache.go:93] acquiring lock: {Name:mk634154e9c95d6e5b156154f097cbabdedf9f3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:14:50.139601    7892 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 exists
	* I0310 20:14:50.139601    7892 cache.go:82] cache image "minikube-local-cache-test:functional-20210301195830-5700" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210301195830-5700" took 1.4020511s
	* I0310 20:14:50.139601    7892 cache.go:66] save to tar file minikube-local-cache-test:functional-20210301195830-5700 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 succeeded
	* I0310 20:14:50.140604    7892 cache.go:93] acquiring lock: {Name:mkbc5485bf0e792523a58cf470a7622695547966 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:14:50.141293    7892 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 exists
	* I0310 20:14:50.141756    7892 cache.go:82] cache image "minikube-local-cache-test:functional-20210304184021-4052" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210304184021-4052" took 1.4451922s
	* I0310 20:14:50.141756    7892 cache.go:66] save to tar file minikube-local-cache-test:functional-20210304184021-4052 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 succeeded
	* I0310 20:14:50.145594    7892 cache.go:93] acquiring lock: {Name:mkd8dd26dee4471c50a16459e3e56a843fbe7183 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:14:50.146077    7892 cache.go:93] acquiring lock: {Name:mk6e311fb193a5d30b249afa7255673dd7fc56b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:14:50.146391    7892 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 exists
	* I0310 20:14:50.146391    7892 cache.go:82] cache image "minikube-local-cache-test:functional-20210120231122-7024" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120231122-7024" took 1.4467778s
	* I0310 20:14:50.146391    7892 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120231122-7024 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 succeeded
	* I0310 20:14:50.146746    7892 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 exists
	* I0310 20:14:50.146978    7892 cache.go:82] cache image "minikube-local-cache-test:functional-20210107002220-9088" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210107002220-9088" took 1.4491777s
	* I0310 20:14:50.146978    7892 cache.go:66] save to tar file minikube-local-cache-test:functional-20210107002220-9088 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 succeeded
	* I0310 20:14:50.148164    7892 cache.go:93] acquiring lock: {Name:mk5aaf725ee95074b60d5acdb56999da11d0d967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 20:14:50.148771    7892 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 exists
	* I0310 20:14:50.149148    7892 cache.go:82] cache image "minikube-local-cache-test:functional-20210213143925-7440" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210213143925-7440" took 1.4528907s
	* I0310 20:14:50.149148    7892 cache.go:66] save to tar file minikube-local-cache-test:functional-20210213143925-7440 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 succeeded
	* I0310 20:14:50.149148    7892 cache.go:73] Successfully saved all images to host disk.
	* I0310 20:14:50.169026    7892 cli_runner.go:115] Run: docker container inspect skaffold-20210310201235-6496 --format=
	* I0310 20:14:50.458062    7892 cli_runner.go:168] Completed: docker container inspect skaffold-20210310201235-6496 --format=: (1.7087672s)
	* I0310 20:14:50.463484    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.461336s)
	* I0310 20:14:50.484655    7892 cli_runner.go:168] Completed: docker container inspect skaffold-20210310201235-6496 --format=: (1.6871572s)
	* I0310 20:14:50.488408    7892 out.go:129]   - Using image gcr.io/k8s-minikube/storage-provisioner:v4
	* I0310 20:14:50.488408    7892 addons.go:253] installing /etc/kubernetes/addons/storage-provisioner.yaml
	* I0310 20:14:50.488408    7892 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	* I0310 20:14:50.500859    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:14:50.500859    7892 api_server.go:48] waiting for apiserver process to appear ...
	* I0310 20:14:50.504120    7892 addons.go:134] Setting addon default-storageclass=true in "skaffold-20210310201235-6496"
	* W0310 20:14:50.504120    7892 addons.go:143] addon default-storageclass should already be in state true
	* I0310 20:14:50.504380    7892 host.go:66] Checking if "skaffold-20210310201235-6496" exists ...
	* I0310 20:14:50.518915    7892 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 20:14:50.529144    7892 cli_runner.go:115] Run: docker container inspect skaffold-20210310201235-6496 --format=
	* I0310 20:14:50.595553    7892 api_server.go:68] duration metric: took 1.9087458s to wait for apiserver process to appear ...
	* I0310 20:14:50.595553    7892 api_server.go:84] waiting for apiserver healthz status ...
	* I0310 20:14:50.595553    7892 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55071/healthz ...
	* I0310 20:14:50.668261    7892 api_server.go:241] https://127.0.0.1:55071/healthz returned 200:
	* ok
	* I0310 20:14:50.677708    7892 api_server.go:137] control plane version: v1.20.2
	* I0310 20:14:50.677708    7892 api_server.go:127] duration metric: took 82.1551ms to wait for apiserver health ...
	* I0310 20:14:50.677708    7892 system_pods.go:41] waiting for kube-system pods to appear ...
	* I0310 20:14:50.711481    7892 system_pods.go:57] 0 kube-system pods found
	* I0310 20:14:50.711655    7892 retry.go:31] will retry after 263.082536ms: only 0 pod(s) have shown up
	* I0310 20:14:50.814037    7892 ssh_runner.go:149] Run: docker images --format :
	* I0310 20:14:50.821852    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:14:50.988650    7892 system_pods.go:57] 0 kube-system pods found
	* I0310 20:14:50.989497    7892 retry.go:31] will retry after 381.329545ms: only 0 pod(s) have shown up
	* I0310 20:14:51.085608    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:14:51.094654    7892 addons.go:253] installing /etc/kubernetes/addons/storageclass.yaml
	* I0310 20:14:51.094654    7892 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	* I0310 20:14:51.103046    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:14:51.345828    7892 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	* I0310 20:14:51.382413    7892 system_pods.go:57] 0 kube-system pods found
	* I0310 20:14:51.382413    7892 retry.go:31] will retry after 422.765636ms: only 0 pod(s) have shown up
	* I0310 20:14:51.419775    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:14:51.695826    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:14:51.839977    7892 system_pods.go:57] 0 kube-system pods found
	* I0310 20:14:51.839977    7892 retry.go:31] will retry after 473.074753ms: only 0 pod(s) have shown up
	* I0310 20:14:52.186559    7892 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	* I0310 20:14:52.353465    7892 system_pods.go:57] 4 kube-system pods found
	* I0310 20:14:52.353601    7892 system_pods.go:59] "etcd-skaffold-20210310201235-6496" [ffbe7585-067a-453d-94b2-0a608e8e3ed2] Pending
	* I0310 20:14:52.353601    7892 system_pods.go:59] "kube-apiserver-skaffold-20210310201235-6496" [10a55075-3729-466a-87d4-dba6a8d85cdd] Pending
	* I0310 20:14:52.353601    7892 system_pods.go:59] "kube-controller-manager-skaffold-20210310201235-6496" [f16a4ec5-7045-4978-b138-d8d7a60ab325] Pending
	* I0310 20:14:52.353601    7892 system_pods.go:59] "kube-scheduler-skaffold-20210310201235-6496" [d8ff71d4-eb38-4337-be86-612142c21397] Pending
	* I0310 20:14:52.353601    7892 system_pods.go:72] duration metric: took 1.6759006s to wait for pod list to return data ...
	* I0310 20:14:52.353601    7892 kubeadm.go:541] duration metric: took 3.6668015s to wait for : map[apiserver:true system_pods:true] ...
	* I0310 20:14:52.353601    7892 node_conditions.go:101] verifying NodePressure condition ...
	* I0310 20:14:52.368806    7892 node_conditions.go:121] node storage ephemeral capacity is 65792556Ki
	* I0310 20:14:52.368806    7892 node_conditions.go:122] node cpu capacity is 4
	* I0310 20:14:52.368806    7892 node_conditions.go:104] duration metric: took 15.2052ms to run NodePressure ...
	* I0310 20:14:52.368958    7892 start.go:208] waiting for startup goroutines ...
	* I0310 20:14:52.983629    7892 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.6378091s)
	* I0310 20:14:52.983932    7892 ssh_runner.go:189] Completed: docker images --format :: (2.169734s)
	* I0310 20:14:52.983932    7892 docker.go:423] Got preloaded images: -- stdout --
	* k8s.gcr.io/kube-proxy:v1.20.2
	* k8s.gcr.io/kube-apiserver:v1.20.2
	* k8s.gcr.io/kube-controller-manager:v1.20.2
	* k8s.gcr.io/kube-scheduler:v1.20.2
	* kubernetesui/dashboard:v2.1.0
	* gcr.io/k8s-minikube/storage-provisioner:v4
	* k8s.gcr.io/etcd:3.4.13-0
	* k8s.gcr.io/coredns:1.7.0
	* kubernetesui/metrics-scraper:v1.0.4
	* k8s.gcr.io/pause:3.2
	* 
	* -- /stdout --
	* I0310 20:14:52.983932    7892 docker.go:429] minikube-local-cache-test:functional-20210115023213-8464 wasn't preloaded
	* I0310 20:14:52.983932    7892 cache_images.go:76] LoadImages start: [minikube-local-cache-test:functional-20210115023213-8464 minikube-local-cache-test:functional-20210213143925-7440 minikube-local-cache-test:functional-20210225231842-5736 minikube-local-cache-test:functional-20210304184021-4052 minikube-local-cache-test:functional-20210107002220-9088 minikube-local-cache-test:functional-20210120231122-7024 minikube-local-cache-test:functional-20210126212539-5172 minikube-local-cache-test:functional-20210306072141-12056 minikube-local-cache-test:functional-20210105233232-2512 minikube-local-cache-test:functional-20210107190945-8748 minikube-local-cache-test:functional-20210212145109-352 minikube-local-cache-test:functional-20210114204234-6692 minikube-local-cache-test:functional-20210119220838-6552 minikube-local-cache-test:functional-20210123004019-5372 minikube-local-cache-test:functional-20210128021318-232 minikube-local-cache-test:functional-20210301195830-5700 minikube-local-cache-test:functional-20210106002159-6856 minikube-local-cache-test:functional-20210106011107-6492 minikube-local-cache-test:functional-20210106215525-1984 minikube-local-cache-test:functional-20210309234032-4944 minikube-local-cache-test:functional-20210310191609-6496 minikube-local-cache-test:functional-20210219145454-9520 minikube-local-cache-test:functional-20210112045103-7160 minikube-local-cache-test:functional-20210120022529-1140 minikube-local-cache-test:functional-20210120175851-7432 minikube-local-cache-test:functional-20210310083645-5040 minikube-local-cache-test:functional-20210115191024-3516 minikube-local-cache-test:functional-20210120214442-10992 minikube-local-cache-test:functional-20210303214129-4588 minikube-local-cache-test:functional-20210220004129-7452 minikube-local-cache-test:functional-20210224014800-800 minikube-local-cache-test:functional-20210308233820-5396 minikube-local-cache-test:functional-20210219220622-3920 minikube-local-cache-test:functional-20210304002630-1156]
	* I0310 20:14:53.029176    7892 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210310083645-5040
	* I0310 20:14:53.061057    7892 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210126212539-5172
	* I0310 20:14:53.096288    7892 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210219145454-9520
	* I0310 20:14:53.099395    7892 image.go:168] retrieving image: minikube-local-cache-test:functional-20210112045103-7160
	* I0310 20:14:53.115459    7892 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210225231842-5736
	* I0310 20:14:53.120460    7892 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210120175851-7432
	* I0310 20:14:53.124887    7892 image.go:168] retrieving image: minikube-local-cache-test:functional-20210105233232-2512
	* I0310 20:14:53.156379    7892 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210123004019-5372
	* I0310 20:14:53.167378    7892 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210303214129-4588
	* I0310 20:14:53.217206    7892 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210115191024-3516
	* I0310 20:14:53.235153    7892 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210310191609-6496
	* I0310 20:14:53.247432    7892 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210114204234-6692
	* I0310 20:14:53.260904    7892 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210105233232-2512: Error response from daemon: reference does not exist
	* I0310 20:14:53.277701    7892 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210120231122-7024
	* I0310 20:14:53.281242    7892 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210309234032-4944
	* I0310 20:14:53.281242    7892 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210115023213-8464
	* W0310 20:14:53.286317    7892 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 20:14:53.286317    7892 retry.go:31] will retry after 231.159374ms: ssh: rejected: connect failed (open failed)
	* W0310 20:14:53.301525    7892 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 20:14:53.301525    7892 retry.go:31] will retry after 296.705768ms: ssh: rejected: connect failed (open failed)
	* W0310 20:14:53.301525    7892 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 20:14:53.301525    7892 retry.go:31] will retry after 141.409254ms: ssh: rejected: connect failed (open failed)
	* I0310 20:14:53.303637    7892 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210112045103-7160: Error response from daemon: reference does not exist
	* I0310 20:14:53.326653    7892 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210304002630-1156
	* I0310 20:14:53.335674    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:14:53.347657    7892 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210306072141-12056
	* I0310 20:14:53.350656    7892 image.go:168] retrieving image: minikube-local-cache-test:functional-20210107002220-9088
	* I0310 20:14:53.361680    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:14:53.368653    7892 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210219220622-3920
	* I0310 20:14:53.370651    7892 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210119220838-6552
	* I0310 20:14:53.381930    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:14:53.387174    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:14:53.412120    7892 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210301195830-5700
	* I0310 20:14:53.435512    7892 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210308233820-5396
	* I0310 20:14:53.443303    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:14:53.446785    7892 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210220004129-7452
	* I0310 20:14:53.449693    7892 image.go:168] retrieving image: minikube-local-cache-test:functional-20210106002159-6856
	* W0310 20:14:53.455106    7892 image.go:185] authn lookup for minikube-local-cache-test:functional-20210105233232-2512 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	* I0310 20:14:53.455376    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:14:53.468548    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:14:53.478257    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:14:53.506996    7892 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210128021318-232
	* I0310 20:14:53.513791    7892 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210106002159-6856: Error response from daemon: reference does not exist
	* I0310 20:14:53.521902    7892 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210107002220-9088: Error response from daemon: reference does not exist
	* W0310 20:14:53.524645    7892 image.go:185] authn lookup for minikube-local-cache-test:functional-20210112045103-7160 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	* I0310 20:14:53.529644    7892 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210304184021-4052
	* I0310 20:14:53.530574    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:14:53.535894    7892 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210224014800-800
	* I0310 20:14:53.535894    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:14:53.547252    7892 image.go:168] retrieving image: minikube-local-cache-test:functional-20210107190945-8748
	* I0310 20:14:53.547252    7892 image.go:168] retrieving image: minikube-local-cache-test:functional-20210106011107-6492
	* I0310 20:14:53.550430    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:14:53.558637    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:14:53.599798    7892 image.go:168] retrieving image: minikube-local-cache-test:functional-20210106215525-1984
	* I0310 20:14:53.621871    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:14:53.642582    7892 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210106215525-1984: Error response from daemon: reference does not exist
	* I0310 20:14:53.661910    7892 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210212145109-352
	* I0310 20:14:53.708664    7892 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210106011107-6492: Error response from daemon: reference does not exist
	* I0310 20:14:53.712678    7892 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210120214442-10992
	* I0310 20:14:53.714803    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:14:53.723672    7892 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210107190945-8748: Error response from daemon: reference does not exist
	* I0310 20:14:53.725598    7892 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210120022529-1140
	* I0310 20:14:53.783100    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:14:53.786063    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:14:53.794767    7892 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210213143925-7440
	* I0310 20:14:53.841229    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* W0310 20:14:53.862438    7892 image.go:185] authn lookup for minikube-local-cache-test:functional-20210106002159-6856 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	* I0310 20:14:53.880069    7892 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210112045103-7160 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210112045103-7160: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	* I0310 20:14:53.880069    7892 cache_images.go:104] "minikube-local-cache-test:functional-20210112045103-7160" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210112045103-7160
	* I0310 20:14:53.880069    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210112045103-7160 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160
	* I0310 20:14:53.880069    7892 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160
	* I0310 20:14:53.887268    7892 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210105233232-2512 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210105233232-2512: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	* I0310 20:14:53.887439    7892 cache_images.go:104] "minikube-local-cache-test:functional-20210105233232-2512" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210105233232-2512
	* I0310 20:14:53.887439    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210105233232-2512 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512
	* I0310 20:14:53.887439    7892 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512
	* W0310 20:14:53.902634    7892 image.go:185] authn lookup for minikube-local-cache-test:functional-20210107002220-9088 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	* I0310 20:14:53.918098    7892 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160
	* I0310 20:14:53.922082    7892 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512
	* I0310 20:14:53.935620    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:14:53.939013    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:14:54.057843    7892 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210107002220-9088 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210107002220-9088: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	* I0310 20:14:54.057843    7892 cache_images.go:104] "minikube-local-cache-test:functional-20210107002220-9088" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210107002220-9088
	* I0310 20:14:54.057843    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107002220-9088 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088
	* I0310 20:14:54.057843    7892 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088
	* I0310 20:14:54.057843    7892 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210106002159-6856 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210106002159-6856: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	* I0310 20:14:54.057843    7892 cache_images.go:104] "minikube-local-cache-test:functional-20210106002159-6856" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210106002159-6856
	* I0310 20:14:54.057843    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106002159-6856 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856
	* I0310 20:14:54.057843    7892 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856
	* I0310 20:14:54.128773    7892 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088
	* I0310 20:14:54.158389    7892 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856
	* I0310 20:14:54.177749    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* W0310 20:14:54.182517    7892 image.go:185] authn lookup for minikube-local-cache-test:functional-20210106011107-6492 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	* W0310 20:14:54.200514    7892 image.go:185] authn lookup for minikube-local-cache-test:functional-20210106215525-1984 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	* W0310 20:14:54.207663    7892 image.go:185] authn lookup for minikube-local-cache-test:functional-20210107190945-8748 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	* I0310 20:14:54.214612    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:14:54.360038    7892 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210106011107-6492 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210106011107-6492: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	* I0310 20:14:54.360038    7892 cache_images.go:104] "minikube-local-cache-test:functional-20210106011107-6492" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210106011107-6492
	* I0310 20:14:54.360523    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106011107-6492 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492
	* I0310 20:14:54.360523    7892 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492
	* I0310 20:14:54.381491    7892 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210106215525-1984 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210106215525-1984: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	* I0310 20:14:54.381491    7892 cache_images.go:104] "minikube-local-cache-test:functional-20210106215525-1984" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210106215525-1984
	* I0310 20:14:54.381491    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106215525-1984 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984
	* I0310 20:14:54.381491    7892 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984
	* I0310 20:14:54.382238    7892 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210107190945-8748 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210107190945-8748: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	* I0310 20:14:54.382512    7892 cache_images.go:104] "minikube-local-cache-test:functional-20210107190945-8748" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210107190945-8748
	* I0310 20:14:54.382512    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107190945-8748 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748
	* I0310 20:14:54.382512    7892 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748
	* I0310 20:14:54.385816    7892 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492
	* I0310 20:14:54.417738    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:14:54.420124    7892 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984
	* I0310 20:14:54.452774    7892 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748
	* I0310 20:14:54.460369    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:14:54.484968    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:14:54.865564    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.4222679s)
	* I0310 20:14:54.868177    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:14:54.909706    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.5740394s)
	* I0310 20:14:54.909706    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:14:54.923902    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.5417238s)
	* I0310 20:14:54.924187    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:14:54.960076    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.5984035s)
	* I0310 20:14:54.961108    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:14:54.974071    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.586905s)
	* I0310 20:14:54.974071    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:14:55.116331    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.5577011s)
	* I0310 20:14:55.116803    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:14:55.152516    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.6839763s)
	* I0310 20:14:55.152945    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:14:55.232669    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.7773008s)
	* I0310 20:14:55.232669    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:14:55.249721    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.6992986s)
	* I0310 20:14:55.249721    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:14:55.272717    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.5579208s)
	* I0310 20:14:55.273727    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:14:55.274981    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.739095s)
	* I0310 20:14:55.274981    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:14:55.291932    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.7613657s)
	* I0310 20:14:55.292209    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.8134503s)
	* I0310 20:14:55.292209    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:14:55.292209    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:14:55.303676    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.6818128s)
	* I0310 20:14:55.303943    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:14:55.336849    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.553756s)
	* I0310 20:14:55.340410    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:14:55.375237    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.5340148s)
	* I0310 20:14:55.375416    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:14:55.398400    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.612344s)
	* I0310 20:14:55.398741    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:14:55.457772    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.5219653s)
	* I0310 20:14:55.458024    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:14:55.458024    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.2434169s)
	* I0310 20:14:55.458024    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* W0310 20:14:55.475839    7892 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 20:14:55.475839    7892 retry.go:31] will retry after 164.129813ms: ssh: handshake failed: EOF
	* I0310 20:14:55.483187    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.0654541s)
	* I0310 20:14:55.483187    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:14:55.491856    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.3141131s)
	* I0310 20:14:55.492070    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:14:55.533468    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.5944625s)
	* I0310 20:14:55.533929    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:14:55.577510    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.1171462s)
	* I0310 20:14:55.578338    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:14:55.578815    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.0938525s)
	* I0310 20:14:55.578942    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* W0310 20:14:55.832635    7892 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 20:14:55.832635    7892 retry.go:31] will retry after 149.242379ms: ssh: handshake failed: EOF
	* W0310 20:14:55.877006    7892 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 20:14:55.877006    7892 retry.go:31] will retry after 200.227965ms: ssh: handshake failed: EOF
	* I0310 20:15:01.935162    7892 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.7486482s)
	* I0310 20:15:01.935162    7892 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210310083645-5040: (8.9060272s)
	* I0310 20:15:01.935162    7892 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210126212539-5172: (8.8741453s)
	* I0310 20:15:01.935162    7892 cache_images.go:104] "minikube-local-cache-test:functional-20210126212539-5172" needs transfer: "minikube-local-cache-test:functional-20210126212539-5172" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 20:15:01.935162    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210126212539-5172 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172
	* I0310 20:15:01.935162    7892 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172
	* I0310 20:15:01.935162    7892 cache_images.go:104] "minikube-local-cache-test:functional-20210310083645-5040" needs transfer: "minikube-local-cache-test:functional-20210310083645-5040" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 20:15:01.935162    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310083645-5040 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040
	* I0310 20:15:01.935162    7892 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040
	* I0310 20:15:01.935162    7892 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210219145454-9520: (8.8389145s)
	* I0310 20:15:01.935162    7892 cache_images.go:104] "minikube-local-cache-test:functional-20210219145454-9520" needs transfer: "minikube-local-cache-test:functional-20210219145454-9520" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 20:15:01.935162    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219145454-9520 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520
	* I0310 20:15:01.935162    7892 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520
	* I0310 20:15:01.935162    7892 out.go:129] * Enabled addons: storage-provisioner, default-storageclass
	* I0310 20:15:01.935162    7892 addons.go:383] enableAddons completed in 13.2484069s
	* I0310 20:15:01.935162    7892 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210225231842-5736: (8.819743s)
	* I0310 20:15:01.935162    7892 cache_images.go:104] "minikube-local-cache-test:functional-20210225231842-5736" needs transfer: "minikube-local-cache-test:functional-20210225231842-5736" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 20:15:01.935162    7892 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210120175851-7432: (8.814742s)
	* I0310 20:15:01.935162    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210225231842-5736 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736
	* I0310 20:15:01.935162    7892 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736
	* I0310 20:15:01.935162    7892 cache_images.go:104] "minikube-local-cache-test:functional-20210120175851-7432" needs transfer: "minikube-local-cache-test:functional-20210120175851-7432" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 20:15:01.935162    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120175851-7432 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432
	* I0310 20:15:01.935162    7892 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432
	* I0310 20:15:01.935162    7892 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210123004019-5372: (8.7788237s)
	* I0310 20:15:01.935162    7892 cache_images.go:104] "minikube-local-cache-test:functional-20210123004019-5372" needs transfer: "minikube-local-cache-test:functional-20210123004019-5372" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 20:15:01.935162    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210123004019-5372 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372
	* I0310 20:15:01.935162    7892 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372
	* I0310 20:15:01.935162    7892 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210303214129-4588: (8.7678241s)
	* I0310 20:15:01.935162    7892 cache_images.go:104] "minikube-local-cache-test:functional-20210303214129-4588" needs transfer: "minikube-local-cache-test:functional-20210303214129-4588" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 20:15:01.935162    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210303214129-4588 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588
	* I0310 20:15:01.935162    7892 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588
	* I0310 20:15:01.935162    7892 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210115191024-3516: (8.7179961s)
	* I0310 20:15:01.935162    7892 cache_images.go:104] "minikube-local-cache-test:functional-20210115191024-3516" needs transfer: "minikube-local-cache-test:functional-20210115191024-3516" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 20:15:01.935162    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115191024-3516 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516
	* I0310 20:15:01.935162    7892 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516
	* I0310 20:15:01.935162    7892 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210310191609-6496: (8.7000488s)
	* I0310 20:15:01.946419    7892 cache_images.go:104] "minikube-local-cache-test:functional-20210310191609-6496" needs transfer: "minikube-local-cache-test:functional-20210310191609-6496" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 20:15:01.946419    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310191609-6496 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496
	* I0310 20:15:01.946419    7892 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496
	* I0310 20:15:01.935162    7892 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210114204234-6692: (8.6877703s)
	* I0310 20:15:01.935162    7892 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210119220838-6552: (8.5645503s)
	* I0310 20:15:01.946996    7892 cache_images.go:104] "minikube-local-cache-test:functional-20210114204234-6692" needs transfer: "minikube-local-cache-test:functional-20210114204234-6692" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 20:15:01.946996    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210114204234-6692 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692
	* I0310 20:15:01.946996    7892 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692
	* I0310 20:15:01.935162    7892 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210306072141-12056: (8.5875441s)
	* I0310 20:15:01.935162    7892 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210224014800-800: (8.3993065s)
	* I0310 20:15:01.947310    7892 cache_images.go:104] "minikube-local-cache-test:functional-20210306072141-12056" needs transfer: "minikube-local-cache-test:functional-20210306072141-12056" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 20:15:01.947310    7892 cache_images.go:104] "minikube-local-cache-test:functional-20210119220838-6552" needs transfer: "minikube-local-cache-test:functional-20210119220838-6552" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 20:15:01.947310    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210119220838-6552 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552
	* I0310 20:15:01.947310    7892 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552
	* I0310 20:15:01.947310    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210306072141-12056 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056
	* I0310 20:15:01.947310    7892 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056
	* I0310 20:15:01.947536    7892 cache_images.go:104] "minikube-local-cache-test:functional-20210224014800-800" needs transfer: "minikube-local-cache-test:functional-20210224014800-800" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 20:15:01.947536    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210224014800-800 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800
	* I0310 20:15:01.947536    7892 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800
	* I0310 20:15:01.954944    7892 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520
	* I0310 20:15:01.957493    7892 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172
	* I0310 20:15:01.965871    7892 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432
	* I0310 20:15:01.966135    7892 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372
	* I0310 20:15:01.982479    7892 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736
	* I0310 20:15:01.985901    7892 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516
	* I0310 20:15:01.986077    7892 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588
	* I0310 20:15:01.999562    7892 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496
	* I0310 20:15:02.002580    7892 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800
	* I0310 20:15:02.019581    7892 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552
	* I0310 20:15:02.022561    7892 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692
	* I0310 20:15:02.023572    7892 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040
	* I0310 20:15:02.023572    7892 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056
	* W0310 20:15:02.081243    7892 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 20:15:02.081243    7892 retry.go:31] will retry after 253.803157ms: ssh: rejected: connect failed (open failed)
	* W0310 20:15:02.081986    7892 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 20:15:02.081986    7892 retry.go:31] will retry after 328.409991ms: ssh: rejected: connect failed (open failed)
	* W0310 20:15:02.081986    7892 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 20:15:02.081986    7892 retry.go:31] will retry after 178.565968ms: ssh: rejected: connect failed (open failed)
	* I0310 20:15:02.279417    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:15:02.349302    7892 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210220004129-7452: (8.9025572s)
	* I0310 20:15:02.349962    7892 cache_images.go:104] "minikube-local-cache-test:functional-20210220004129-7452" needs transfer: "minikube-local-cache-test:functional-20210220004129-7452" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 20:15:02.349962    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210220004129-7452 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452
	* I0310 20:15:02.349962    7892 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452
	* I0310 20:15:02.353738    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:15:02.376164    7892 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452
	* I0310 20:15:02.390095    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:15:02.432041    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:15:02.774115    7892 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210304002630-1156: (9.44665s)
	* I0310 20:15:02.774115    7892 cache_images.go:104] "minikube-local-cache-test:functional-20210304002630-1156" needs transfer: "minikube-local-cache-test:functional-20210304002630-1156" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 20:15:02.774115    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304002630-1156 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156
	* I0310 20:15:02.774115    7892 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156
	* I0310 20:15:02.787021    7892 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156
	* I0310 20:15:02.796764    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:15:02.910407    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:15:02.984351    7892 cache_images.go:104] "minikube-local-cache-test:functional-20210213143925-7440" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 20:15:02.984351    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210213143925-7440 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440
	* I0310 20:15:02.984351    7892 cache_images.go:104] "minikube-local-cache-test:functional-20210115023213-8464" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 20:15:02.984351    7892 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440
	* I0310 20:15:02.984351    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115023213-8464 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464
	* I0310 20:15:02.984745    7892 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464
	* I0310 20:15:02.984745    7892 cache_images.go:104] "minikube-local-cache-test:functional-20210120231122-7024" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 20:15:02.984745    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120231122-7024 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024
	* I0310 20:15:02.984745    7892 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024
	* I0310 20:15:02.985959    7892 cache_images.go:104] "minikube-local-cache-test:functional-20210304184021-4052" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 20:15:02.985959    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304184021-4052 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052
	* I0310 20:15:02.985959    7892 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052
	* I0310 20:15:02.985959    7892 cache_images.go:104] "minikube-local-cache-test:functional-20210212145109-352" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 20:15:02.986406    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210212145109-352 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352
	* I0310 20:15:02.986406    7892 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352
	* I0310 20:15:02.986406    7892 cache_images.go:104] "minikube-local-cache-test:functional-20210128021318-232" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 20:15:02.986406    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210128021318-232 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232
	* I0310 20:15:02.986748    7892 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232
	* I0310 20:15:02.986748    7892 cache_images.go:104] "minikube-local-cache-test:functional-20210120022529-1140" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 20:15:02.986748    7892 cache_images.go:104] "minikube-local-cache-test:functional-20210309234032-4944" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 20:15:02.986748    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120022529-1140 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140
	* I0310 20:15:02.986748    7892 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140
	* I0310 20:15:02.986748    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210309234032-4944 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944
	* I0310 20:15:02.986748    7892 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944
	* I0310 20:15:02.986748    7892 cache_images.go:104] "minikube-local-cache-test:functional-20210219220622-3920" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 20:15:02.986748    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219220622-3920 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920
	* I0310 20:15:02.986748    7892 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920
	* I0310 20:15:02.986748    7892 cache_images.go:104] "minikube-local-cache-test:functional-20210120214442-10992" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 20:15:02.986748    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120214442-10992 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992
	* I0310 20:15:02.986748    7892 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992
	* I0310 20:15:02.987539    7892 cache_images.go:104] "minikube-local-cache-test:functional-20210301195830-5700" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 20:15:02.987539    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210301195830-5700 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700
	* I0310 20:15:02.987539    7892 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700
	* I0310 20:15:02.988152    7892 cache_images.go:104] "minikube-local-cache-test:functional-20210308233820-5396" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 20:15:02.988152    7892 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210308233820-5396 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396
	* I0310 20:15:02.988152    7892 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396
	* I0310 20:15:03.036839    7892 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464
	* I0310 20:15:03.044729    7892 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440
	* I0310 20:15:03.052299    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:15:03.060986    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:15:03.066771    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:15:03.082533    7892 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396
	* I0310 20:15:03.085527    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:15:03.087344    7892 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024
	* I0310 20:15:03.095204    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:15:03.098216    7892 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992
	* I0310 20:15:03.117987    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:15:03.123172    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:15:03.146477    7892 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920
	* I0310 20:15:03.149124    7892 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232
	* I0310 20:15:03.149450    7892 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140
	* I0310 20:15:03.149450    7892 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352
	* I0310 20:15:03.149450    7892 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052
	* I0310 20:15:03.150047    7892 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944
	* I0310 20:15:03.151447    7892 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700
	* I0310 20:15:03.169887    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:15:03.170846    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:15:03.170846    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:15:03.176401    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:15:03.176401    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:15:03.179071    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:15:03.184249    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:15:03.243975    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:15:03.683044    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:15:03.925501    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:15:03.969666    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:15:03.982828    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:15:04.041868    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:15:04.048745    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:15:04.101042    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:15:04.125805    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:15:04.158410    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:15:04.174901    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.0569182s)
	* I0310 20:15:04.174901    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:15:04.197105    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.026263s)
	* I0310 20:15:04.197105    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* W0310 20:15:04.227972    7892 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 20:15:04.227972    7892 retry.go:31] will retry after 220.164297ms: ssh: handshake failed: EOF
	* I0310 20:15:04.228118    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.0438744s)
	* I0310 20:15:04.228118    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:15:04.233746    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.0546794s)
	* I0310 20:15:04.234077    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* W0310 20:15:04.242673    7892 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 20:15:04.243490    7892 retry.go:31] will retry after 204.514543ms: ssh: handshake failed: EOF
	* I0310 20:15:04.260874    7892 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210212145109-352: (10.5990113s)
	* I0310 20:15:04.260874    7892 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856: (10.1025298s)
	* I0310 20:15:04.261128    7892 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856': No such file or directory
	* I0310 20:15:04.261128    7892 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512: (10.3390934s)
	* I0310 20:15:04.261128    7892 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512': No such file or directory
	* I0310 20:15:04.261128    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856 (4096 bytes)
	* I0310 20:15:04.261128    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512 (4096 bytes)
	* I0310 20:15:04.261612    7892 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210308233820-5396: (10.826149s)
	* I0310 20:15:04.261855    7892 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210120214442-10992: (10.5492254s)
	* I0310 20:15:04.261855    7892 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160: (10.3438037s)
	* I0310 20:15:04.261855    7892 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210309234032-4944: (10.9806631s)
	* I0310 20:15:04.261855    7892 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160': No such file or directory
	* I0310 20:15:04.261855    7892 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492: (9.8760835s)
	* I0310 20:15:04.261855    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160 (4096 bytes)
	* I0310 20:15:04.261855    7892 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492': No such file or directory
	* I0310 20:15:04.262153    7892 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210120231122-7024: (10.9845024s)
	* I0310 20:15:04.262153    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492 (4096 bytes)
	* I0310 20:15:04.262153    7892 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088: (10.1334263s)
	* I0310 20:15:04.262153    7892 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088': No such file or directory
	* I0310 20:15:04.262153    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088 (4096 bytes)
	* I0310 20:15:04.264944    7892 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210304184021-4052: (10.7353479s)
	* I0310 20:15:04.270902    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:15:04.271664    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:15:04.273360    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:15:04.273561    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:15:04.277150    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* W0310 20:15:04.666164    7892 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 20:15:04.666516    7892 retry.go:31] will retry after 363.333692ms: ssh: handshake failed: EOF
	* I0310 20:15:04.794529    7892 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210115023213-8464: (11.5133388s)
	* I0310 20:15:04.794760    7892 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984: (10.3744512s)
	* I0310 20:15:04.794760    7892 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984': No such file or directory
	* I0310 20:15:04.794760    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984 (4096 bytes)
	* I0310 20:15:04.795231    7892 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210128021318-232: (11.2882858s)
	* I0310 20:15:04.795231    7892 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210301195830-5700: (11.383162s)
	* I0310 20:15:04.795787    7892 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210120022529-1140: (11.0702384s)
	* I0310 20:15:04.796154    7892 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748: (10.343427s)
	* I0310 20:15:04.796154    7892 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748': No such file or directory
	* I0310 20:15:04.796154    7892 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372: (2.8300321s)
	* I0310 20:15:04.796154    7892 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372': No such file or directory
	* I0310 20:15:04.796154    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372 (4096 bytes)
	* I0310 20:15:04.796154    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748 (4096 bytes)
	* I0310 20:15:04.796506    7892 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520: (2.8415748s)
	* I0310 20:15:04.796506    7892 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520': No such file or directory
	* I0310 20:15:04.796729    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520 (4096 bytes)
	* I0310 20:15:04.796729    7892 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172: (2.8392491s)
	* I0310 20:15:04.796729    7892 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432: (2.8308712s)
	* I0310 20:15:04.796729    7892 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432': No such file or directory
	* I0310 20:15:04.796729    7892 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172': No such file or directory
	* I0310 20:15:04.796729    7892 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496: (2.7971795s)
	* I0310 20:15:04.796729    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432 (4096 bytes)
	* I0310 20:15:04.797331    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172 (4096 bytes)
	* I0310 20:15:04.797331    7892 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736: (2.8148647s)
	* I0310 20:15:04.797331    7892 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736': No such file or directory
	* I0310 20:15:04.796729    7892 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496': No such file or directory
	* I0310 20:15:04.797331    7892 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516: (2.8114419s)
	* I0310 20:15:04.797331    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496 (4096 bytes)
	* I0310 20:15:04.797331    7892 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516': No such file or directory
	* I0310 20:15:04.797331    7892 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588: (2.8112663s)
	* I0310 20:15:04.797331    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516 (4096 bytes)
	* I0310 20:15:04.797331    7892 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588': No such file or directory
	* I0310 20:15:04.797331    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588 (4096 bytes)
	* I0310 20:15:04.800000    7892 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800: (2.7964435s)
	* I0310 20:15:04.800000    7892 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800': No such file or directory
	* I0310 20:15:04.800270    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800 (4096 bytes)
	* I0310 20:15:04.800270    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736 (4096 bytes)
	* I0310 20:15:04.802715    7892 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552: (2.7831471s)
	* I0310 20:15:04.802715    7892 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552': No such file or directory
	* I0310 20:15:04.802940    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552 (4096 bytes)
	* I0310 20:15:04.817788    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:15:04.818795    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:15:04.819798    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:15:04.824799    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:15:04.827908    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:15:04.833090    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:15:04.834141    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:15:04.837795    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:15:04.837795    7892 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210213143925-7440: (11.0387771s)
	* I0310 20:15:04.842216    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:15:04.842216    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:15:04.842216    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:15:04.846674    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	* I0310 20:15:04.999926    7892 ssh_runner.go:189] Completed: docker image inspect --format  minikube-local-cache-test:functional-20210219220622-3920: (11.6313254s)
	* I0310 20:15:05.054318    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:15:05.105264    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:15:05.159310    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:15:05.192590    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:15:05.214052    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* W0310 20:15:05.290838    7892 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 20:15:05.290838    7892 retry.go:31] will retry after 195.758538ms: ssh: handshake failed: EOF
	* W0310 20:15:05.292073    7892 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 20:15:05.292073    7892 retry.go:31] will retry after 198.275464ms: ssh: handshake failed: EOF
	* I0310 20:15:05.745015    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:15:05.809670    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:15:05.838850    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.0190562s)
	* I0310 20:15:05.838850    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:15:05.847638    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.0288477s)
	* I0310 20:15:05.847833    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:15:05.864028    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.0262376s)
	* I0310 20:15:05.864633    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:15:05.915260    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.087356s)
	* I0310 20:15:05.915260    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:15:05.962206    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.1199955s)
	* I0310 20:15:05.962488    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:15:05.970172    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.1360359s)
	* I0310 20:15:05.970172    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:15:05.977603    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.1528088s)
	* I0310 20:15:05.977787    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:15:05.980122    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.1334533s)
	* I0310 20:15:05.980122    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:15:05.992126    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.149915s)
	* I0310 20:15:05.992392    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.1501816s)
	* I0310 20:15:05.992392    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* I0310 20:15:05.992392    7892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55074 SSHKeyPath:C:\Users\jenkins\.minikube\machines\skaffold-20210310201235-6496\id_rsa Username:docker}
	* W0310 20:15:06.131146    7892 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 20:15:06.131146    7892 retry.go:31] will retry after 294.771169ms: ssh: handshake failed: EOF
	* W0310 20:15:06.271767    7892 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 20:15:06.271767    7892 retry.go:31] will retry after 179.638263ms: ssh: handshake failed: EOF
	* W0310 20:15:06.429422    7892 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 20:15:06.429422    7892 retry.go:31] will retry after 175.796719ms: ssh: handshake failed: EOF
	* W0310 20:15:06.429422    7892 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 20:15:06.429422    7892 retry.go:31] will retry after 215.217854ms: ssh: handshake failed: EOF
	* W0310 20:15:06.889137    7892 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 20:15:06.889137    7892 retry.go:31] will retry after 401.502479ms: ssh: handshake failed: EOF
	* I0310 20:15:07.286737    7892 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040: (5.263189s)
	* I0310 20:15:07.286737    7892 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040': No such file or directory
	* I0310 20:15:07.287813    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040 (4096 bytes)
	* W0310 20:15:07.299091    7892 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 20:15:07.300061    7892 retry.go:31] will retry after 766.401434ms: ssh: handshake failed: EOF
	* I0310 20:15:07.440463    7892 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156: (4.6534626s)
	* I0310 20:15:07.440463    7892 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156': No such file or directory
	* I0310 20:15:07.441366    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156 (4096 bytes)
	* I0310 20:15:07.472746    7892 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452: (5.0966052s)
	* I0310 20:15:07.472746    7892 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452': No such file or directory
	* I0310 20:15:07.473404    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452 (4096 bytes)
	* I0310 20:15:07.730495    7892 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992: (4.6322992s)
	* I0310 20:15:07.730495    7892 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992': No such file or directory
	* I0310 20:15:07.731099    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992 (4096 bytes)
	* I0310 20:15:07.735684    7892 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056: (5.7121372s)
	* I0310 20:15:07.735684    7892 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056': No such file or directory
	* I0310 20:15:07.736035    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056 (4096 bytes)
	* I0310 20:15:07.772647    7892 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024: (4.6853238s)
	* I0310 20:15:07.772647    7892 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024': No such file or directory
	* I0310 20:15:07.772647    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024 (4096 bytes)
	* I0310 20:15:07.992006    7892 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692: (5.9694718s)
	* I0310 20:15:07.992839    7892 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692': No such file or directory
	* I0310 20:15:07.992839    7892 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140: (4.8434103s)
	* I0310 20:15:07.992839    7892 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140': No such file or directory
	* I0310 20:15:07.992839    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692 (4096 bytes)
	* I0310 20:15:07.992839    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140 (4096 bytes)
	* I0310 20:15:08.041557    7892 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232: (4.8924546s)
	* I0310 20:15:08.041557    7892 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232': No such file or directory
	* I0310 20:15:08.041842    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232 (4096 bytes)
	* I0310 20:15:08.063979    7892 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440: (5.0192715s)
	* I0310 20:15:08.064301    7892 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440': No such file or directory
	* I0310 20:15:08.064301    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440 (4096 bytes)
	* I0310 20:15:08.075071    7892 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700: (4.923646s)
	* I0310 20:15:08.075071    7892 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700': No such file or directory
	* I0310 20:15:08.075071    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700 (4096 bytes)
	* W0310 20:15:08.086399    7892 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 20:15:08.242088    7892 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396: (5.1590205s)
	* I0310 20:15:08.242088    7892 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396': No such file or directory
	* I0310 20:15:08.242088    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396 (4096 bytes)
	* I0310 20:15:08.787986    7892 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052: (5.6385613s)
	* I0310 20:15:08.787986    7892 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052': No such file or directory
	* I0310 20:15:08.787986    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052 (4096 bytes)
	* I0310 20:15:08.787986    7892 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920: (5.6415343s)
	* I0310 20:15:08.787986    7892 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920': No such file or directory
	* I0310 20:15:08.788406    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920 (4096 bytes)
	* I0310 20:15:08.829669    7892 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352: (5.6802442s)
	* I0310 20:15:08.829669    7892 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352': No such file or directory
	* I0310 20:15:08.830464    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352 (4096 bytes)
	* I0310 20:15:09.170719    7892 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464: (6.1339072s)
	* I0310 20:15:09.170856    7892 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464': No such file or directory
	* I0310 20:15:09.170985    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464 (4096 bytes)
	* I0310 20:15:09.266833    7892 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156
	* I0310 20:15:09.274337    7892 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156
	* I0310 20:15:10.172050    7892 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944: (7.0220347s)
	* I0310 20:15:10.172226    7892 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944': No such file or directory
	* I0310 20:15:10.172490    7892 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944 (4096 bytes)
	* I0310 20:15:13.249100    7892 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156: (3.9747808s)
	* I0310 20:15:13.249100    7892 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 from cache
	* I0310 20:15:13.250005    7892 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024
	* I0310 20:15:13.257510    7892 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024
	* I0310 20:15:14.038679    7892 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 from cache
	* I0310 20:15:14.039114    7892 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232
	* I0310 20:15:14.047740    7892 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232
	* I0310 20:15:14.384565    7892 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 from cache
	* I0310 20:15:14.384565    7892 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992
	* I0310 20:15:14.392739    7892 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992
	* I0310 20:15:14.710837    7892 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 from cache
	* I0310 20:15:14.710837    7892 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856
	* I0310 20:15:14.719617    7892 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856
	* I0310 20:15:14.999397    7892 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 from cache
	* I0310 20:15:14.999743    7892 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700
	* I0310 20:15:15.010374    7892 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700
	* I0310 20:15:15.325370    7892 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 from cache
	* I0310 20:15:15.325370    7892 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800
	* I0310 20:15:15.333452    7892 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800
	* I0310 20:15:15.640733    7892 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 from cache
	* I0310 20:15:15.640733    7892 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492
	* I0310 20:15:15.652250    7892 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492
	* I0310 20:15:15.920401    7892 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 from cache
	* I0310 20:15:15.920401    7892 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040
	* I0310 20:15:15.937207    7892 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040
	* I0310 20:15:16.267033    7892 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 from cache
	* I0310 20:15:16.267033    7892 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088
	* I0310 20:15:16.278402    7892 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088
	* I0310 20:15:16.564151    7892 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 from cache
	* I0310 20:15:16.564151    7892 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452
	* I0310 20:15:16.580207    7892 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452
	* I0310 20:15:16.916918    7892 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 from cache
	* I0310 20:15:16.916918    7892 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372
	* I0310 20:15:16.935670    7892 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372
	* I0310 20:15:17.300310    7892 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 from cache
	* I0310 20:15:17.300604    7892 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512
	* I0310 20:15:17.308977    7892 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512
	* I0310 20:15:17.615038    7892 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 from cache
	* I0310 20:15:17.615038    7892 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056
	* I0310 20:15:17.623056    7892 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056
	* I0310 20:15:17.938384    7892 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 from cache
	* I0310 20:15:17.938384    7892 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692
	* I0310 20:15:17.946484    7892 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692
	* I0310 20:15:18.281423    7892 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 from cache
	* I0310 20:15:18.281423    7892 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552
	* I0310 20:15:18.291813    7892 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552
	* I0310 20:15:18.614494    7892 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 from cache
	* I0310 20:15:18.614645    7892 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052
	* I0310 20:15:18.624438    7892 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052
	* I0310 20:15:18.974341    7892 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 from cache
	* I0310 20:15:18.974341    7892 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984
	* I0310 20:15:18.983410    7892 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984
	* I0310 20:15:19.306877    7892 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 from cache
	* I0310 20:15:19.306877    7892 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172
	* I0310 20:15:19.321518    7892 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172
	* I0310 20:15:19.614643    7892 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 from cache
	* I0310 20:15:19.614643    7892 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464
	* I0310 20:15:19.628728    7892 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464
	* I0310 20:15:19.928264    7892 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 from cache
	* I0310 20:15:19.930648    7892 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396
	* I0310 20:15:19.945699    7892 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396
	* I0310 20:15:20.320459    7892 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 from cache
	* I0310 20:15:20.320459    7892 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352
	* I0310 20:15:20.328219    7892 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352
	* I0310 20:15:20.631152    7892 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 from cache
	* I0310 20:15:20.631283    7892 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432
	* I0310 20:15:20.640308    7892 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432
	* I0310 20:15:20.930105    7892 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 from cache
	* I0310 20:15:20.930105    7892 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440
	* I0310 20:15:20.938422    7892 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440
	* I0310 20:15:21.257225    7892 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 from cache
	* I0310 20:15:21.257225    7892 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496
	* I0310 20:15:21.266397    7892 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496
	* I0310 20:15:21.562346    7892 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 from cache
	* I0310 20:15:21.562346    7892 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140
	* I0310 20:15:21.575086    7892 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140
	* I0310 20:15:21.872333    7892 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 from cache
	* I0310 20:15:21.872333    7892 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520
	* I0310 20:15:21.883132    7892 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520
	* I0310 20:15:22.285958    7892 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 from cache
	* I0310 20:15:22.285958    7892 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748
	* I0310 20:15:22.297756    7892 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748
	* I0310 20:15:22.619092    7892 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 from cache
	* I0310 20:15:22.619092    7892 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160
	* I0310 20:15:22.636377    7892 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160
	* I0310 20:15:23.052509    7892 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 from cache
	* I0310 20:15:23.052509    7892 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920
	* I0310 20:15:23.062704    7892 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920
	* I0310 20:15:23.352946    7892 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 from cache
	* I0310 20:15:23.352946    7892 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736
	* I0310 20:15:23.360952    7892 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736
	* I0310 20:15:23.646602    7892 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 from cache
	* I0310 20:15:23.646602    7892 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588
	* I0310 20:15:23.664354    7892 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588
	* I0310 20:15:23.944297    7892 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 from cache
	* I0310 20:15:23.944870    7892 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944
	* I0310 20:15:23.962287    7892 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944
	* I0310 20:15:24.274159    7892 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 from cache
	* I0310 20:15:24.274159    7892 cache_images.go:80] LoadImages completed in 31.2903647s
	* W0310 20:15:24.274159    7892 cache_images.go:215] Failed to load cached images for profile skaffold-20210310201235-6496. make sure the profile is running. loading cached images: transferring cached image: NewSession: new client: new client: ssh: handshake failed: EOF
	* I0310 20:15:24.274159    7892 cache_images.go:223] succeeded pushing to: 
	* I0310 20:15:24.274159    7892 cache_images.go:224] failed pushing to: skaffold-20210310201235-6496
	* I0310 20:15:24.472210    7892 start.go:460] kubectl: 1.19.3, cluster: 1.20.2 (minor skew: 1)
	* I0310 20:15:24.478399    7892 out.go:129] * Done! kubectl is now configured to use "skaffold-20210310201235-6496" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	E0310 20:15:32.795699    1228 out.go:340] unable to execute * 2021-03-10 20:14:49.481988 W | etcdserver: request "header:<ID:10490704451290621844 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/skaffold-20210310201235-6496.166b145ee477dbb0\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/skaffold-20210310201235-6496.166b145ee477dbb0\" value_size:636 lease:1267332414435845947 >> failure:<>>" with result "size:16" took too long (211.8104ms) to execute
	: html/template:* 2021-03-10 20:14:49.481988 W | etcdserver: request "header:<ID:10490704451290621844 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/skaffold-20210310201235-6496.166b145ee477dbb0\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/skaffold-20210310201235-6496.166b145ee477dbb0\" value_size:636 lease:1267332414435845947 >> failure:<>>" with result "size:16" took too long (211.8104ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 20:15:32.813731    1228 out.go:340] unable to execute * 2021-03-10 20:15:04.931269 W | etcdserver: request "header:<ID:10490704451290622131 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/skaffold-20210310201235-6496\" mod_revision:378 > success:<request_put:<key:\"/registry/minions/skaffold-20210310201235-6496\" value_size:5298 >> failure:<request_range:<key:\"/registry/minions/skaffold-20210310201235-6496\" > >>" with result "size:5529" took too long (167.5234ms) to execute
	: html/template:* 2021-03-10 20:15:04.931269 W | etcdserver: request "header:<ID:10490704451290622131 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/skaffold-20210310201235-6496\" mod_revision:378 > success:<request_put:<key:\"/registry/minions/skaffold-20210310201235-6496\" value_size:5298 >> failure:<request_range:<key:\"/registry/minions/skaffold-20210310201235-6496\" > >>" with result "size:5529" took too long (167.5234ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 20:15:32.832003    1228 out.go:340] unable to execute * 2021-03-10 20:15:05.153288 W | etcdserver: request "header:<ID:10490704451290622140 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:257 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:3511 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >>" with result "size:16" took too long (121.1325ms) to execute
	: html/template:* 2021-03-10 20:15:05.153288 W | etcdserver: request "header:<ID:10490704451290622140 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:257 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:3511 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >>" with result "size:16" took too long (121.1325ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 20:15:32.853128    1228 out.go:340] unable to execute * 2021-03-10 20:15:05.791238 W | etcdserver: request "header:<ID:10490704451290622154 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:279 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:2668 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >>" with result "size:16" took too long (317.1241ms) to execute
	: html/template:* 2021-03-10 20:15:05.791238 W | etcdserver: request "header:<ID:10490704451290622154 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:279 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:2668 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >>" with result "size:16" took too long (317.1241ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 20:15:34.250925    1228 out.go:335] unable to parse "* I0310 20:12:37.920558    7892 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 20:12:37.920558    7892 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 20:15:34.281127    1228 out.go:335] unable to parse "* I0310 20:12:40.546379    7892 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 20:12:40.546379    7892 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 20:15:34.416467    1228 out.go:340] unable to execute * I0310 20:12:41.805498    7892 cli_runner.go:115] Run: docker network inspect skaffold-20210310201235-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	: template: * I0310 20:12:41.805498    7892 cli_runner.go:115] Run: docker network inspect skaffold-20210310201235-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	:1:284: executing "* I0310 20:12:41.805498    7892 cli_runner.go:115] Run: docker network inspect skaffold-20210310201235-6496 --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\"\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:34.424654    1228 out.go:340] unable to execute * W0310 20:12:42.275262    7892 cli_runner.go:162] docker network inspect skaffold-20210310201235-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	: template: * W0310 20:12:42.275262    7892 cli_runner.go:162] docker network inspect skaffold-20210310201235-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	:1:279: executing "* W0310 20:12:42.275262    7892 cli_runner.go:162] docker network inspect skaffold-20210310201235-6496 --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\" returned with exit code 1\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:34.505339    1228 out.go:340] unable to execute * I0310 20:12:42.758602    7892 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	: template: * I0310 20:12:42.758602    7892 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	:1:262: executing "* I0310 20:12:42.758602    7892 cli_runner.go:115] Run: docker network inspect bridge --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\"\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:34.561137    1228 out.go:335] unable to parse "* I0310 20:12:46.675191    7892 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 20:12:46.675191    7892 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 20:15:34.761411    1228 out.go:335] unable to parse "* I0310 20:12:47.463494    7892 cli_runner.go:115] Run: docker info --format \"'{{json .SecurityOptions}}'\"\n": template: * I0310 20:12:47.463494    7892 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	:1: function "json" not defined - returning raw string.
	E0310 20:15:34.817495    1228 out.go:340] unable to execute * I0310 20:12:55.148633    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:12:55.148633    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:12:55.148633    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:34.833325    1228 out.go:335] unable to parse "* I0310 20:12:55.632345    7892 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55074 <nil> <nil>}\n": template: * I0310 20:12:55.632345    7892 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55074 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 20:15:34.859237    1228 out.go:340] unable to execute * I0310 20:12:55.920350    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:12:55.920350    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:12:55.920350    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:34.869884    1228 out.go:335] unable to parse "* I0310 20:12:56.382756    7892 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55074 <nil> <nil>}\n": template: * I0310 20:12:56.382756    7892 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55074 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 20:15:34.956783    1228 out.go:340] unable to execute * I0310 20:12:57.306853    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:12:57.306853    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:12:57.306853    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:34.983403    1228 out.go:340] unable to execute * I0310 20:12:58.076509    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:12:58.076509    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:12:58.076509    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:34.993837    1228 out.go:335] unable to parse "* I0310 20:12:58.546506    7892 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55074 <nil> <nil>}\n": template: * I0310 20:12:58.546506    7892 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55074 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 20:15:35.019772    1228 out.go:340] unable to execute * I0310 20:12:58.784425    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:12:58.784425    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:12:58.784425    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:35.028813    1228 out.go:335] unable to parse "* I0310 20:12:59.245006    7892 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55074 <nil> <nil>}\n": template: * I0310 20:12:59.245006    7892 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55074 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 20:15:35.396153    1228 out.go:340] unable to execute * I0310 20:12:59.490271    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:12:59.490271    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:12:59.490271    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:35.406391    1228 out.go:335] unable to parse "* I0310 20:12:59.953652    7892 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55074 <nil> <nil>}\n": template: * I0310 20:12:59.953652    7892 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55074 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 20:15:35.688720    1228 out.go:340] unable to execute * I0310 20:13:01.582670    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:13:01.582670    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:13:01.582670    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:35.751101    1228 out.go:340] unable to execute * I0310 20:13:02.937412    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:13:02.937412    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:13:02.937412    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:35.779034    1228 out.go:340] unable to execute * I0310 20:13:04.073262    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:13:04.073262    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:13:04.073262    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:35.786062    1228 out.go:340] unable to execute * I0310 20:13:04.074269    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:13:04.074269    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:13:04.074269    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:35.835034    1228 out.go:340] unable to execute * I0310 20:13:06.233425    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:13:06.233425    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:13:06.233425    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"8443/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "8443/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:37.735309    1228 out.go:340] unable to execute * I0310 20:14:49.002155    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:14:49.002155    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:14:49.002155    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"8443/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "8443/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:38.202994    1228 out.go:340] unable to execute * I0310 20:14:50.463484    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.461336s)
	: template: * I0310 20:14:50.463484    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.461336s)
	:1:102: executing "* I0310 20:14:50.463484    7892 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"8443/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496: (1.461336s)\n" at <index .NetworkSettings.Ports "8443/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:38.223301    1228 out.go:340] unable to execute * I0310 20:14:50.500859    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:14:50.500859    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:14:50.500859    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:38.288495    1228 out.go:340] unable to execute * I0310 20:14:50.821852    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:14:50.821852    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:14:50.821852    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:38.311786    1228 out.go:340] unable to execute * I0310 20:14:51.103046    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:14:51.103046    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:14:51.103046    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:38.505968    1228 out.go:340] unable to execute * I0310 20:14:53.335674    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:14:53.335674    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:14:53.335674    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:38.518556    1228 out.go:340] unable to execute * I0310 20:14:53.361680    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:14:53.361680    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:14:53.361680    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:38.534997    1228 out.go:340] unable to execute * I0310 20:14:53.381930    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:14:53.381930    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:14:53.381930    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:38.544971    1228 out.go:340] unable to execute * I0310 20:14:53.387174    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:14:53.387174    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:14:53.387174    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:38.559593    1228 out.go:340] unable to execute * I0310 20:14:53.443303    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:14:53.443303    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:14:53.443303    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:38.578137    1228 out.go:340] unable to execute * I0310 20:14:53.455376    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:14:53.455376    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:14:53.455376    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:38.586987    1228 out.go:340] unable to execute * I0310 20:14:53.468548    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:14:53.468548    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:14:53.468548    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:38.596772    1228 out.go:340] unable to execute * I0310 20:14:53.478257    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:14:53.478257    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:14:53.478257    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:38.621607    1228 out.go:340] unable to execute * I0310 20:14:53.530574    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:14:53.530574    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:14:53.530574    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:38.634087    1228 out.go:340] unable to execute * I0310 20:14:53.535894    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:14:53.535894    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:14:53.535894    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:38.650312    1228 out.go:340] unable to execute * I0310 20:14:53.550430    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:14:53.550430    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:14:53.550430    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:38.657389    1228 out.go:340] unable to execute * I0310 20:14:53.558637    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:14:53.558637    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:14:53.558637    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:38.667332    1228 out.go:340] unable to execute * I0310 20:14:53.621871    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:14:53.621871    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:14:53.621871    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:38.693849    1228 out.go:340] unable to execute * I0310 20:14:53.714803    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:14:53.714803    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:14:53.714803    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:38.705988    1228 out.go:340] unable to execute * I0310 20:14:53.783100    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:14:53.783100    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:14:53.783100    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:38.715919    1228 out.go:340] unable to execute * I0310 20:14:53.786063    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:14:53.786063    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:14:53.786063    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:38.728658    1228 out.go:340] unable to execute * I0310 20:14:53.841229    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:14:53.841229    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:14:53.841229    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:38.782313    1228 out.go:340] unable to execute * I0310 20:14:53.935620    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:14:53.935620    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:14:53.935620    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:38.792457    1228 out.go:340] unable to execute * I0310 20:14:53.939013    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:14:53.939013    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:14:53.939013    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:38.835762    1228 out.go:340] unable to execute * I0310 20:14:54.177749    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:14:54.177749    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:14:54.177749    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:38.849768    1228 out.go:340] unable to execute * I0310 20:14:54.214612    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:14:54.214612    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:14:54.214612    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:38.898552    1228 out.go:340] unable to execute * I0310 20:14:54.417738    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:14:54.417738    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:14:54.417738    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:38.910555    1228 out.go:340] unable to execute * I0310 20:14:54.460369    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:14:54.460369    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:14:54.460369    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:38.917740    1228 out.go:340] unable to execute * I0310 20:14:54.484968    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:14:54.484968    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:14:54.484968    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:38.924231    1228 out.go:340] unable to execute * I0310 20:14:54.865564    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.4222679s)
	: template: * I0310 20:14:54.865564    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.4222679s)
	:1:102: executing "* I0310 20:14:54.865564    7892 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496: (1.4222679s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:38.935379    1228 out.go:340] unable to execute * I0310 20:14:54.909706    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.5740394s)
	: template: * I0310 20:14:54.909706    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.5740394s)
	:1:102: executing "* I0310 20:14:54.909706    7892 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496: (1.5740394s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:38.945646    1228 out.go:340] unable to execute * I0310 20:14:54.923902    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.5417238s)
	: template: * I0310 20:14:54.923902    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.5417238s)
	:1:102: executing "* I0310 20:14:54.923902    7892 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496: (1.5417238s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:38.954601    1228 out.go:340] unable to execute * I0310 20:14:54.960076    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.5984035s)
	: template: * I0310 20:14:54.960076    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.5984035s)
	:1:102: executing "* I0310 20:14:54.960076    7892 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496: (1.5984035s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:38.963602    1228 out.go:340] unable to execute * I0310 20:14:54.974071    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.586905s)
	: template: * I0310 20:14:54.974071    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.586905s)
	:1:102: executing "* I0310 20:14:54.974071    7892 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496: (1.586905s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:38.973601    1228 out.go:340] unable to execute * I0310 20:14:55.116331    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.5577011s)
	: template: * I0310 20:14:55.116331    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.5577011s)
	:1:102: executing "* I0310 20:14:55.116331    7892 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496: (1.5577011s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:38.982611    1228 out.go:340] unable to execute * I0310 20:14:55.152516    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.6839763s)
	: template: * I0310 20:14:55.152516    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.6839763s)
	:1:102: executing "* I0310 20:14:55.152516    7892 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496: (1.6839763s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:38.994179    1228 out.go:340] unable to execute * I0310 20:14:55.232669    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.7773008s)
	: template: * I0310 20:14:55.232669    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.7773008s)
	:1:102: executing "* I0310 20:14:55.232669    7892 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496: (1.7773008s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:39.009489    1228 out.go:340] unable to execute * I0310 20:14:55.249721    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.6992986s)
	: template: * I0310 20:14:55.249721    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.6992986s)
	:1:102: executing "* I0310 20:14:55.249721    7892 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496: (1.6992986s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:39.018323    1228 out.go:340] unable to execute * I0310 20:14:55.272717    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.5579208s)
	: template: * I0310 20:14:55.272717    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.5579208s)
	:1:102: executing "* I0310 20:14:55.272717    7892 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496: (1.5579208s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:39.028400    1228 out.go:340] unable to execute * I0310 20:14:55.274981    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.739095s)
	: template: * I0310 20:14:55.274981    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.739095s)
	:1:102: executing "* I0310 20:14:55.274981    7892 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496: (1.739095s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:39.041043    1228 out.go:340] unable to execute * I0310 20:14:55.291932    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.7613657s)
	: template: * I0310 20:14:55.291932    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.7613657s)
	:1:102: executing "* I0310 20:14:55.291932    7892 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496: (1.7613657s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:39.052270    1228 out.go:340] unable to execute * I0310 20:14:55.292209    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.8134503s)
	: template: * I0310 20:14:55.292209    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.8134503s)
	:1:102: executing "* I0310 20:14:55.292209    7892 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496: (1.8134503s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:39.064784    1228 out.go:340] unable to execute * I0310 20:14:55.303676    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.6818128s)
	: template: * I0310 20:14:55.303676    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.6818128s)
	:1:102: executing "* I0310 20:14:55.303676    7892 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496: (1.6818128s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:39.074784    1228 out.go:340] unable to execute * I0310 20:14:55.336849    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.553756s)
	: template: * I0310 20:14:55.336849    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.553756s)
	:1:102: executing "* I0310 20:14:55.336849    7892 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496: (1.553756s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:39.086463    1228 out.go:340] unable to execute * I0310 20:14:55.375237    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.5340148s)
	: template: * I0310 20:14:55.375237    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.5340148s)
	:1:102: executing "* I0310 20:14:55.375237    7892 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496: (1.5340148s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:39.101050    1228 out.go:340] unable to execute * I0310 20:14:55.398400    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.612344s)
	: template: * I0310 20:14:55.398400    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.612344s)
	:1:102: executing "* I0310 20:14:55.398400    7892 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496: (1.612344s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:39.112335    1228 out.go:340] unable to execute * I0310 20:14:55.457772    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.5219653s)
	: template: * I0310 20:14:55.457772    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.5219653s)
	:1:102: executing "* I0310 20:14:55.457772    7892 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496: (1.5219653s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:39.122666    1228 out.go:340] unable to execute * I0310 20:14:55.458024    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.2434169s)
	: template: * I0310 20:14:55.458024    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.2434169s)
	:1:102: executing "* I0310 20:14:55.458024    7892 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496: (1.2434169s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:39.147239    1228 out.go:340] unable to execute * I0310 20:14:55.483187    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.0654541s)
	: template: * I0310 20:14:55.483187    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.0654541s)
	:1:102: executing "* I0310 20:14:55.483187    7892 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496: (1.0654541s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:39.157355    1228 out.go:340] unable to execute * I0310 20:14:55.491856    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.3141131s)
	: template: * I0310 20:14:55.491856    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.3141131s)
	:1:102: executing "* I0310 20:14:55.491856    7892 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496: (1.3141131s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:39.168834    1228 out.go:340] unable to execute * I0310 20:14:55.533468    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.5944625s)
	: template: * I0310 20:14:55.533468    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.5944625s)
	:1:102: executing "* I0310 20:14:55.533468    7892 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496: (1.5944625s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:39.180007    1228 out.go:340] unable to execute * I0310 20:14:55.577510    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.1171462s)
	: template: * I0310 20:14:55.577510    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.1171462s)
	:1:102: executing "* I0310 20:14:55.577510    7892 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496: (1.1171462s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:39.189924    1228 out.go:340] unable to execute * I0310 20:14:55.578815    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.0938525s)
	: template: * I0310 20:14:55.578815    7892 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496: (1.0938525s)
	:1:102: executing "* I0310 20:14:55.578815    7892 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496: (1.0938525s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:39.441402    1228 out.go:340] unable to execute * I0310 20:15:02.279417    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:15:02.279417    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:15:02.279417    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:39.465746    1228 out.go:340] unable to execute * I0310 20:15:02.353738    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:15:02.353738    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:15:02.353738    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:39.474214    1228 out.go:340] unable to execute * I0310 20:15:02.390095    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:15:02.390095    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:15:02.390095    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:39.481867    1228 out.go:340] unable to execute * I0310 20:15:02.432041    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:15:02.432041    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:15:02.432041    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 20:15:39.506440    1228 out.go:340] unable to execute * I0310 20:15:02.796764    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	: template: * I0310 20:15:02.796764    7892 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496
	:1:96: executing "* I0310 20:15:02.796764    7892 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" skaffold-20210310201235-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	[... the same "error calling index: index of untyped nil" template error repeats for every subsequent `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20210310201235-6496` Run/Completed log line, through E0310 20:15:40.526375 ...]

** /stderr **
helpers_test.go:250: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p skaffold-20210310201235-6496 -n skaffold-20210310201235-6496
helpers_test.go:250: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p skaffold-20210310201235-6496 -n skaffold-20210310201235-6496: (3.0065935s)
helpers_test.go:257: (dbg) Run:  kubectl --context skaffold-20210310201235-6496 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:263: non-running pods: 
helpers_test.go:265: ======> post-mortem[TestSkaffold]: describe non-running pods <======
helpers_test.go:268: (dbg) Run:  kubectl --context skaffold-20210310201235-6496 describe pod 
helpers_test.go:268: (dbg) Non-zero exit: kubectl --context skaffold-20210310201235-6496 describe pod : exit status 1 (195.0375ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:270: kubectl --context skaffold-20210310201235-6496 describe pod : exit status 1
helpers_test.go:171: Cleaning up "skaffold-20210310201235-6496" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe delete -p skaffold-20210310201235-6496
helpers_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe delete -p skaffold-20210310201235-6496: (11.2862413s)
--- FAIL: TestSkaffold (201.88s)

TestRunningBinaryUpgrade (3334.18s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:114: (dbg) Run:  C:\Users\jenkins\AppData\Local\Temp\minikube-v1.9.0.701074565.exe start -p running-upgrade-20210310201637-6496 --memory=2200 --vm-driver=docker

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:114: (dbg) Done: C:\Users\jenkins\AppData\Local\Temp\minikube-v1.9.0.701074565.exe start -p running-upgrade-20210310201637-6496 --memory=2200 --vm-driver=docker: (51m9.1707695s)
version_upgrade_test.go:124: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-20210310201637-6496 --memory=2200 --alsologtostderr -v=1 --driver=docker

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:124: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p running-upgrade-20210310201637-6496 --memory=2200 --alsologtostderr -v=1 --driver=docker: exit status 1 (3m50.0753657s)

-- stdout --
	* [running-upgrade-20210310201637-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - MINIKUBE_LOCATION=10722
	* Kubernetes 1.20.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.20.2
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-20210310201637-6496 in cluster running-upgrade-20210310201637-6496
	* Updating the running docker "running-upgrade-20210310201637-6496" container ...
	* Preparing Kubernetes v1.18.0 on Docker 19.03.2 ...

-- /stdout --
** stderr ** 
	I0310 21:07:48.002632    2000 out.go:239] Setting OutFile to fd 1652 ...
	I0310 21:07:48.003642    2000 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 21:07:48.003642    2000 out.go:252] Setting ErrFile to fd 2836...
	I0310 21:07:48.003642    2000 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 21:07:48.045069    2000 out.go:246] Setting JSON to false
	I0310 21:07:48.054293    2000 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":35933,"bootTime":1615374535,"procs":117,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0310 21:07:48.055299    2000 start.go:116] gopshost.Virtualization returned error: not implemented yet
	I0310 21:07:48.060322    2000 out.go:129] * [running-upgrade-20210310201637-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0310 21:07:48.069818    2000 out.go:129]   - MINIKUBE_LOCATION=10722
	I0310 21:07:48.074735    2000 start_flags.go:453] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
	I0310 21:07:48.078293    2000 out.go:129] * Kubernetes 1.20.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.20.2
	I0310 21:07:48.078293    2000 driver.go:323] Setting default libvirt URI to qemu:///system
	I0310 21:07:48.643202    2000 docker.go:119] docker version: linux-20.10.2
	I0310 21:07:48.656453    2000 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 21:07:49.793426    2000 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.1362263s)
	I0310 21:07:49.795031    2000 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:9 ContainersRunning:9 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:136 OomKillDisable:true NGoroutines:83 SystemTime:2021-03-10 21:07:49.3268937 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 21:07:49.803161    2000 out.go:129] * Using the docker driver based on existing profile
	I0310 21:07:49.803306    2000 start.go:276] selected driver: docker
	I0310 21:07:49.803306    2000 start.go:718] validating driver "docker" against &{Name:running-upgrade-20210310201637-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-20210310201637-6496 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:0 NodeName:} Nodes:[{Name:m01 IP:172.17.0.8 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 21:07:49.803612    2000 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0310 21:07:51.828619    2000 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 21:07:52.849057    2000 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0204401s)
	I0310 21:07:52.850280    2000 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:9 ContainersRunning:9 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:137 OomKillDisable:true NGoroutines:83 SystemTime:2021-03-10 21:07:52.377902 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 21:07:52.851219    2000 start_flags.go:398] config:
	{Name:running-upgrade-20210310201637-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-20210310201637-6496 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.8 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 21:07:52.855810    2000 out.go:129] * Starting control plane node running-upgrade-20210310201637-6496 in cluster running-upgrade-20210310201637-6496
	I0310 21:07:53.515872    2000 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	I0310 21:07:53.516210    2000 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	I0310 21:07:53.516363    2000 preload.go:97] Checking if preload exists for k8s version v1.18.0 and runtime docker
	W0310 21:07:53.631003    2000 preload.go:118] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v9-v1.18.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I0310 21:07:53.631466    2000 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\running-upgrade-20210310201637-6496\config.json ...
	I0310 21:07:53.631770    2000 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper:v1.0.4 -> C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.4
	I0310 21:07:53.631770    2000 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-proxy:v1.18.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.18.0
	I0310 21:07:53.632230    2000 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-controller-manager:v1.18.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.18.0
	I0310 21:07:53.632230    2000 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\pause:3.2 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\pause_3.2
	I0310 21:07:53.632230    2000 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\etcd:3.4.3-0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\etcd_3.4.3-0
	I0310 21:07:53.632230    2000 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\dashboard:v2.1.0 -> C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.1.0
	I0310 21:07:53.632230    2000 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-apiserver:v1.18.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.18.0
	I0310 21:07:53.632385    2000 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-scheduler:v1.18.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.18.0
	I0310 21:07:53.631770    2000 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\coredns:1.6.7 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\coredns_1.6.7
	I0310 21:07:53.632533    2000 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner:v4 -> C:\Users\jenkins\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v4
	I0310 21:07:53.663550    2000 cache.go:185] Successfully downloaded all kic artifacts
	I0310 21:07:53.665008    2000 start.go:313] acquiring machines lock for running-upgrade-20210310201637-6496: {Name:mkadafd569b31b7088ef8c9d5ae99a588890ad17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:07:53.666952    2000 start.go:317] acquired machines lock for "running-upgrade-20210310201637-6496" in 940.7µs
	I0310 21:07:53.667190    2000 start.go:93] Skipping create...Using existing machine configuration
	I0310 21:07:53.667427    2000 fix.go:55] fixHost starting: m01
	I0310 21:07:53.790937    2000 cli_runner.go:115] Run: docker container inspect running-upgrade-20210310201637-6496 --format={{.State.Status}}
	I0310 21:07:53.938838    2000 cache.go:93] acquiring lock: {Name:mkd1a3345075d89bd4426b52f164cc77480ec169 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:07:53.939172    2000 cache.go:93] acquiring lock: {Name:mkeb51a7f4d902422b144c2acaf6602ffeeda50b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:07:53.939395    2000 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.18.0 exists
	I0310 21:07:53.940764    2000 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.18.0 exists
	I0310 21:07:53.940764    2000 cache.go:82] cache image "k8s.gcr.io/kube-controller-manager:v1.18.0" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\k8s.gcr.io\\kube-controller-manager_v1.18.0" took 308.3796ms
	I0310 21:07:53.940764    2000 cache.go:66] save to tar file k8s.gcr.io/kube-controller-manager:v1.18.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.18.0 succeeded
	I0310 21:07:53.940764    2000 cache.go:82] cache image "k8s.gcr.io/kube-proxy:v1.18.0" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\k8s.gcr.io\\kube-proxy_v1.18.0" took 308.8341ms
	I0310 21:07:53.940764    2000 cache.go:66] save to tar file k8s.gcr.io/kube-proxy:v1.18.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.18.0 succeeded
	I0310 21:07:53.944738    2000 cache.go:93] acquiring lock: {Name:mk95277aa1d8baa6ce693324ce93a259561b3b0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:07:53.945796    2000 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.4 exists
	I0310 21:07:53.946483    2000 cache.go:82] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\docker.io\\kubernetesui\\metrics-scraper_v1.0.4" took 314.7137ms
	I0310 21:07:53.946483    2000 cache.go:66] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.4 succeeded
	I0310 21:07:53.964845    2000 cache.go:93] acquiring lock: {Name:mkaf6817d1570cac8e9e1902b52a9b2c5b9dc038 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:07:53.965473    2000 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\k8s.gcr.io\etcd_3.4.3-0 exists
	I0310 21:07:53.965723    2000 cache.go:82] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\k8s.gcr.io\\etcd_3.4.3-0" took 322.6969ms
	I0310 21:07:53.965723    2000 cache.go:66] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\etcd_3.4.3-0 succeeded
	I0310 21:07:53.967608    2000 cache.go:93] acquiring lock: {Name:mk9c8fd7ef36525ddfab354a7672d9c092c5ea53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:07:53.968909    2000 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.18.0 exists
	I0310 21:07:53.969529    2000 cache.go:82] cache image "k8s.gcr.io/kube-scheduler:v1.18.0" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\k8s.gcr.io\\kube-scheduler_v1.18.0" took 325.3186ms
	I0310 21:07:53.969768    2000 cache.go:66] save to tar file k8s.gcr.io/kube-scheduler:v1.18.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.18.0 succeeded
	I0310 21:07:53.975185    2000 cache.go:93] acquiring lock: {Name:mkf95068147fb9802daffb44f03793cdfc94af80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:07:53.975929    2000 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v4 exists
	I0310 21:07:53.977097    2000 cache.go:82] cache image "gcr.io/k8s-minikube/storage-provisioner:v4" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\gcr.io\\k8s-minikube\\storage-provisioner_v4" took 330.4643ms
	I0310 21:07:53.977097    2000 cache.go:66] save to tar file gcr.io/k8s-minikube/storage-provisioner:v4 -> C:\Users\jenkins\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v4 succeeded
	I0310 21:07:53.996725    2000 cache.go:93] acquiring lock: {Name:mk33908c5692f6fbcea93524c073786bb1491be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:07:53.997836    2000 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.1.0 exists
	I0310 21:07:53.998135    2000 cache.go:82] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\docker.io\\kubernetesui\\dashboard_v2.1.0" took 354.8666ms
	I0310 21:07:53.998135    2000 cache.go:66] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.1.0 succeeded
	I0310 21:07:54.012434    2000 cache.go:93] acquiring lock: {Name:mk1bbd52b1d425b987a80d1b42ea65a1daa62351 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:07:54.012434    2000 cache.go:93] acquiring lock: {Name:mk962fa425f0feaabe16844bc3ad9ac4bf160641 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:07:54.013384    2000 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\k8s.gcr.io\pause_3.2 exists
	I0310 21:07:54.013384    2000 cache.go:93] acquiring lock: {Name:mkbaeca4a6ec180fb6b1238846e64ebdef3e8b1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:07:54.013384    2000 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\k8s.gcr.io\coredns_1.6.7 exists
	I0310 21:07:54.013905    2000 cache.go:82] cache image "k8s.gcr.io/pause:3.2" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\k8s.gcr.io\\pause_3.2" took 369.962ms
	I0310 21:07:54.013905    2000 cache.go:66] save to tar file k8s.gcr.io/pause:3.2 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\pause_3.2 succeeded
	I0310 21:07:54.014130    2000 cache.go:82] cache image "k8s.gcr.io/coredns:1.6.7" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\k8s.gcr.io\\coredns_1.6.7" took 368.4164ms
	I0310 21:07:54.014130    2000 cache.go:66] save to tar file k8s.gcr.io/coredns:1.6.7 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\coredns_1.6.7 succeeded
	I0310 21:07:54.014347    2000 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.18.0 exists
	I0310 21:07:54.015143    2000 cache.go:82] cache image "k8s.gcr.io/kube-apiserver:v1.18.0" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\k8s.gcr.io\\kube-apiserver_v1.18.0" took 361.6762ms
	I0310 21:07:54.015369    2000 cache.go:66] save to tar file k8s.gcr.io/kube-apiserver:v1.18.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.18.0 succeeded
	I0310 21:07:54.015369    2000 cache.go:73] Successfully saved all images to host disk.
	I0310 21:07:54.464290    2000 fix.go:108] recreateIfNeeded on running-upgrade-20210310201637-6496: state=Running err=<nil>
	W0310 21:07:54.464290    2000 fix.go:134] unexpected machine state, will restart: <nil>
	I0310 21:07:54.469315    2000 out.go:129] * Updating the running docker "running-upgrade-20210310201637-6496" container ...
	I0310 21:07:54.469595    2000 machine.go:88] provisioning docker machine ...
	I0310 21:07:54.469734    2000 ubuntu.go:169] provisioning hostname "running-upgrade-20210310201637-6496"
	I0310 21:07:54.477388    2000 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210310201637-6496
	I0310 21:07:55.117273    2000 main.go:121] libmachine: Using SSH client type: native
	I0310 21:07:55.117966    2000 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55115 <nil> <nil>}
	I0310 21:07:55.118241    2000 main.go:121] libmachine: About to run SSH command:
	sudo hostname running-upgrade-20210310201637-6496 && echo "running-upgrade-20210310201637-6496" | sudo tee /etc/hostname
	I0310 21:07:58.349788    2000 main.go:121] libmachine: SSH cmd err, output: <nil>: running-upgrade-20210310201637-6496
	
	I0310 21:07:58.358521    2000 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210310201637-6496
	I0310 21:07:58.959935    2000 main.go:121] libmachine: Using SSH client type: native
	I0310 21:07:58.960714    2000 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55115 <nil> <nil>}
	I0310 21:07:58.960932    2000 main.go:121] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-20210310201637-6496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-20210310201637-6496/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-20210310201637-6496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0310 21:08:01.261113    2000 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	I0310 21:08:01.261113    2000 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	I0310 21:08:01.261113    2000 ubuntu.go:177] setting up certificates
	I0310 21:08:01.261113    2000 provision.go:83] configureAuth start
	I0310 21:08:01.269686    2000 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-20210310201637-6496
	I0310 21:08:01.937346    2000 provision.go:137] copyHostCerts
	I0310 21:08:01.937903    2000 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	I0310 21:08:01.937903    2000 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	I0310 21:08:01.938284    2000 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	I0310 21:08:01.942055    2000 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	I0310 21:08:01.942055    2000 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	I0310 21:08:01.942055    2000 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	I0310 21:08:01.945055    2000 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	I0310 21:08:01.945055    2000 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	I0310 21:08:01.945055    2000 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	I0310 21:08:01.948045    2000 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.running-upgrade-20210310201637-6496 san=[172.17.0.8 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-20210310201637-6496]
	I0310 21:08:02.564805    2000 provision.go:165] copyRemoteCerts
	I0310 21:08:02.576618    2000 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0310 21:08:02.586856    2000 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210310201637-6496
	I0310 21:08:03.197831    2000 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55115 SSHKeyPath:C:\Users\jenkins\.minikube\machines\running-upgrade-20210310201637-6496\id_rsa Username:docker}
	I0310 21:08:04.133763    2000 ssh_runner.go:189] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.556872s)
	I0310 21:08:04.134601    2000 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0310 21:08:05.987049    2000 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0310 21:08:07.273753    2000 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1277 bytes)
	I0310 21:08:08.958484    2000 provision.go:86] duration metric: configureAuth took 7.6965458s
	I0310 21:08:08.958484    2000 ubuntu.go:193] setting minikube options for container-runtime
	I0310 21:08:08.965919    2000 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210310201637-6496
	I0310 21:08:09.559131    2000 main.go:121] libmachine: Using SSH client type: native
	I0310 21:08:09.559679    2000 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55115 <nil> <nil>}
	I0310 21:08:09.559807    2000 main.go:121] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0310 21:08:11.414005    2000 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0310 21:08:11.414302    2000 ubuntu.go:71] root file system type: overlay
	I0310 21:08:11.414863    2000 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	I0310 21:08:11.425151    2000 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210310201637-6496
	I0310 21:08:12.066026    2000 main.go:121] libmachine: Using SSH client type: native
	I0310 21:08:12.066026    2000 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55115 <nil> <nil>}
	I0310 21:08:12.066026    2000 main.go:121] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0310 21:08:17.087865    2000 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0310 21:08:17.096629    2000 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210310201637-6496
	I0310 21:08:17.710132    2000 main.go:121] libmachine: Using SSH client type: native
	I0310 21:08:17.710510    2000 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55115 <nil> <nil>}
	I0310 21:08:17.710510    2000 main.go:121] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0310 21:09:42.288268    2000 main.go:121] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-03-10 20:20:27.129147000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-03-10 21:08:16.968143000 +0000
	@@ -5,9 +5,12 @@
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	+Restart=on-failure
	 
	 
	 
	@@ -23,7 +26,7 @@
	 # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	 ExecStart=
	 ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	-ExecReload=/bin/kill -s HUP 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	
	I0310 21:09:42.289010    2000 machine.go:91] provisioned docker machine in 1m47.8195965s
	I0310 21:09:42.289010    2000 start.go:267] post-start starting for "running-upgrade-20210310201637-6496" (driver="docker")
	I0310 21:09:42.289010    2000 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0310 21:09:42.302980    2000 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0310 21:09:42.311871    2000 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210310201637-6496
	I0310 21:09:42.936937    2000 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55115 SSHKeyPath:C:\Users\jenkins\.minikube\machines\running-upgrade-20210310201637-6496\id_rsa Username:docker}
	I0310 21:09:43.506453    2000 ssh_runner.go:189] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.2031966s)
	I0310 21:09:43.524763    2000 ssh_runner.go:149] Run: cat /etc/os-release
	I0310 21:09:43.553349    2000 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0310 21:09:43.553547    2000 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0310 21:09:43.553547    2000 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0310 21:09:43.553815    2000 info.go:137] Remote host: Ubuntu 19.10
	I0310 21:09:43.554568    2000 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	I0310 21:09:43.555865    2000 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	I0310 21:09:43.564981    2000 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	I0310 21:09:43.566078    2000 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	I0310 21:09:43.578650    2000 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	I0310 21:09:43.651894    2000 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	I0310 21:09:43.881875    2000 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	I0310 21:09:44.098094    2000 start.go:270] post-start completed in 1.8090867s
	I0310 21:09:44.098955    2000 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0310 21:09:44.113999    2000 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210310201637-6496
	I0310 21:09:44.698606    2000 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55115 SSHKeyPath:C:\Users\jenkins\.minikube\machines\running-upgrade-20210310201637-6496\id_rsa Username:docker}
	I0310 21:09:45.102060    2000 ssh_runner.go:189] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.0031064s)
	I0310 21:09:45.102398    2000 fix.go:57] fixHost completed within 1m51.4351579s
	I0310 21:09:45.102398    2000 start.go:80] releasing machines lock for "running-upgrade-20210310201637-6496", held for 1m51.435633s
	I0310 21:09:45.116624    2000 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-20210310201637-6496
	I0310 21:09:45.751234    2000 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0310 21:09:45.770427    2000 ssh_runner.go:149] Run: systemctl --version
	I0310 21:09:45.773519    2000 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210310201637-6496
	I0310 21:09:45.780838    2000 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210310201637-6496
	I0310 21:09:46.400462    2000 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55115 SSHKeyPath:C:\Users\jenkins\.minikube\machines\running-upgrade-20210310201637-6496\id_rsa Username:docker}
	I0310 21:09:46.413012    2000 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55115 SSHKeyPath:C:\Users\jenkins\.minikube\machines\running-upgrade-20210310201637-6496\id_rsa Username:docker}
	I0310 21:09:47.069397    2000 ssh_runner.go:189] Completed: systemctl --version: (1.2989714s)
	I0310 21:09:47.082453    2000 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0310 21:09:47.495757    2000 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 21:09:47.496457    2000 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.7452257s)
	I0310 21:09:47.642059    2000 cruntime.go:206] skipping containerd shutdown because we are bound to it
	I0310 21:09:47.655117    2000 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0310 21:09:47.788118    2000 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0310 21:09:48.169719    2000 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 21:09:48.259369    2000 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0310 21:09:49.452400    2000 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.1924291s)
	I0310 21:09:49.466415    2000 ssh_runner.go:149] Run: sudo systemctl start docker
	I0310 21:09:49.600435    2000 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0310 21:09:50.845309    2000 ssh_runner.go:189] Completed: docker version --format {{.Server.Version}}: (1.2448758s)
	I0310 21:09:50.849303    2000 out.go:150] * Preparing Kubernetes v1.18.0 on Docker 19.03.2 ...
	I0310 21:09:50.856297    2000 cli_runner.go:115] Run: docker exec -t running-upgrade-20210310201637-6496 dig +short host.docker.internal
	I0310 21:09:52.392617    2000 cli_runner.go:168] Completed: docker exec -t running-upgrade-20210310201637-6496 dig +short host.docker.internal: (1.5363232s)
	I0310 21:09:52.393262    2000 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	I0310 21:09:52.417394    2000 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0310 21:09:52.459664    2000 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0310 21:09:52.564869    2000 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" running-upgrade-20210310201637-6496
	I0310 21:09:53.179865    2000 preload.go:97] Checking if preload exists for k8s version v1.18.0 and runtime docker
	W0310 21:09:53.222897    2000 preload.go:118] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v9-v1.18.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I0310 21:09:53.231453    2000 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 21:09:53.989414    2000 docker.go:423] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-20210106215525-1984
	minikube-local-cache-test:functional-20210107002220-9088
	minikube-local-cache-test:functional-20210107190945-8748
	k8s.gcr.io/kube-proxy:v1.18.0
	k8s.gcr.io/kube-apiserver:v1.18.0
	k8s.gcr.io/kube-scheduler:v1.18.0
	k8s.gcr.io/kube-controller-manager:v1.18.0
	kubernetesui/dashboard:v2.0.0-rc6
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	kindest/kindnetd:0.5.3
	k8s.gcr.io/etcd:3.4.3-0
	kubernetesui/metrics-scraper:v1.0.2
	gcr.io/k8s-minikube/storage-provisioner:v1.8.1
	
	-- /stdout --
	I0310 21:09:53.989414    2000 docker.go:429] gcr.io/k8s-minikube/storage-provisioner:v4 wasn't preloaded
	I0310 21:09:53.989414    2000 cache_images.go:76] LoadImages start: [k8s.gcr.io/kube-proxy:v1.18.0 k8s.gcr.io/kube-scheduler:v1.18.0 k8s.gcr.io/kube-controller-manager:v1.18.0 k8s.gcr.io/kube-apiserver:v1.18.0 k8s.gcr.io/coredns:1.6.7 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/pause:3.2 gcr.io/k8s-minikube/storage-provisioner:v4 docker.io/kubernetesui/dashboard:v2.1.0 docker.io/kubernetesui/metrics-scraper:v1.0.4]
	I0310 21:09:54.048666    2000 image.go:168] retrieving image: k8s.gcr.io/pause:3.2
	I0310 21:09:54.073277    2000 image.go:168] retrieving image: k8s.gcr.io/kube-proxy:v1.18.0
	I0310 21:09:54.089923    2000 image.go:168] retrieving image: k8s.gcr.io/kube-controller-manager:v1.18.0
	I0310 21:09:54.095267    2000 image.go:168] retrieving image: k8s.gcr.io/kube-scheduler:v1.18.0
	I0310 21:09:54.098719    2000 image.go:168] retrieving image: k8s.gcr.io/coredns:1.6.7
	I0310 21:09:54.110527    2000 image.go:168] retrieving image: docker.io/kubernetesui/dashboard:v2.1.0
	I0310 21:09:54.115754    2000 image.go:176] daemon lookup for k8s.gcr.io/pause:3.2: Error response from daemon: reference does not exist
	I0310 21:09:54.155707    2000 image.go:176] daemon lookup for k8s.gcr.io/coredns:1.6.7: Error response from daemon: reference does not exist
	I0310 21:09:54.156605    2000 image.go:168] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0310 21:09:54.186305    2000 image.go:168] retrieving image: k8s.gcr.io/kube-apiserver:v1.18.0
	I0310 21:09:54.208775    2000 image.go:168] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v4
	I0310 21:09:54.229701    2000 image.go:176] daemon lookup for k8s.gcr.io/kube-proxy:v1.18.0: Error response from daemon: reference does not exist
	I0310 21:09:54.249925    2000 image.go:176] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error response from daemon: reference does not exist
	I0310 21:09:54.250153    2000 image.go:176] daemon lookup for k8s.gcr.io/kube-scheduler:v1.18.0: Error response from daemon: reference does not exist
	I0310 21:09:54.251049    2000 image.go:176] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.18.0: Error response from daemon: reference does not exist
	I0310 21:09:54.268008    2000 image.go:176] daemon lookup for docker.io/kubernetesui/dashboard:v2.1.0: Error response from daemon: reference does not exist
	I0310 21:09:54.271589    2000 image.go:168] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0310 21:09:54.290161    2000 image.go:176] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v4: Error response from daemon: reference does not exist
	I0310 21:09:54.298134    2000 image.go:176] daemon lookup for k8s.gcr.io/kube-apiserver:v1.18.0: Error response from daemon: reference does not exist
	W0310 21:09:54.301872    2000 image.go:185] authn lookup for k8s.gcr.io/pause:3.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 21:09:54.324727    2000 image.go:176] daemon lookup for docker.io/kubernetesui/metrics-scraper:v1.0.4: Error response from daemon: reference does not exist
	W0310 21:09:54.389135    2000 image.go:185] authn lookup for k8s.gcr.io/coredns:1.6.7 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0310 21:09:54.452462    2000 image.go:185] authn lookup for k8s.gcr.io/kube-proxy:v1.18.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0310 21:09:54.483478    2000 image.go:185] authn lookup for k8s.gcr.io/kube-scheduler:v1.18.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0310 21:09:54.500627    2000 image.go:185] authn lookup for k8s.gcr.io/kube-controller-manager:v1.18.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 21:09:54.500627    2000 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} k8s.gcr.io/pause:3.2
	W0310 21:09:54.516058    2000 image.go:185] authn lookup for docker.io/kubernetesui/dashboard:v2.1.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0310 21:09:54.529989    2000 image.go:185] authn lookup for docker.io/kubernetesui/metrics-scraper:v1.0.4 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0310 21:09:54.542812    2000 image.go:185] authn lookup for k8s.gcr.io/kube-apiserver:v1.18.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0310 21:09:54.553780    2000 image.go:185] authn lookup for k8s.gcr.io/etcd:3.4.3-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0310 21:09:54.560028    2000 image.go:185] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v4 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 21:09:54.577568    2000 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} k8s.gcr.io/coredns:1.6.7
	I0310 21:09:54.649025    2000 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.18.0
	I0310 21:09:54.669032    2000 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.18.0
	I0310 21:09:54.677713    2000 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.18.0
	I0310 21:09:54.736740    2000 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.18.0
	I0310 21:09:54.754419    2000 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.3-0
	I0310 21:09:54.869204    2000 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v4
	I0310 21:09:54.912034    2000 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} docker.io/kubernetesui/dashboard:v2.1.0
	I0310 21:09:54.912962    2000 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0310 21:09:59.262904    2000 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} k8s.gcr.io/pause:3.2: (4.7620119s)
	I0310 21:09:59.964442    2000 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} k8s.gcr.io/coredns:1.6.7: (5.3868825s)
	I0310 21:10:01.279400    2000 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.18.0: (6.6100165s)
	I0310 21:10:01.366567    2000 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.18.0: (6.7175527s)
	I0310 21:10:01.547300    2000 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.18.0: (6.8695986s)
	I0310 21:10:01.955424    2000 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.18.0: (7.2180678s)
	I0310 21:10:01.955939    2000 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.3-0: (7.2015311s)
	I0310 21:10:01.986815    2000 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v4: (7.1176224s)
	I0310 21:10:01.987173    2000 cache_images.go:104] "gcr.io/k8s-minikube/storage-provisioner:v4" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v4" does not exist at hash "85069258b98ac4e9f9fbd51dfba3b4212d8cd1d79df7d2ecff44b1319ed641cb" in container runtime
	I0310 21:10:01.987408    2000 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner:v4 -> C:\Users\jenkins\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v4
	I0310 21:10:01.987408    2000 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v4
	I0310 21:10:02.011985    2000 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v4
	I0310 21:10:02.109572    2000 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} docker.io/kubernetesui/metrics-scraper:v1.0.4: (7.1962736s)
	I0310 21:10:02.109765    2000 cache_images.go:104] "docker.io/kubernetesui/metrics-scraper:v1.0.4" needs transfer: "docker.io/kubernetesui/metrics-scraper:v1.0.4" does not exist at hash "86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4" in container runtime
	I0310 21:10:02.109977    2000 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper:v1.0.4 -> C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.4
	I0310 21:10:02.109977    2000 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.4
	I0310 21:10:02.110182    2000 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} docker.io/kubernetesui/dashboard:v2.1.0: (7.1979544s)
	I0310 21:10:02.110182    2000 cache_images.go:104] "docker.io/kubernetesui/dashboard:v2.1.0" needs transfer: "docker.io/kubernetesui/dashboard:v2.1.0" does not exist at hash "9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db" in container runtime
	I0310 21:10:02.110182    2000 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\dashboard:v2.1.0 -> C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.1.0
	I0310 21:10:02.110182    2000 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.1.0
	I0310 21:10:02.122620    2000 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/dashboard_v2.1.0
	I0310 21:10:02.130009    2000 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/metrics-scraper_v1.0.4
	I0310 21:10:02.261478    2000 ssh_runner.go:306] existence check for /var/lib/minikube/images/storage-provisioner_v4: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v4': No such file or directory
	I0310 21:10:02.262867    2000 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v4 --> /var/lib/minikube/images/storage-provisioner_v4 (8882688 bytes)
	I0310 21:10:02.263054    2000 ssh_runner.go:306] existence check for /var/lib/minikube/images/metrics-scraper_v1.0.4: stat -c "%s %y" /var/lib/minikube/images/metrics-scraper_v1.0.4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/metrics-scraper_v1.0.4': No such file or directory
	I0310 21:10:02.262867    2000 ssh_runner.go:306] existence check for /var/lib/minikube/images/dashboard_v2.1.0: stat -c "%s %y" /var/lib/minikube/images/dashboard_v2.1.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/dashboard_v2.1.0': No such file or directory
	I0310 21:10:02.263054    2000 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.4 --> /var/lib/minikube/images/metrics-scraper_v1.0.4 (16022528 bytes)
	I0310 21:10:02.263054    2000 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.1.0 --> /var/lib/minikube/images/dashboard_v2.1.0 (67993600 bytes)
	I0310 21:10:08.245715    2000 docker.go:167] Loading image: /var/lib/minikube/images/storage-provisioner_v4
	I0310 21:10:08.247818    2000 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/storage-provisioner_v4
	I0310 21:10:34.032878    2000 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/storage-provisioner_v4: (25.7850991s)
	I0310 21:10:34.033353    2000 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v4 from cache
	I0310 21:10:34.033353    2000 docker.go:167] Loading image: /var/lib/minikube/images/metrics-scraper_v1.0.4
	I0310 21:10:34.042147    2000 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/metrics-scraper_v1.0.4
	I0310 21:10:55.131363    2000 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/metrics-scraper_v1.0.4: (21.0892476s)
	I0310 21:10:55.131785    2000 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.4 from cache
	I0310 21:10:55.131785    2000 docker.go:167] Loading image: /var/lib/minikube/images/dashboard_v2.1.0
	I0310 21:10:55.140031    2000 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/dashboard_v2.1.0

                                                
                                                
** /stderr **
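The long pause between 21:08:17 and 21:09:42 in the log above is the provisioner's idempotent unit-file update: it writes the candidate config to `docker.service.new`, then only swaps it in and restarts the daemon when `diff` reports a change. A minimal sketch of that compare-and-swap idiom, using plain local files as stand-ins for `/lib/systemd/system/docker.service` (the filenames and echoed messages here are illustrative, not minikube's actual output):

```shell
# Stand-in for /lib/systemd/system/docker.service and its staged replacement.
unit=./docker.service
printf '%s\n' '[Service]' 'Type=notify' > "$unit"
printf '%s\n' '[Service]' 'Type=notify' 'Restart=on-failure' > "$unit.new"

if diff -u "$unit" "$unit.new" >/dev/null; then
  # Identical: discard the staged copy, skip the expensive daemon restart.
  rm -f "$unit.new"
  echo "unchanged"
else
  # Differs: install the staged unit. The real provisioner follows this with
  # `systemctl daemon-reload` and `systemctl restart docker`, which is the
  # ~85s gap visible in the log.
  mv "$unit.new" "$unit"
  echo "updated"
fi
```

Because the swap only happens on a detected diff, re-running provisioning against an already-correct unit file is a cheap no-op rather than a daemon restart.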
version_upgrade_test.go:126: upgrade from v1.9.0 to HEAD failed: out/minikube-windows-amd64.exe start -p running-upgrade-20210310201637-6496 --memory=2200 --alsologtostderr -v=1 --driver=docker: exit status 1

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
panic.go:617: *** TestRunningBinaryUpgrade FAILED at 2021-03-10 21:11:37.8735153 +0000 GMT m=+7637.580728801
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:226: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
helpers_test.go:227: (dbg) Run:  docker inspect running-upgrade-20210310201637-6496

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
helpers_test.go:231: (dbg) docker inspect running-upgrade-20210310201637-6496:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "766f9c8a27cc8573079db2169509364c205bce1780f121653a32deae9e245128",
	        "Created": "2021-03-10T20:18:51.4198547Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 129874,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-03-10T20:18:54.1250773Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/766f9c8a27cc8573079db2169509364c205bce1780f121653a32deae9e245128/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/766f9c8a27cc8573079db2169509364c205bce1780f121653a32deae9e245128/hostname",
	        "HostsPath": "/var/lib/docker/containers/766f9c8a27cc8573079db2169509364c205bce1780f121653a32deae9e245128/hosts",
	        "LogPath": "/var/lib/docker/containers/766f9c8a27cc8573079db2169509364c205bce1780f121653a32deae9e245128/766f9c8a27cc8573079db2169509364c205bce1780f121653a32deae9e245128-json.log",
	        "Name": "/running-upgrade-20210310201637-6496",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": [
	            "55f6fa518e2f7530aeb691e65ec5084e7de4433286b6457a7601d3251851ebe4"
	        ],
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-20210310201637-6496:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e7a45f9593380e7d207acca499298c601141ad2aa67490fd314bf3b69d2d9739-init/diff:/var/lib/docker/overlay2/2729176fde680c6cb08937895942a79a6e1c51f76a7e3d5a84446500dc7b8558/diff:/var/lib/docker/overlay2/7c9c28574388f9d00d1444994b9fac28dd5229be9db522b8fe1138f91c318581/diff:/var/lib/docker/overlay2/bfe3c6f5e76dfddfd1396f0c4d324e6dc9b79481460c422e870f3aed43ae3cf3/diff:/var/lib/docker/overlay2/30be4e7f7ffba8ec1459948336cdb1dca09bfe65ab6553e80ef07bdeb527ec2a/diff:/var/lib/docker/overlay2/36b90e9a4385265e460c6307dbdd0daa90a2fdeaacfe20bbef2bad23b55b4d2e/diff:/var/lib/docker/overlay2/00989c545c01fb65b678eeafb78d6a826c0ea44eac1434d39208fc61f165e39c/diff:/var/lib/docker/overlay2/bec75261a3574da4507178c8c4be6e1fe2fe025adb8409d6135c2a51e7b055c3/diff:/var/lib/docker/overlay2/3e7fcf6b9f34e816ba41539a5349e02d557546a72903bae043a02043363d438c/diff:/var/lib/docker/overlay2/acb3a94565839ba4000a418d126e0460fecbbee87faff0e780621158cb84d9f9/diff:/var/lib/docker/overlay2/852e9213d6c3eccb639a71fc37949c07c1a61da4dd20607d61913aff9225a6e0/diff:/var/lib/docker/overlay2/de25f76aa2370f98adb37e218d204f48446a17e2088c78db6acb5861eb48ed50/diff:/var/lib/docker/overlay2/702c953985963f07f226317b76356bf7129c55d6a6e875db66ae6a265b517fa2/diff:/var/lib/docker/overlay2/a8469dfed234d672c3c9c64eceea846bd579dd70f971f5b8ab0c054ceb0cf632/diff:/var/lib/docker/overlay2/1cefd8009a19e5ea459a4780caf76e5c64c903502e8f88ba66d3ce31345a23bc/diff:/var/lib/docker/overlay2/899a9e14d45cfb2e470d1b6815c0f2fed6fe21debecf41c8d4c8722c4b747539/diff:/var/lib/docker/overlay2/c31da9bfc2aec7af749ca6c7314ebccfba8c3ea4bb448085be52da55bf8e9d6c/diff:/var/lib/docker/overlay2/bd4bc4fdf15943a4aaa627f39b2cb34c636430c6c59b009f1d0f22c237e054d8/diff:/var/lib/docker/overlay2/8d04bed8e06519ceda02f8b604c41bd7614c4b7ee962d0d857f125dd1b87ad9b/diff:/var/lib/docker/overlay2/5d99a9bc3a0f4170e7dbfad9ac526ced98e84efe73cf5ac92c0f2813c82eae3f/diff:/var/lib/docker/overlay2/6d5b862d76f0de854698cdcfc250f5ee5afacb30fb276e4d37bf82813c1f7cf5/diff:/var/lib/docker/overlay2/2b5c29e75384b18c2a6f85cb026a640ee3ba7885ae74bdde2722709a474a19b6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e7a45f9593380e7d207acca499298c601141ad2aa67490fd314bf3b69d2d9739/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e7a45f9593380e7d207acca499298c601141ad2aa67490fd314bf3b69d2d9739/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e7a45f9593380e7d207acca499298c601141ad2aa67490fd314bf3b69d2d9739/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-20210310201637-6496",
	                "Source": "/var/lib/docker/volumes/running-upgrade-20210310201637-6496/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-20210310201637-6496",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-20210310201637-6496",
	                "name.minikube.sigs.k8s.io": "running-upgrade-20210310201637-6496",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8896c80cbad4aefdbaf31ab41f4408110d3740f73204a89bfc773a95a413f1cb",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55115"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55114"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55113"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/8896c80cbad4",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "05695dac8b8c272b79a787f900f9f138825907dd586ccd9d4bf22039e02e8f4d",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.8",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:08",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "08a9fabe5ecafdd8ee96dd0a14d1906037e82623ac7365c62552213da5f59cf7",
	                    "EndpointID": "05695dac8b8c272b79a787f900f9f138825907dd586ccd9d4bf22039e02e8f4d",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.8",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:08",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
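For triage, the forwarded host ports minikube depends on live in the `NetworkSettings.Ports` map of the inspect dump above (22→55115, 2376→55114, 8443→55113). A minimal sketch of pulling the apiserver port out of a saved dump — the file path and the trimmed-down JSON are illustrative stand-ins, not artifacts of this test run:

```shell
# Sketch: extract the forwarded apiserver port from a saved `docker inspect`
# dump. The path and trimmed JSON below are assumptions; the port values
# mirror the dump above.
cat > /tmp/inspect-ports.json <<'EOF'
"22/tcp": [{"HostIp": "127.0.0.1", "HostPort": "55115"}],
"8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "55113"}]
EOF
# Keep only the 8443/tcp entry, then strip down to the bare port number.
apiserver_port=$(grep '8443/tcp' /tmp/inspect-ports.json \
  | grep -o '"HostPort": "[0-9]*"' | grep -o '[0-9]*')
echo "apiserver forwarded to 127.0.0.1:${apiserver_port}"
```

On a live container the same mapping can be read directly with `docker inspect` plus a `--format` template instead of grep.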
helpers_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p running-upgrade-20210310201637-6496 -n running-upgrade-20210310201637-6496

=== CONT  TestRunningBinaryUpgrade
helpers_test.go:235: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p running-upgrade-20210310201637-6496 -n running-upgrade-20210310201637-6496: exit status 6 (5.778161s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0310 21:11:44.394645   18636 status.go:396] kubeconfig endpoint: extract IP: "running-upgrade-20210310201637-6496" does not appear in C:\Users\jenkins/.kube/config

** /stderr **
helpers_test.go:235: status error: exit status 6 (may be ok)
helpers_test.go:237: "running-upgrade-20210310201637-6496" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
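The `status.go:396` error above reduces to a string lookup: the profile name is absent from the kubeconfig, which still points at the stale `minikube-vm` context. A rough stand-in for that check — the kubeconfig path and contents here are invented for illustration; the real check reads `~/.kube/config`:

```shell
# Sketch of the "does the profile appear in kubeconfig" check that fails
# above. File path and contents are made up for illustration.
cat > /tmp/kubeconfig-sketch <<'EOF'
contexts:
- context:
    cluster: minikube-vm
  name: minikube-vm
EOF
profile=running-upgrade-20210310201637-6496
if grep -q "$profile" /tmp/kubeconfig-sketch; then
  echo "endpoint found for $profile"
else
  echo "\"$profile\" does not appear in kubeconfig"
fi
```

Running `minikube update-context`, as the warning in the stdout above suggests, is what rewrites the kubeconfig entry to match the current profile.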
helpers_test.go:171: Cleaning up "running-upgrade-20210310201637-6496" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-20210310201637-6496

=== CONT  TestRunningBinaryUpgrade
helpers_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-20210310201637-6496: (26.6694643s)
--- FAIL: TestRunningBinaryUpgrade (3334.18s)

TestKubernetesUpgrade (3548.65s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade


=== CONT  TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20210310201637-6496 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20210310201637-6496 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker: exit status 109 (23m13.8024393s)

-- stdout --
	* [kubernetes-upgrade-20210310201637-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - MINIKUBE_LOCATION=10722
	* Using the docker driver based on user configuration
	* Starting control plane node kubernetes-upgrade-20210310201637-6496 in cluster kubernetes-upgrade-20210310201637-6496
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.14.0 on Docker 20.10.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0310 20:16:38.078641    8464 out.go:239] Setting OutFile to fd 2696 ...
	I0310 20:16:38.080650    8464 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 20:16:38.080650    8464 out.go:252] Setting ErrFile to fd 2936...
	I0310 20:16:38.080650    8464 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 20:16:38.142649    8464 out.go:246] Setting JSON to false
	I0310 20:16:38.147654    8464 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":32864,"bootTime":1615374534,"procs":116,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0310 20:16:38.147654    8464 start.go:116] gopshost.Virtualization returned error: not implemented yet
	I0310 20:16:38.153735    8464 out.go:129] * [kubernetes-upgrade-20210310201637-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0310 20:16:38.160650    8464 out.go:129]   - MINIKUBE_LOCATION=10722
	I0310 20:16:38.165656    8464 driver.go:323] Setting default libvirt URI to qemu:///system
	I0310 20:16:38.892045    8464 docker.go:119] docker version: linux-20.10.2
	I0310 20:16:38.918172    8464 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 20:16:40.107213    8464 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.1890453s)
	I0310 20:16:40.108235    8464 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:62 SystemTime:2021-03-10 20:16:39.5686602 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://ind
ex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[]
ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 20:16:40.111677    8464 out.go:129] * Using the docker driver based on user configuration
	I0310 20:16:40.111677    8464 start.go:276] selected driver: docker
	I0310 20:16:40.111677    8464 start.go:718] validating driver "docker" against <nil>
	I0310 20:16:40.111926    8464 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0310 20:16:41.299316    8464 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 20:16:42.520659    8464 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.2213473s)
	I0310 20:16:42.520659    8464 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:56 SystemTime:2021-03-10 20:16:41.9508533 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://ind
ex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[]
ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 20:16:42.521673    8464 start_flags.go:253] no existing cluster config was found, will generate one from the flags 
	I0310 20:16:42.522675    8464 start_flags.go:699] Wait components to verify : map[apiserver:true system_pods:true]
	I0310 20:16:42.522675    8464 cni.go:74] Creating CNI manager for ""
	I0310 20:16:42.522675    8464 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	I0310 20:16:42.522675    8464 start_flags.go:398] config:
	{Name:kubernetes-upgrade-20210310201637-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:kubernetes-upgrade-20210310201637-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 20:16:42.527669    8464 out.go:129] * Starting control plane node kubernetes-upgrade-20210310201637-6496 in cluster kubernetes-upgrade-20210310201637-6496
	I0310 20:16:43.435145    8464 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	I0310 20:16:43.435145    8464 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	I0310 20:16:43.435750    8464 preload.go:97] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I0310 20:16:43.435750    8464 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4
	I0310 20:16:43.435750    8464 cache.go:54] Caching tarball of preloaded images
	I0310 20:16:43.436463    8464 preload.go:131] Found C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0310 20:16:43.436463    8464 cache.go:57] Finished verifying existence of preloaded tar for  v1.14.0 on docker
	I0310 20:16:43.437217    8464 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\config.json ...
	I0310 20:16:43.437611    8464 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\config.json: {Name:mk7b7988810f125396da5328a2570942e57f9a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:16:43.458664    8464 cache.go:185] Successfully downloaded all kic artifacts
	I0310 20:16:43.459881    8464 start.go:313] acquiring machines lock for kubernetes-upgrade-20210310201637-6496: {Name:mkf139d86564eb552ba6ebdc1acdb4bdc8579ad8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:16:43.460308    8464 start.go:317] acquired machines lock for "kubernetes-upgrade-20210310201637-6496" in 427.5µs
	I0310 20:16:43.460519    8464 start.go:89] Provisioning new machine with config: &{Name:kubernetes-upgrade-20210310201637-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:kubernetes-upgrade-20210310201637-6496 Namespace:default APIServerName:mi
nikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I0310 20:16:43.460693    8464 start.go:126] createHost starting for "" (driver="docker")
	I0310 20:16:43.467593    8464 out.go:150] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0310 20:16:43.468478    8464 start.go:160] libmachine.API.Create for "kubernetes-upgrade-20210310201637-6496" (driver="docker")
	I0310 20:16:43.468825    8464 client.go:168] LocalClient.Create starting
	I0310 20:16:43.469456    8464 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\ca.pem
	I0310 20:16:43.469869    8464 main.go:121] libmachine: Decoding PEM data...
	I0310 20:16:43.470082    8464 main.go:121] libmachine: Parsing certificate...
	I0310 20:16:43.470605    8464 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\cert.pem
	I0310 20:16:43.471165    8464 main.go:121] libmachine: Decoding PEM data...
	I0310 20:16:43.471165    8464 main.go:121] libmachine: Parsing certificate...
	I0310 20:16:43.504026    8464 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20210310201637-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0310 20:16:44.294659    8464 cli_runner.go:162] docker network inspect kubernetes-upgrade-20210310201637-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0310 20:16:44.308610    8464 network_create.go:240] running [docker network inspect kubernetes-upgrade-20210310201637-6496] to gather additional debugging logs...
	I0310 20:16:44.308610    8464 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20210310201637-6496
	W0310 20:16:45.233240    8464 cli_runner.go:162] docker network inspect kubernetes-upgrade-20210310201637-6496 returned with exit code 1
	I0310 20:16:45.233423    8464 network_create.go:243] error running [docker network inspect kubernetes-upgrade-20210310201637-6496]: docker network inspect kubernetes-upgrade-20210310201637-6496: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-20210310201637-6496
	I0310 20:16:45.233598    8464 network_create.go:245] output of [docker network inspect kubernetes-upgrade-20210310201637-6496]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-20210310201637-6496
	
	** /stderr **
	I0310 20:16:45.257180    8464 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0310 20:16:46.096302    8464 network.go:193] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0310 20:16:46.096302    8464 network_create.go:91] attempt to create network 192.168.49.0/24 with subnet: kubernetes-upgrade-20210310201637-6496 and gateway 192.168.49.1 and MTU of 1500 ...
	I0310 20:16:46.110148    8464 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20210310201637-6496
	W0310 20:16:46.826312    8464 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20210310201637-6496 returned with exit code 1
	W0310 20:16:46.826312    8464 out.go:191] ! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts
	! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts
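The warning above ("failed to create network after 20 attempts") implies a bounded retry loop around the `docker network create` call. A generic sketch of that pattern (a hypothetical helper, not minikube's actual code):

```python
def create_with_retries(create, attempts=20):
    """Run a creation callback up to `attempts` times and surface the last
    error in a summary message. A generic sketch of the bounded-retry
    pattern implied by the warning above; not minikube's actual code."""
    last_err = None
    for _ in range(attempts):
        try:
            return create()
        except RuntimeError as err:
            last_err = err              # remember the most recent failure
    raise RuntimeError(
        f"failed to create network after {attempts} attempts") from last_err
```

Chaining the final error with `from last_err` keeps the underlying cause (here, the non-zero exit of `docker network create`) attached to the summary message.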
	I0310 20:16:46.847325    8464 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0310 20:16:48.267691    8464 cli_runner.go:168] Completed: docker ps -a --format {{.Names}}: (1.4203713s)
	I0310 20:16:48.288608    8464 cli_runner.go:115] Run: docker volume create kubernetes-upgrade-20210310201637-6496 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20210310201637-6496 --label created_by.minikube.sigs.k8s.io=true
	I0310 20:16:49.029891    8464 oci.go:102] Successfully created a docker volume kubernetes-upgrade-20210310201637-6496
	I0310 20:16:49.048398    8464 cli_runner.go:115] Run: docker run --rm --name kubernetes-upgrade-20210310201637-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20210310201637-6496 --entrypoint /usr/bin/test -v kubernetes-upgrade-20210310201637-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib
	I0310 20:16:54.050864    8464 cli_runner.go:168] Completed: docker run --rm --name kubernetes-upgrade-20210310201637-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20210310201637-6496 --entrypoint /usr/bin/test -v kubernetes-upgrade-20210310201637-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib: (5.0024831s)
	I0310 20:16:54.051297    8464 oci.go:106] Successfully prepared a docker volume kubernetes-upgrade-20210310201637-6496
	I0310 20:16:54.051429    8464 preload.go:97] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I0310 20:16:54.051721    8464 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4
	I0310 20:16:54.052275    8464 kic.go:175] Starting extracting preloaded images to volume ...
	I0310 20:16:54.054942    8464 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20210310201637-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir
	I0310 20:16:54.065182    8464 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	W0310 20:16:54.968907    8464 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20210310201637-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I0310 20:16:54.969535    8464 kic.go:182] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20210310201637-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: System.Exception: The notification platform is unavailable.
	(binary .NET exception serialization header omitted; unprintable bytes stripped)
	
	���?   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)
	   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.WPF\PromptShareDirectory.cs:line 53
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__8.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 95
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 55
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.HttpApi\Controllers\FilesharingController.cs:line 21
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()
	CreateToastNotifier
	Windows.UI, Version=255.255.255.255, Culture=neutral, PublicKeyToken=null, ContentType=WindowsRuntime
	Windows.UI.Notifications.ToastNotificationManager
	Windows.UI.Notifications.ToastNotifier CreateToastNotifier(System.String)
	RestrictedDescription: The notification platform is unavailable.
	(remaining binary .NET exception serialization omitted; unprintable bytes stripped)
	See 'docker run --help'.
	I0310 20:16:55.344787    8464 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.2796091s)
	I0310 20:16:55.345665    8464 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:49 SystemTime:2021-03-10 20:16:54.793648 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 20:16:55.357895    8464 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0310 20:16:56.510906    8464 cli_runner.go:168] Completed: docker info --format "'{{json .SecurityOptions}}'": (1.1530146s)
	I0310 20:16:56.515517    8464 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-20210310201637-6496 --name kubernetes-upgrade-20210310201637-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20210310201637-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-20210310201637-6496 --volume kubernetes-upgrade-20210310201637-6496:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
	I0310 20:17:03.893669    8464 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-20210310201637-6496 --name kubernetes-upgrade-20210310201637-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20210310201637-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-20210310201637-6496 --volume kubernetes-upgrade-20210310201637-6496:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e: (7.3781761s)
	I0310 20:17:03.900870    8464 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210310201637-6496 --format={{.State.Running}}
	I0310 20:17:04.652356    8464 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210310201637-6496 --format={{.State.Status}}
	I0310 20:17:05.340601    8464 cli_runner.go:115] Run: docker exec kubernetes-upgrade-20210310201637-6496 stat /var/lib/dpkg/alternatives/iptables
	I0310 20:17:07.230850    8464 cli_runner.go:168] Completed: docker exec kubernetes-upgrade-20210310201637-6496 stat /var/lib/dpkg/alternatives/iptables: (1.8902558s)
	I0310 20:17:07.231838    8464 oci.go:278] the created container "kubernetes-upgrade-20210310201637-6496" has a running status.
	I0310 20:17:07.231838    8464 kic.go:206] Creating ssh key for kic: C:\Users\jenkins\.minikube\machines\kubernetes-upgrade-20210310201637-6496\id_rsa...
	I0310 20:17:07.710525    8464 kic_runner.go:188] docker (temp): C:\Users\jenkins\.minikube\machines\kubernetes-upgrade-20210310201637-6496\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0310 20:17:09.624753    8464 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210310201637-6496 --format={{.State.Status}}
	I0310 20:17:10.362066    8464 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0310 20:17:10.362842    8464 kic_runner.go:115] Args: [docker exec --privileged kubernetes-upgrade-20210310201637-6496 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0310 20:17:12.079836    8464 kic_runner.go:124] Done: [docker exec --privileged kubernetes-upgrade-20210310201637-6496 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.7169989s)
	I0310 20:17:12.084845    8464 kic.go:240] ensuring only current user has permissions to key file located at : C:\Users\jenkins\.minikube\machines\kubernetes-upgrade-20210310201637-6496\id_rsa...
	I0310 20:17:13.039382    8464 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210310201637-6496 --format={{.State.Status}}
	I0310 20:17:13.773562    8464 machine.go:88] provisioning docker machine ...
	I0310 20:17:13.774015    8464 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20210310201637-6496"
	I0310 20:17:13.793373    8464 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	I0310 20:17:14.425947    8464 main.go:121] libmachine: Using SSH client type: native
	I0310 20:17:14.440484    8464 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55111 <nil> <nil>}
	I0310 20:17:14.440484    8464 main.go:121] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-20210310201637-6496 && echo "kubernetes-upgrade-20210310201637-6496" | sudo tee /etc/hostname
	I0310 20:17:14.448287    8464 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0310 20:17:18.671959    8464 main.go:121] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20210310201637-6496
	
	I0310 20:17:18.679264    8464 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	I0310 20:17:19.325950    8464 main.go:121] libmachine: Using SSH client type: native
	I0310 20:17:19.326911    8464 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55111 <nil> <nil>}
	I0310 20:17:19.327428    8464 main.go:121] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-20210310201637-6496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20210310201637-6496/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-20210310201637-6496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0310 20:17:20.129340    8464 main.go:121] libmachine: SSH cmd err, output: <nil>: 
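The SSH snippet above ensures /etc/hosts maps 127.0.1.1 to the node hostname: if the hostname is already present it does nothing, otherwise it rewrites an existing 127.0.1.1 entry or appends a new one. The same branching, sketched in Python over the file contents (a hypothetical helper mirroring the grep/sed logic, not minikube code):

```python
import re

def ensure_hostname(hosts: str, name: str) -> str:
    """Mirror the shell snippet above: if `name` already appears at the end
    of some line, leave the file alone; otherwise rewrite an existing
    127.0.1.1 line, or append one (operates on the contents as a string)."""
    if re.search(r"^.*\s" + re.escape(name) + r"$", hosts, re.M):
        return hosts                                    # already mapped
    if re.search(r"^127\.0\.1\.1\s.*$", hosts, re.M):   # rewrite existing entry
        return re.sub(r"^127\.0\.1\.1\s.*$", f"127.0.1.1 {name}",
                      hosts, flags=re.M)
    return hosts.rstrip("\n") + f"\n127.0.1.1 {name}\n"  # append new entry
```

The three return paths correspond one-to-one to the `grep -xq` guard, the `sed -i` branch, and the `tee -a` branch of the shell command.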
	I0310 20:17:20.129799    8464 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	I0310 20:17:20.129799    8464 ubuntu.go:177] setting up certificates
	I0310 20:17:20.129799    8464 provision.go:83] configureAuth start
	I0310 20:17:20.141915    8464 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20210310201637-6496
	I0310 20:17:20.750890    8464 provision.go:137] copyHostCerts
	I0310 20:17:20.751716    8464 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	I0310 20:17:20.751716    8464 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	I0310 20:17:20.752040    8464 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	I0310 20:17:20.756368    8464 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	I0310 20:17:20.756724    8464 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	I0310 20:17:20.757228    8464 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	I0310 20:17:20.760882    8464 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	I0310 20:17:20.761370    8464 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	I0310 20:17:20.762169    8464 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	I0310 20:17:20.764850    8464 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.kubernetes-upgrade-20210310201637-6496 san=[172.17.0.6 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-20210310201637-6496]
	I0310 20:17:21.041820    8464 provision.go:165] copyRemoteCerts
	I0310 20:17:21.076004    8464 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0310 20:17:21.085814    8464 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	I0310 20:17:21.769065    8464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55111 SSHKeyPath:C:\Users\jenkins\.minikube\machines\kubernetes-upgrade-20210310201637-6496\id_rsa Username:docker}
	I0310 20:17:22.381454    8464 ssh_runner.go:189] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.3054537s)
	I0310 20:17:22.382265    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0310 20:17:22.602474    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1269 bytes)
	I0310 20:17:22.770694    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0310 20:17:23.002728    8464 provision.go:86] duration metric: configureAuth took 2.8729376s
	I0310 20:17:23.002728    8464 ubuntu.go:193] setting minikube options for container-runtime
	I0310 20:17:23.013688    8464 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	I0310 20:17:23.693879    8464 main.go:121] libmachine: Using SSH client type: native
	I0310 20:17:23.694183    8464 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55111 <nil> <nil>}
	I0310 20:17:23.694183    8464 main.go:121] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0310 20:17:24.267471    8464 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0310 20:17:24.268872    8464 ubuntu.go:71] root file system type: overlay
	I0310 20:17:24.269262    8464 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	I0310 20:17:24.280357    8464 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	I0310 20:17:24.869490    8464 main.go:121] libmachine: Using SSH client type: native
	I0310 20:17:24.870046    8464 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55111 <nil> <nil>}
	I0310 20:17:24.870234    8464 main.go:121] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0310 20:17:25.655977    8464 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0310 20:17:25.657360    8464 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	I0310 20:17:26.303537    8464 main.go:121] libmachine: Using SSH client type: native
	I0310 20:17:26.303537    8464 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55111 <nil> <nil>}
	I0310 20:17:26.303537    8464 main.go:121] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0310 20:17:34.116228    8464 main.go:121] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-01-29 14:31:32.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-03-10 20:17:25.635110000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0310 20:17:34.116425    8464 machine.go:91] provisioned docker machine in 20.3426533s
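The docker.service update above uses a diff-then-swap idiom: write the new unit to a `.new` sidecar, diff it against the live file, and only when they differ move it into place and restart the service. The same idempotent-update pattern, sketched in Python (file handling only; the systemctl steps and file names are illustrative, not minikube code):

```python
import filecmp
import os

def update_if_changed(path: str, new_contents: str) -> bool:
    """Write new_contents to `<path>.new`, compare with the live file, and
    swap it into place only when they differ -- mirroring the
    `diff ... || { mv ...; restart; }` shell idiom above. Returns True when
    the file was replaced (i.e. a service restart would be needed)."""
    new_path = path + ".new"
    with open(new_path, "w") as f:
        f.write(new_contents)
    if os.path.exists(path) and filecmp.cmp(path, new_path, shallow=False):
        os.remove(new_path)     # contents identical: drop sidecar, no restart
        return False
    os.replace(new_path, path)  # atomic swap, like the `sudo mv`
    return True
```

Gating the restart on an actual content change is what makes repeated provisioning runs cheap: an unchanged unit file never bounces the Docker daemon.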
	I0310 20:17:34.116597    8464 client.go:171] LocalClient.Create took 50.6479348s
	I0310 20:17:34.117019    8464 start.go:168] duration metric: libmachine.API.Create for "kubernetes-upgrade-20210310201637-6496" took 50.648704s
	I0310 20:17:34.117173    8464 start.go:267] post-start starting for "kubernetes-upgrade-20210310201637-6496" (driver="docker")
	I0310 20:17:34.117173    8464 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0310 20:17:34.136671    8464 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0310 20:17:34.155331    8464 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	I0310 20:17:34.975414    8464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55111 SSHKeyPath:C:\Users\jenkins\.minikube\machines\kubernetes-upgrade-20210310201637-6496\id_rsa Username:docker}
	I0310 20:17:35.435917    8464 ssh_runner.go:189] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.2982667s)
	I0310 20:17:35.454686    8464 ssh_runner.go:149] Run: cat /etc/os-release
	I0310 20:17:35.494112    8464 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0310 20:17:35.494290    8464 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0310 20:17:35.494290    8464 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0310 20:17:35.494290    8464 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0310 20:17:35.494695    8464 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	I0310 20:17:35.495487    8464 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	I0310 20:17:35.501486    8464 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	I0310 20:17:35.503801    8464 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	I0310 20:17:35.514952    8464 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	I0310 20:17:35.606108    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	I0310 20:17:35.833904    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	I0310 20:17:36.070968    8464 start.go:270] post-start completed in 1.9538013s
	I0310 20:17:36.123488    8464 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20210310201637-6496
	I0310 20:17:36.967246    8464 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\config.json ...
	I0310 20:17:37.016871    8464 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0310 20:17:37.037252    8464 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	I0310 20:17:37.815198    8464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55111 SSHKeyPath:C:\Users\jenkins\.minikube\machines\kubernetes-upgrade-20210310201637-6496\id_rsa Username:docker}
	I0310 20:17:38.220696    8464 ssh_runner.go:189] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.202706s)
	I0310 20:17:38.220968    8464 start.go:129] duration metric: createHost completed in 54.7604508s
	I0310 20:17:38.220968    8464 start.go:80] releasing machines lock for "kubernetes-upgrade-20210310201637-6496", held for 54.7608354s
	I0310 20:17:38.223804    8464 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20210310201637-6496
	I0310 20:17:38.922521    8464 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0310 20:17:38.937968    8464 ssh_runner.go:149] Run: systemctl --version
	I0310 20:17:38.940991    8464 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	I0310 20:17:38.944962    8464 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	I0310 20:17:39.671974    8464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55111 SSHKeyPath:C:\Users\jenkins\.minikube\machines\kubernetes-upgrade-20210310201637-6496\id_rsa Username:docker}
	I0310 20:17:39.697381    8464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55111 SSHKeyPath:C:\Users\jenkins\.minikube\machines\kubernetes-upgrade-20210310201637-6496\id_rsa Username:docker}
	I0310 20:17:40.555676    8464 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.6331598s)
	I0310 20:17:40.555676    8464 ssh_runner.go:189] Completed: systemctl --version: (1.6177131s)
	I0310 20:17:40.579882    8464 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0310 20:17:40.725274    8464 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 20:17:40.863744    8464 cruntime.go:206] skipping containerd shutdown because we are bound to it
	I0310 20:17:40.877305    8464 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0310 20:17:40.974158    8464 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0310 20:17:41.264705    8464 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 20:17:41.440106    8464 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0310 20:17:42.420558    8464 ssh_runner.go:149] Run: sudo systemctl start docker
	I0310 20:17:42.644755    8464 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0310 20:17:43.348686    8464 out.go:150] * Preparing Kubernetes v1.14.0 on Docker 20.10.3 ...
	I0310 20:17:43.361396    8464 cli_runner.go:115] Run: docker exec -t kubernetes-upgrade-20210310201637-6496 dig +short host.docker.internal
	I0310 20:17:44.757020    8464 cli_runner.go:168] Completed: docker exec -t kubernetes-upgrade-20210310201637-6496 dig +short host.docker.internal: (1.3950552s)
	I0310 20:17:44.757020    8464 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	I0310 20:17:44.767611    8464 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0310 20:17:44.854290    8464 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0310 20:17:44.988818    8464 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	I0310 20:17:45.659994    8464 localpath.go:92] copying C:\Users\jenkins\.minikube\client.crt -> C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\client.crt
	I0310 20:17:45.667186    8464 localpath.go:117] copying C:\Users\jenkins\.minikube\client.key -> C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\client.key
	I0310 20:17:45.667186    8464 preload.go:97] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I0310 20:17:45.667186    8464 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4
	I0310 20:17:45.667186    8464 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 20:17:46.298111    8464 docker.go:423] Got preloaded images: 
	I0310 20:17:46.298111    8464 docker.go:429] k8s.gcr.io/kube-proxy:v1.14.0 wasn't preloaded
	I0310 20:17:46.307974    8464 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0310 20:17:46.499185    8464 ssh_runner.go:149] Run: which lz4
	I0310 20:17:46.584666    8464 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0310 20:17:46.667944    8464 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0310 20:17:46.667944    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (488333642 bytes)
	I0310 20:23:56.604432    8464 docker.go:388] Took 370.036837 seconds to copy over tarball
	I0310 20:23:56.637517    8464 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0310 20:24:36.771184    8464 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (40.1334533s)
	I0310 20:24:36.771484    8464 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0310 20:24:37.348410    8464 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0310 20:24:37.391132    8464 ssh_runner.go:316] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3123 bytes)
	I0310 20:24:37.489116    8464 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0310 20:24:37.892891    8464 ssh_runner.go:149] Run: sudo systemctl restart docker
	I0310 20:24:58.331986    8464 ssh_runner.go:189] Completed: sudo systemctl restart docker: (20.4134485s)
	I0310 20:24:58.339554    8464 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 20:24:59.016766    8464 docker.go:423] Got preloaded images: -- stdout --
	kubernetesui/dashboard:v2.1.0
	gcr.io/k8s-minikube/storage-provisioner:v4
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/kube-proxy:v1.14.0
	k8s.gcr.io/kube-controller-manager:v1.14.0
	k8s.gcr.io/kube-scheduler:v1.14.0
	k8s.gcr.io/kube-apiserver:v1.14.0
	k8s.gcr.io/coredns:1.3.1
	k8s.gcr.io/etcd:3.3.10
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0310 20:24:59.017635    8464 cache_images.go:73] Images are preloaded, skipping loading
	I0310 20:24:59.040420    8464 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0310 20:25:00.257535    8464 ssh_runner.go:189] Completed: docker info --format {{.CgroupDriver}}: (1.2171225s)
	I0310 20:25:00.257535    8464 cni.go:74] Creating CNI manager for ""
	I0310 20:25:00.257535    8464 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	I0310 20:25:00.258145    8464 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0310 20:25:00.258145    8464 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.6 APIServerPort:8443 KubernetesVersion:v1.14.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20210310201637-6496 NodeName:kubernetes-upgrade-20210310201637-6496 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0310 20:25:00.258145    8464 kubeadm.go:154] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.0.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-20210310201637-6496"
	  kubeletExtraArgs:
	    node-ip: 172.17.0.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.0.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-20210310201637-6496
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://172.17.0.6:2381
	kubernetesVersion: v1.14.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	
	I0310 20:25:00.259049    8464 kubeadm.go:919] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.14.0/kubelet --allow-privileged=true --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --client-ca-file=/var/lib/minikube/certs/ca.crt --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-20210310201637-6496 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.14.0 ClusterName:kubernetes-upgrade-20210310201637-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0310 20:25:00.268879    8464 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.14.0
	I0310 20:25:00.345847    8464 binaries.go:44] Found k8s binaries, skipping transfer
	I0310 20:25:00.356521    8464 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0310 20:25:00.436355    8464 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (434 bytes)
	I0310 20:25:00.644445    8464 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0310 20:25:00.956282    8464 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1934 bytes)
	I0310 20:25:01.224754    8464 ssh_runner.go:149] Run: grep 172.17.0.6	control-plane.minikube.internal$ /etc/hosts
	I0310 20:25:01.287111    8464 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "172.17.0.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0310 20:25:01.342430    8464 certs.go:52] Setting up C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496 for IP: 172.17.0.6
	I0310 20:25:01.343258    8464 certs.go:171] skipping minikubeCA CA generation: C:\Users\jenkins\.minikube\ca.key
	I0310 20:25:01.343590    8464 certs.go:171] skipping proxyClientCA CA generation: C:\Users\jenkins\.minikube\proxy-client-ca.key
	I0310 20:25:01.347330    8464 certs.go:275] skipping minikube-user signed cert generation: C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\client.key
	I0310 20:25:01.347330    8464 certs.go:279] generating minikube signed cert: C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\apiserver.key.76cb2290
	I0310 20:25:01.347330    8464 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\apiserver.crt.76cb2290 with IP's: [172.17.0.6 10.96.0.1 127.0.0.1 10.0.0.1]
	I0310 20:25:02.022637    8464 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\apiserver.crt.76cb2290 ...
	I0310 20:25:02.022637    8464 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\apiserver.crt.76cb2290: {Name:mk8b2adb752869211f8ca5b80d25c14c305854a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:25:02.043813    8464 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\apiserver.key.76cb2290 ...
	I0310 20:25:02.043813    8464 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\apiserver.key.76cb2290: {Name:mkdb44337530c63d47eaa4ce339764ce8494433d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:25:02.074287    8464 certs.go:290] copying C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\apiserver.crt.76cb2290 -> C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\apiserver.crt
	I0310 20:25:02.079346    8464 certs.go:294] copying C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\apiserver.key.76cb2290 -> C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\apiserver.key
	I0310 20:25:02.083441    8464 certs.go:279] generating aggregator signed cert: C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\proxy-client.key
	I0310 20:25:02.084152    8464 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\proxy-client.crt with IP's: []
	I0310 20:25:02.507543    8464 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\proxy-client.crt ...
	I0310 20:25:02.507543    8464 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\proxy-client.crt: {Name:mk7b240b6dffdb12382c928ff039662158294fea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:25:02.519854    8464 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\proxy-client.key ...
	I0310 20:25:02.519854    8464 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\proxy-client.key: {Name:mkf5596fbde597dbe0c5a280f085c19dfb3403b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:25:02.536450    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992.pem (1338 bytes)
	W0310 20:25:02.536450    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.537450    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140.pem (1338 bytes)
	W0310 20:25:02.537450    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.537450    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156.pem (1338 bytes)
	W0310 20:25:02.537450    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.537450    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056.pem (1338 bytes)
	W0310 20:25:02.538446    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.538446    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476.pem (1338 bytes)
	W0310 20:25:02.538446    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.538446    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728.pem (1338 bytes)
	W0310 20:25:02.538446    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.538446    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984.pem (1338 bytes)
	W0310 20:25:02.539488    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.539488    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232.pem (1338 bytes)
	W0310 20:25:02.539488    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.539488    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512.pem (1338 bytes)
	W0310 20:25:02.539488    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.540446    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056.pem (1338 bytes)
	W0310 20:25:02.540446    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.540446    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516.pem (1338 bytes)
	W0310 20:25:02.540446    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.540446    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352.pem (1338 bytes)
	W0310 20:25:02.541506    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.541506    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920.pem (1338 bytes)
	W0310 20:25:02.541506    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.541506    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052.pem (1338 bytes)
	W0310 20:25:02.541506    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.541506    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452.pem (1338 bytes)
	W0310 20:25:02.542476    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.542476    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588.pem (1338 bytes)
	W0310 20:25:02.543180    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.543416    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944.pem (1338 bytes)
	W0310 20:25:02.543416    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.543416    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040.pem (1338 bytes)
	W0310 20:25:02.543416    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.544443    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172.pem (1338 bytes)
	W0310 20:25:02.544443    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.544443    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372.pem (1338 bytes)
	W0310 20:25:02.544443    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.544443    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396.pem (1338 bytes)
	W0310 20:25:02.544443    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.545428    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700.pem (1338 bytes)
	W0310 20:25:02.545428    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.545428    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736.pem (1338 bytes)
	W0310 20:25:02.545428    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.545428    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368.pem (1338 bytes)
	W0310 20:25:02.546428    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.546428    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492.pem (1338 bytes)
	W0310 20:25:02.546428    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.546428    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496.pem (1338 bytes)
	W0310 20:25:02.546428    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.547426    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552.pem (1338 bytes)
	W0310 20:25:02.547426    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.547426    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692.pem (1338 bytes)
	W0310 20:25:02.547426    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.547426    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856.pem (1338 bytes)
	W0310 20:25:02.548414    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.548414    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024.pem (1338 bytes)
	W0310 20:25:02.548414    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.548414    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160.pem (1338 bytes)
	W0310 20:25:02.548414    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.548414    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432.pem (1338 bytes)
	W0310 20:25:02.549967    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.549967    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440.pem (1338 bytes)
	W0310 20:25:02.549967    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.550402    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452.pem (1338 bytes)
	W0310 20:25:02.550893    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.550893    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800.pem (1338 bytes)
	W0310 20:25:02.551342    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.551342    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464.pem (1338 bytes)
	W0310 20:25:02.551342    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.551948    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748.pem (1338 bytes)
	W0310 20:25:02.551948    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.552417    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088.pem (1338 bytes)
	W0310 20:25:02.552821    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.552821    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520.pem (1338 bytes)
	W0310 20:25:02.552821    8464 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520_empty.pem, impossibly tiny 0 bytes
	I0310 20:25:02.553290    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca-key.pem (1679 bytes)
	I0310 20:25:02.553697    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca.pem (1078 bytes)
	I0310 20:25:02.554184    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\cert.pem (1123 bytes)
	I0310 20:25:02.554184    8464 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\key.pem (1679 bytes)
	I0310 20:25:02.563554    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0310 20:25:02.928949    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0310 20:25:03.159794    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0310 20:25:03.378321    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0310 20:25:03.715911    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0310 20:25:03.940894    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0310 20:25:04.307458    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0310 20:25:04.564168    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0310 20:25:04.786464    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5040.pem --> /usr/share/ca-certificates/5040.pem (1338 bytes)
	I0310 20:25:05.153707    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1984.pem --> /usr/share/ca-certificates/1984.pem (1338 bytes)
	I0310 20:25:05.470403    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5700.pem --> /usr/share/ca-certificates/5700.pem (1338 bytes)
	I0310 20:25:05.758008    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4452.pem --> /usr/share/ca-certificates/4452.pem (1338 bytes)
	I0310 20:25:06.049820    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1140.pem --> /usr/share/ca-certificates/1140.pem (1338 bytes)
	I0310 20:25:06.310412    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5372.pem --> /usr/share/ca-certificates/5372.pem (1338 bytes)
	I0310 20:25:06.538863    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7432.pem --> /usr/share/ca-certificates/7432.pem (1338 bytes)
	I0310 20:25:06.762065    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8464.pem --> /usr/share/ca-certificates/8464.pem (1338 bytes)
	I0310 20:25:07.033289    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3056.pem --> /usr/share/ca-certificates/3056.pem (1338 bytes)
	I0310 20:25:07.354014    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	I0310 20:25:07.565975    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3920.pem --> /usr/share/ca-certificates/3920.pem (1338 bytes)
	I0310 20:25:07.857218    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7024.pem --> /usr/share/ca-certificates/7024.pem (1338 bytes)
	I0310 20:25:08.145341    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4588.pem --> /usr/share/ca-certificates/4588.pem (1338 bytes)
	I0310 20:25:08.402582    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8748.pem --> /usr/share/ca-certificates/8748.pem (1338 bytes)
	I0310 20:25:08.602712    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4944.pem --> /usr/share/ca-certificates/4944.pem (1338 bytes)
	I0310 20:25:08.890313    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7440.pem --> /usr/share/ca-certificates/7440.pem (1338 bytes)
	I0310 20:25:09.150381    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1156.pem --> /usr/share/ca-certificates/1156.pem (1338 bytes)
	I0310 20:25:09.334482    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6492.pem --> /usr/share/ca-certificates/6492.pem (1338 bytes)
	I0310 20:25:09.526091    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9520.pem --> /usr/share/ca-certificates/9520.pem (1338 bytes)
	I0310 20:25:09.836306    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1476.pem --> /usr/share/ca-certificates/1476.pem (1338 bytes)
	I0310 20:25:10.083849    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1728.pem --> /usr/share/ca-certificates/1728.pem (1338 bytes)
	I0310 20:25:10.486762    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5396.pem --> /usr/share/ca-certificates/5396.pem (1338 bytes)
	I0310 20:25:10.739373    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0310 20:25:10.992702    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\352.pem --> /usr/share/ca-certificates/352.pem (1338 bytes)
	I0310 20:25:11.482309    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6552.pem --> /usr/share/ca-certificates/6552.pem (1338 bytes)
	I0310 20:25:11.773256    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5736.pem --> /usr/share/ca-certificates/5736.pem (1338 bytes)
	I0310 20:25:12.110898    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6368.pem --> /usr/share/ca-certificates/6368.pem (1338 bytes)
	I0310 20:25:12.335366    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7160.pem --> /usr/share/ca-certificates/7160.pem (1338 bytes)
	I0310 20:25:12.583424    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9088.pem --> /usr/share/ca-certificates/9088.pem (1338 bytes)
	I0310 20:25:12.830409    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6856.pem --> /usr/share/ca-certificates/6856.pem (1338 bytes)
	I0310 20:25:13.134456    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5172.pem --> /usr/share/ca-certificates/5172.pem (1338 bytes)
	I0310 20:25:13.326067    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\800.pem --> /usr/share/ca-certificates/800.pem (1338 bytes)
	I0310 20:25:13.562109    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6692.pem --> /usr/share/ca-certificates/6692.pem (1338 bytes)
	I0310 20:25:13.920794    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\12056.pem --> /usr/share/ca-certificates/12056.pem (1338 bytes)
	I0310 20:25:14.381377    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6496.pem --> /usr/share/ca-certificates/6496.pem (1338 bytes)
	I0310 20:25:14.738297    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\2512.pem --> /usr/share/ca-certificates/2512.pem (1338 bytes)
	I0310 20:25:15.059494    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\10992.pem --> /usr/share/ca-certificates/10992.pem (1338 bytes)
	I0310 20:25:15.343879    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\232.pem --> /usr/share/ca-certificates/232.pem (1338 bytes)
	I0310 20:25:15.675765    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4052.pem --> /usr/share/ca-certificates/4052.pem (1338 bytes)
	I0310 20:25:16.118322    8464 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3516.pem --> /usr/share/ca-certificates/3516.pem (1338 bytes)
	I0310 20:25:16.467303    8464 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0310 20:25:16.671313    8464 ssh_runner.go:149] Run: openssl version
	I0310 20:25:16.773768    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5736.pem && ln -fs /usr/share/ca-certificates/5736.pem /etc/ssl/certs/5736.pem"
	I0310 20:25:16.928551    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5736.pem
	I0310 20:25:17.030942    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 25 23:18 /usr/share/ca-certificates/5736.pem
	I0310 20:25:17.052020    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5736.pem
	I0310 20:25:17.152288    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5736.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:17.292271    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6368.pem && ln -fs /usr/share/ca-certificates/6368.pem /etc/ssl/certs/6368.pem"
	I0310 20:25:17.472199    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6368.pem
	I0310 20:25:17.532457    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 19:11 /usr/share/ca-certificates/6368.pem
	I0310 20:25:17.544251    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6368.pem
	I0310 20:25:17.599061    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6368.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:17.751015    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7160.pem && ln -fs /usr/share/ca-certificates/7160.pem /etc/ssl/certs/7160.pem"
	I0310 20:25:17.937214    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7160.pem
	I0310 20:25:17.971489    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 12 04:51 /usr/share/ca-certificates/7160.pem
	I0310 20:25:17.981994    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7160.pem
	I0310 20:25:18.104839    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7160.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:18.163605    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9088.pem && ln -fs /usr/share/ca-certificates/9088.pem /etc/ssl/certs/9088.pem"
	I0310 20:25:18.250559    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9088.pem
	I0310 20:25:18.283103    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 00:22 /usr/share/ca-certificates/9088.pem
	I0310 20:25:18.309384    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9088.pem
	I0310 20:25:18.379974    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9088.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:18.481526    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6856.pem && ln -fs /usr/share/ca-certificates/6856.pem /etc/ssl/certs/6856.pem"
	I0310 20:25:18.658517    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6856.pem
	I0310 20:25:18.720995    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 00:21 /usr/share/ca-certificates/6856.pem
	I0310 20:25:18.731013    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6856.pem
	I0310 20:25:18.907854    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6856.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:19.059324    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5172.pem && ln -fs /usr/share/ca-certificates/5172.pem /etc/ssl/certs/5172.pem"
	I0310 20:25:19.211870    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5172.pem
	I0310 20:25:19.274974    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 26 21:25 /usr/share/ca-certificates/5172.pem
	I0310 20:25:19.293546    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5172.pem
	I0310 20:25:19.347823    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5172.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:19.496660    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/800.pem && ln -fs /usr/share/ca-certificates/800.pem /etc/ssl/certs/800.pem"
	I0310 20:25:19.784999    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/800.pem
	I0310 20:25:19.866193    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 24 01:48 /usr/share/ca-certificates/800.pem
	I0310 20:25:19.884899    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/800.pem
	I0310 20:25:19.950413    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/800.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:20.176484    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6692.pem && ln -fs /usr/share/ca-certificates/6692.pem /etc/ssl/certs/6692.pem"
	I0310 20:25:20.273601    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6692.pem
	I0310 20:25:20.347549    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 20:42 /usr/share/ca-certificates/6692.pem
	I0310 20:25:20.358400    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6692.pem
	I0310 20:25:20.425092    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6692.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:20.633670    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12056.pem && ln -fs /usr/share/ca-certificates/12056.pem /etc/ssl/certs/12056.pem"
	I0310 20:25:20.783178    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/12056.pem
	I0310 20:25:20.825002    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  6 07:21 /usr/share/ca-certificates/12056.pem
	I0310 20:25:20.847685    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12056.pem
	I0310 20:25:20.956240    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12056.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:21.097272    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6496.pem && ln -fs /usr/share/ca-certificates/6496.pem /etc/ssl/certs/6496.pem"
	I0310 20:25:21.224345    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6496.pem
	I0310 20:25:21.251315    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 19:16 /usr/share/ca-certificates/6496.pem
	I0310 20:25:21.317850    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6496.pem
	I0310 20:25:21.399513    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6496.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:21.627327    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2512.pem && ln -fs /usr/share/ca-certificates/2512.pem /etc/ssl/certs/2512.pem"
	I0310 20:25:21.832437    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2512.pem
	I0310 20:25:21.852513    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:32 /usr/share/ca-certificates/2512.pem
	I0310 20:25:21.868629    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2512.pem
	I0310 20:25:21.933465    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2512.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:22.061704    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10992.pem && ln -fs /usr/share/ca-certificates/10992.pem /etc/ssl/certs/10992.pem"
	I0310 20:25:22.158095    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/10992.pem
	I0310 20:25:22.233990    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 21:44 /usr/share/ca-certificates/10992.pem
	I0310 20:25:22.253057    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10992.pem
	I0310 20:25:22.315825    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10992.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:22.453603    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/232.pem && ln -fs /usr/share/ca-certificates/232.pem /etc/ssl/certs/232.pem"
	I0310 20:25:22.763099    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/232.pem
	I0310 20:25:22.807884    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 28 02:13 /usr/share/ca-certificates/232.pem
	I0310 20:25:22.846869    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/232.pem
	I0310 20:25:22.933415    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/232.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:23.154998    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4052.pem && ln -fs /usr/share/ca-certificates/4052.pem /etc/ssl/certs/4052.pem"
	I0310 20:25:23.252046    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4052.pem
	I0310 20:25:23.346084    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 18:40 /usr/share/ca-certificates/4052.pem
	I0310 20:25:23.360179    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4052.pem
	I0310 20:25:23.501888    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4052.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:23.659304    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3516.pem && ln -fs /usr/share/ca-certificates/3516.pem /etc/ssl/certs/3516.pem"
	I0310 20:25:23.833582    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3516.pem
	I0310 20:25:23.905441    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 19:10 /usr/share/ca-certificates/3516.pem
	I0310 20:25:23.924542    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3516.pem
	I0310 20:25:23.979898    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3516.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:24.071876    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5040.pem && ln -fs /usr/share/ca-certificates/5040.pem /etc/ssl/certs/5040.pem"
	I0310 20:25:24.240412    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5040.pem
	I0310 20:25:24.285607    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 08:36 /usr/share/ca-certificates/5040.pem
	I0310 20:25:24.326765    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5040.pem
	I0310 20:25:24.495344    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5040.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:24.693450    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1984.pem && ln -fs /usr/share/ca-certificates/1984.pem /etc/ssl/certs/1984.pem"
	I0310 20:25:24.833937    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1984.pem
	I0310 20:25:24.881130    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 21:55 /usr/share/ca-certificates/1984.pem
	I0310 20:25:24.918370    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1984.pem
	I0310 20:25:25.034693    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1984.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:25.169739    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5700.pem && ln -fs /usr/share/ca-certificates/5700.pem /etc/ssl/certs/5700.pem"
	I0310 20:25:25.277093    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5700.pem
	I0310 20:25:25.362337    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  1 19:58 /usr/share/ca-certificates/5700.pem
	I0310 20:25:25.376663    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5700.pem
	I0310 20:25:25.470577    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5700.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:25.599953    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4452.pem && ln -fs /usr/share/ca-certificates/4452.pem /etc/ssl/certs/4452.pem"
	I0310 20:25:25.777938    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4452.pem
	I0310 20:25:25.841235    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 22 23:48 /usr/share/ca-certificates/4452.pem
	I0310 20:25:25.867746    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4452.pem
	I0310 20:25:25.937863    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4452.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:26.086618    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1140.pem && ln -fs /usr/share/ca-certificates/1140.pem /etc/ssl/certs/1140.pem"
	I0310 20:25:26.269554    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1140.pem
	I0310 20:25:26.320639    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 02:25 /usr/share/ca-certificates/1140.pem
	I0310 20:25:26.340959    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1140.pem
	I0310 20:25:26.411933    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1140.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:26.549135    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5372.pem && ln -fs /usr/share/ca-certificates/5372.pem /etc/ssl/certs/5372.pem"
	I0310 20:25:26.692570    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5372.pem
	I0310 20:25:26.738016    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 23 00:40 /usr/share/ca-certificates/5372.pem
	I0310 20:25:26.757980    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5372.pem
	I0310 20:25:26.873881    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5372.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:27.030014    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7432.pem && ln -fs /usr/share/ca-certificates/7432.pem /etc/ssl/certs/7432.pem"
	I0310 20:25:27.119172    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7432.pem
	I0310 20:25:27.161124    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 17:58 /usr/share/ca-certificates/7432.pem
	I0310 20:25:27.181686    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7432.pem
	I0310 20:25:27.266222    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7432.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:27.347468    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8464.pem && ln -fs /usr/share/ca-certificates/8464.pem /etc/ssl/certs/8464.pem"
	I0310 20:25:27.422705    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8464.pem
	I0310 20:25:27.570614    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:32 /usr/share/ca-certificates/8464.pem
	I0310 20:25:27.578640    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8464.pem
	I0310 20:25:27.711777    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8464.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:27.892819    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3056.pem && ln -fs /usr/share/ca-certificates/3056.pem /etc/ssl/certs/3056.pem"
	I0310 20:25:28.063636    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3056.pem
	I0310 20:25:28.119243    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:11 /usr/share/ca-certificates/3056.pem
	I0310 20:25:28.142218    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3056.pem
	I0310 20:25:28.259493    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3056.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:28.373618    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7452.pem && ln -fs /usr/share/ca-certificates/7452.pem /etc/ssl/certs/7452.pem"
	I0310 20:25:28.485830    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7452.pem
	I0310 20:25:28.566693    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 20 00:41 /usr/share/ca-certificates/7452.pem
	I0310 20:25:28.578350    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7452.pem
	I0310 20:25:28.653390    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7452.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:28.821358    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3920.pem && ln -fs /usr/share/ca-certificates/3920.pem /etc/ssl/certs/3920.pem"
	I0310 20:25:29.040232    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3920.pem
	I0310 20:25:29.100476    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 22:06 /usr/share/ca-certificates/3920.pem
	I0310 20:25:29.112352    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3920.pem
	I0310 20:25:29.210026    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3920.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:29.359218    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7024.pem && ln -fs /usr/share/ca-certificates/7024.pem /etc/ssl/certs/7024.pem"
	I0310 20:25:29.571962    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7024.pem
	I0310 20:25:29.629175    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 23:11 /usr/share/ca-certificates/7024.pem
	I0310 20:25:29.640101    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7024.pem
	I0310 20:25:29.789734    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7024.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:29.898835    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4588.pem && ln -fs /usr/share/ca-certificates/4588.pem /etc/ssl/certs/4588.pem"
	I0310 20:25:30.074235    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4588.pem
	I0310 20:25:30.096650    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  3 21:41 /usr/share/ca-certificates/4588.pem
	I0310 20:25:30.105500    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4588.pem
	I0310 20:25:30.177053    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4588.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:30.302161    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8748.pem && ln -fs /usr/share/ca-certificates/8748.pem /etc/ssl/certs/8748.pem"
	I0310 20:25:30.428003    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8748.pem
	I0310 20:25:30.473054    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 19:09 /usr/share/ca-certificates/8748.pem
	I0310 20:25:30.482650    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8748.pem
	I0310 20:25:30.572193    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8748.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:30.656978    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4944.pem && ln -fs /usr/share/ca-certificates/4944.pem /etc/ssl/certs/4944.pem"
	I0310 20:25:30.798731    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4944.pem
	I0310 20:25:30.864750    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  9 23:40 /usr/share/ca-certificates/4944.pem
	I0310 20:25:30.876452    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4944.pem
	I0310 20:25:30.989815    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4944.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:31.133389    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7440.pem && ln -fs /usr/share/ca-certificates/7440.pem /etc/ssl/certs/7440.pem"
	I0310 20:25:31.334500    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7440.pem
	I0310 20:25:31.407718    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 13 14:39 /usr/share/ca-certificates/7440.pem
	I0310 20:25:31.420440    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7440.pem
	I0310 20:25:31.500790    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7440.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:31.674918    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1156.pem && ln -fs /usr/share/ca-certificates/1156.pem /etc/ssl/certs/1156.pem"
	I0310 20:25:31.860492    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1156.pem
	I0310 20:25:31.907471    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 00:26 /usr/share/ca-certificates/1156.pem
	I0310 20:25:31.918636    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1156.pem
	I0310 20:25:32.013820    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1156.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:32.142321    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6492.pem && ln -fs /usr/share/ca-certificates/6492.pem /etc/ssl/certs/6492.pem"
	I0310 20:25:32.341244    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6492.pem
	I0310 20:25:32.413641    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 01:11 /usr/share/ca-certificates/6492.pem
	I0310 20:25:32.425433    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6492.pem
	I0310 20:25:32.495723    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6492.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:32.574935    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9520.pem && ln -fs /usr/share/ca-certificates/9520.pem /etc/ssl/certs/9520.pem"
	I0310 20:25:32.773398    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9520.pem
	I0310 20:25:32.815362    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 14:54 /usr/share/ca-certificates/9520.pem
	I0310 20:25:32.847552    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9520.pem
	I0310 20:25:32.944737    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9520.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:33.041196    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1476.pem && ln -fs /usr/share/ca-certificates/1476.pem /etc/ssl/certs/1476.pem"
	I0310 20:25:33.120155    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1476.pem
	I0310 20:25:33.150091    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:50 /usr/share/ca-certificates/1476.pem
	I0310 20:25:33.161999    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1476.pem
	I0310 20:25:33.289600    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1476.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:33.380680    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1728.pem && ln -fs /usr/share/ca-certificates/1728.pem /etc/ssl/certs/1728.pem"
	I0310 20:25:33.478229    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1728.pem
	I0310 20:25:33.548604    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 20:57 /usr/share/ca-certificates/1728.pem
	I0310 20:25:33.596170    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1728.pem
	I0310 20:25:33.665300    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:33.724125    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5396.pem && ln -fs /usr/share/ca-certificates/5396.pem /etc/ssl/certs/5396.pem"
	I0310 20:25:33.797659    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5396.pem
	I0310 20:25:33.840771    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  8 23:38 /usr/share/ca-certificates/5396.pem
	I0310 20:25:33.852578    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5396.pem
	I0310 20:25:33.911454    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5396.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:33.999327    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0310 20:25:34.083032    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0310 20:25:34.131383    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Jan  5 23:33 /usr/share/ca-certificates/minikubeCA.pem
	I0310 20:25:34.158454    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0310 20:25:34.257338    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0310 20:25:34.339136    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/352.pem && ln -fs /usr/share/ca-certificates/352.pem /etc/ssl/certs/352.pem"
	I0310 20:25:34.401012    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/352.pem
	I0310 20:25:34.439227    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 12 14:51 /usr/share/ca-certificates/352.pem
	I0310 20:25:34.468947    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/352.pem
	I0310 20:25:34.538147    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/352.pem /etc/ssl/certs/51391683.0"
	I0310 20:25:34.646774    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6552.pem && ln -fs /usr/share/ca-certificates/6552.pem /etc/ssl/certs/6552.pem"
	I0310 20:25:34.742250    8464 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6552.pem
	I0310 20:25:34.782380    8464 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 19 22:08 /usr/share/ca-certificates/6552.pem
	I0310 20:25:34.793103    8464 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6552.pem
	I0310 20:25:34.856069    8464 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6552.pem /etc/ssl/certs/51391683.0"
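The repeated `openssl x509 -hash` / `ln -fs` pairs above follow OpenSSL's subject-hash lookup convention: each CA certificate is linked at `<hash>.0` so the default verify path can find it. A minimal sketch of that pattern (assumes `openssl` is on PATH; all paths are temp stand-ins, not the `/usr/share/ca-certificates` paths in the log):

```shell
set -eu
workdir=$(mktemp -d)
# Throwaway self-signed cert standing in for one of the .pem files above:
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=minikube-sketch" \
  -keyout "$workdir/key.pem" -out "$workdir/cert.pem" -days 1 2>/dev/null
# The same hashing command the log runs for every cert:
certhash=$(openssl x509 -hash -noout -in "$workdir/cert.pem")
# ...and the matching "test -L || ln -fs" symlink step:
test -L "$workdir/$certhash.0" || ln -fs "$workdir/cert.pem" "$workdir/$certhash.0"
echo "$certhash"
```

The `51391683.0` and `b5213941.0` names in the log are exactly such subject hashes.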
	I0310 20:25:34.948699    8464 kubeadm.go:385] StartCluster: {Name:kubernetes-upgrade-20210310201637-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:kubernetes-upgrade-20210310201637-6496 Namespace:default APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.6 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 20:25:34.959168    8464 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0310 20:25:35.384091    8464 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0310 20:25:35.472066    8464 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0310 20:25:35.547795    8464 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	I0310 20:25:35.560244    8464 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0310 20:25:35.716619    8464 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
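The "config check failed" line above boils down to an `ls` over the four expected kubeconfig files whose exit status (2 when any file is missing) gates the stale-config cleanup. A minimal sketch, using a temp directory as a stand-in for `/etc/kubernetes`:

```shell
# Stand-in for /etc/kubernetes; on a fresh node none of these files exist yet,
# so the check fails and cleanup is skipped, as in the log above.
dir=$(mktemp -d)
if ls -la "$dir/admin.conf" "$dir/kubelet.conf" \
      "$dir/controller-manager.conf" "$dir/scheduler.conf" >/dev/null 2>&1; then
  result="config present: stale config cleanup would run"
else
  result="config check failed, skipping stale config cleanup"
fi
echo "$result"
```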
	I0310 20:25:35.716790    8464 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0310 20:25:44.610714    8464 out.go:150]   - Generating certificates and keys ...
	I0310 20:26:27.731392    8464 out.go:150]   - Booting up control plane ...
	I0310 20:32:25.481718    8464 out.go:150]   - Configuring RBAC rules ...
	W0310 20:32:41.510291    8464 out.go:191] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.14.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-20210310201637-6496 localhost] and IPs [172.17.0.6 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-20210310201637-6496 localhost] and IPs [172.17.0.6 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[apiclient] All control plane components are healthy after 207.665313 seconds
	[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
	[upload-certs] Skipping phase. Please see --experimental-upload-certs
	[mark-control-plane] Marking the node kubernetes-upgrade-20210310201637-6496 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	[bootstrap-token] Using token: v0r2vq.jepikuax4idgj7xf
	[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase addon/coredns: unable to create RBAC clusterrole: Post https://control-plane.minikube.internal:8443/apis/rbac.authorization.k8s.io/v1/clusterroles: http2: server sent GOAWAY and closed the connection; LastStreamID=31, ErrCode=NO_ERROR, debug=""
	
	I0310 20:32:41.511003    8464 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0310 20:33:10.812366    8464 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force": (29.3010904s)
	I0310 20:33:10.824404    8464 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0310 20:33:10.961234    8464 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0310 20:33:11.818763    8464 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	I0310 20:33:11.833514    8464 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0310 20:33:12.107569    8464 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0310 20:33:12.107730    8464 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0310 20:37:45.463973    8464 out.go:150]   - Generating certificates and keys ...
	I0310 20:37:45.469839    8464 out.go:150]   - Booting up control plane ...
	I0310 20:37:45.474947    8464 kubeadm.go:387] StartCluster complete in 12m10.5281925s
	I0310 20:37:45.483364    8464 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0310 20:37:53.491637    8464 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: (8.0082858s)
	I0310 20:37:53.492037    8464 logs.go:255] 1 containers: [a58acfb13228]
	I0310 20:37:53.495916    8464 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0310 20:37:58.422899    8464 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: (4.9259996s)
	I0310 20:37:58.422899    8464 logs.go:255] 1 containers: [d04b7875ec72]
	I0310 20:37:58.432702    8464 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0310 20:38:00.867496    8464 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: (2.4342121s)
	I0310 20:38:00.867823    8464 logs.go:255] 0 containers: []
	W0310 20:38:00.867823    8464 logs.go:257] No container was found matching "coredns"
	I0310 20:38:00.882359    8464 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0310 20:38:02.684604    8464 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: (1.8022484s)
	I0310 20:38:02.684604    8464 logs.go:255] 1 containers: [adb946d74113]
	I0310 20:38:02.696520    8464 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0310 20:38:05.697479    8464 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: (3.0000826s)
	I0310 20:38:05.697479    8464 logs.go:255] 0 containers: []
	W0310 20:38:05.697479    8464 logs.go:257] No container was found matching "kube-proxy"
	I0310 20:38:05.715974    8464 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0310 20:38:07.255774    8464 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}: (1.5388278s)
	I0310 20:38:07.255774    8464 logs.go:255] 0 containers: []
	W0310 20:38:07.255774    8464 logs.go:257] No container was found matching "kubernetes-dashboard"
	I0310 20:38:07.265641    8464 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0310 20:38:09.846105    8464 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: (2.5797478s)
	I0310 20:38:09.846105    8464 logs.go:255] 0 containers: []
	W0310 20:38:09.846105    8464 logs.go:257] No container was found matching "storage-provisioner"
	I0310 20:38:09.857441    8464 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0310 20:38:11.677266    8464 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: (1.8192811s)
	I0310 20:38:11.677475    8464 logs.go:255] 2 containers: [ea8d50515472 c96a940540ae]
	I0310 20:38:11.677475    8464 logs.go:122] Gathering logs for kube-apiserver [a58acfb13228] ...
	I0310 20:38:11.677475    8464 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 a58acfb13228"
	I0310 20:38:18.972351    8464 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 a58acfb13228": (7.2948867s)
	I0310 20:38:19.004865    8464 logs.go:122] Gathering logs for kube-controller-manager [ea8d50515472] ...
	I0310 20:38:19.004865    8464 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 ea8d50515472"
	I0310 20:38:23.253147    8464 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 ea8d50515472": (4.2482889s)
	I0310 20:38:23.253920    8464 logs.go:122] Gathering logs for Docker ...
	I0310 20:38:23.253920    8464 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0310 20:38:24.095015    8464 logs.go:122] Gathering logs for kubelet ...
	I0310 20:38:24.095015    8464 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0310 20:38:25.950450    8464 ssh_runner.go:189] Completed: /bin/bash -c "sudo journalctl -u kubelet -n 400": (1.8554381s)
	I0310 20:38:26.050528    8464 logs.go:122] Gathering logs for dmesg ...
	I0310 20:38:26.050528    8464 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0310 20:38:27.134590    8464 ssh_runner.go:189] Completed: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400": (1.0840637s)
	I0310 20:38:27.136980    8464 logs.go:122] Gathering logs for kube-scheduler [adb946d74113] ...
	I0310 20:38:27.136980    8464 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 adb946d74113"
	I0310 20:38:32.896260    8464 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 adb946d74113": (5.759288s)
	I0310 20:38:32.960764    8464 logs.go:122] Gathering logs for kube-controller-manager [c96a940540ae] ...
	I0310 20:38:32.960764    8464 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 c96a940540ae"
	I0310 20:38:38.474216    8464 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 c96a940540ae": (5.5134595s)
	W0310 20:38:38.474360    8464 logs.go:129] failed kube-controller-manager [c96a940540ae]: command: /bin/bash -c "docker logs --tail 400 c96a940540ae" /bin/bash -c "docker logs --tail 400 c96a940540ae": Process exited with status 1
	stdout:
	
	stderr:
	Error: No such container: c96a940540ae
	 output: 
	** stderr ** 
	Error: No such container: c96a940540ae
	
	** /stderr **
	I0310 20:38:38.474652    8464 logs.go:122] Gathering logs for container status ...
	I0310 20:38:38.474959    8464 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0310 20:38:42.840178    8464 ssh_runner.go:189] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (4.3652253s)
	I0310 20:38:42.840882    8464 logs.go:122] Gathering logs for describe nodes ...
	I0310 20:38:42.840882    8464 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0310 20:39:42.136167    8464 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (59.2949789s)
	I0310 20:39:42.147854    8464 logs.go:122] Gathering logs for etcd [d04b7875ec72] ...
	I0310 20:39:42.147854    8464 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 d04b7875ec72"
	I0310 20:39:51.307491    8464 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 d04b7875ec72": (9.1596503s)
	W0310 20:39:51.339275    8464 out.go:312] Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.14.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	W0310 20:39:51.340272    8464 out.go:191] * 
	* 
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.14.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	
	W0310 20:39:51.341273    8464 out.go:191] * 
	* 
	W0310 20:39:51.341273    8464 out.go:191] * minikube is exiting due to an error. If the above message is not useful, open an issue:
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	W0310 20:39:51.341273    8464 out.go:191]   - https://github.com/kubernetes/minikube/issues/new/choose
	  - https://github.com/kubernetes/minikube/issues/new/choose
	I0310 20:39:51.346255    8464 out.go:129] 
	W0310 20:39:51.346255    8464 out.go:191] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.14.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.14.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	
	W0310 20:39:51.347241    8464 out.go:191] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0310 20:39:51.351297    8464 out.go:191] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0310 20:39:51.353270    8464 out.go:129] 

** /stderr **
version_upgrade_test.go:220: failed to start minikube HEAD with oldest k8s version: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20210310201637-6496 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker: exit status 109
version_upgrade_test.go:223: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-20210310201637-6496
version_upgrade_test.go:223: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-20210310201637-6496: (17.8426565s)
version_upgrade_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-20210310201637-6496 status --format={{.Host}}
version_upgrade_test.go:228: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-20210310201637-6496 status --format={{.Host}}: exit status 7 (1.0595593s)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:230: status error: exit status 7 (may be ok)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20210310201637-6496 --memory=2200 --kubernetes-version=v1.20.5-rc.0 --alsologtostderr -v=1 --driver=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20210310201637-6496 --memory=2200 --kubernetes-version=v1.20.5-rc.0 --alsologtostderr -v=1 --driver=docker: exit status 109 (28m39.7349873s)

-- stdout --
	* [kubernetes-upgrade-20210310201637-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - MINIKUBE_LOCATION=10722
	* Using the docker driver based on existing profile
	* Starting control plane node kubernetes-upgrade-20210310201637-6496 in cluster kubernetes-upgrade-20210310201637-6496
	* Restarting existing docker container for "kubernetes-upgrade-20210310201637-6496" ...
	* Preparing Kubernetes v1.20.5-rc.0 on Docker 20.10.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0310 20:40:11.626105    9740 out.go:239] Setting OutFile to fd 1936 ...
	I0310 20:40:11.628082    9740 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 20:40:11.628082    9740 out.go:252] Setting ErrFile to fd 1912...
	I0310 20:40:11.628082    9740 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 20:40:11.646812    9740 out.go:246] Setting JSON to false
	I0310 20:40:11.655809    9740 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":34277,"bootTime":1615374534,"procs":121,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0310 20:40:11.655809    9740 start.go:116] gopshost.Virtualization returned error: not implemented yet
	I0310 20:40:11.660981    9740 out.go:129] * [kubernetes-upgrade-20210310201637-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0310 20:40:11.673974    9740 out.go:129]   - MINIKUBE_LOCATION=10722
	I0310 20:40:11.677848    9740 driver.go:323] Setting default libvirt URI to qemu:///system
	I0310 20:40:12.222399    9740 docker.go:119] docker version: linux-20.10.2
	I0310 20:40:12.250130    9740 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 20:40:13.269960    9740 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0193702s)
	I0310 20:40:13.272651    9740 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:9 ContainersRunning:8 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:121 OomKillDisable:true NGoroutines:94 SystemTime:2021-03-10 20:40:12.8008788 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 20:40:13.279563    9740 out.go:129] * Using the docker driver based on existing profile
	I0310 20:40:13.279563    9740 start.go:276] selected driver: docker
	I0310 20:40:13.279563    9740 start.go:718] validating driver "docker" against &{Name:kubernetes-upgrade-20210310201637-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:kubernetes-upgrade-20210310201637-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.6 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 20:40:13.280195    9740 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0310 20:40:15.341033    9740 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 20:40:16.550107    9740 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.209076s)
	I0310 20:40:16.551105    9740 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:9 ContainersRunning:8 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:121 OomKillDisable:true NGoroutines:98 SystemTime:2021-03-10 20:40:15.8935738 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 20:40:16.552517    9740 start_flags.go:398] config:
	{Name:kubernetes-upgrade-20210310201637-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.5-rc.0 ClusterName:kubernetes-upgrade-20210310201637-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.6 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 20:40:16.562672    9740 out.go:129] * Starting control plane node kubernetes-upgrade-20210310201637-6496 in cluster kubernetes-upgrade-20210310201637-6496
	I0310 20:40:17.781711    9740 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	I0310 20:40:17.781947    9740 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	I0310 20:40:17.781947    9740 preload.go:97] Checking if preload exists for k8s version v1.20.5-rc.0 and runtime docker
	I0310 20:40:17.782420    9740 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.5-rc.0-docker-overlay2-amd64.tar.lz4
	I0310 20:40:17.782420    9740 cache.go:54] Caching tarball of preloaded images
	I0310 20:40:17.782698    9740 preload.go:131] Found C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.5-rc.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0310 20:40:17.782843    9740 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.5-rc.0 on docker
	I0310 20:40:17.783357    9740 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\config.json ...
	I0310 20:40:17.817163    9740 cache.go:185] Successfully downloaded all kic artifacts
	I0310 20:40:17.817793    9740 start.go:313] acquiring machines lock for kubernetes-upgrade-20210310201637-6496: {Name:mkf139d86564eb552ba6ebdc1acdb4bdc8579ad8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:40:17.818641    9740 start.go:317] acquired machines lock for "kubernetes-upgrade-20210310201637-6496" in 847.7µs
	I0310 20:40:17.818854    9740 start.go:93] Skipping create...Using existing machine configuration
	I0310 20:40:17.818854    9740 fix.go:55] fixHost starting: 
	I0310 20:40:17.841819    9740 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210310201637-6496 --format={{.State.Status}}
	I0310 20:40:18.441624    9740 fix.go:108] recreateIfNeeded on kubernetes-upgrade-20210310201637-6496: state=Stopped err=<nil>
	W0310 20:40:18.442825    9740 fix.go:134] unexpected machine state, will restart: <nil>
	I0310 20:40:18.452559    9740 out.go:129] * Restarting existing docker container for "kubernetes-upgrade-20210310201637-6496" ...
	I0310 20:40:18.461509    9740 cli_runner.go:115] Run: docker start kubernetes-upgrade-20210310201637-6496
	I0310 20:40:22.058935    9740 cli_runner.go:168] Completed: docker start kubernetes-upgrade-20210310201637-6496: (3.5974319s)
	I0310 20:40:22.067089    9740 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210310201637-6496 --format={{.State.Status}}
	I0310 20:40:22.636962    9740 kic.go:410] container "kubernetes-upgrade-20210310201637-6496" state is running.
	I0310 20:40:22.654967    9740 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20210310201637-6496
	I0310 20:40:23.276828    9740 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\config.json ...
	I0310 20:40:23.283766    9740 machine.go:88] provisioning docker machine ...
	I0310 20:40:23.283947    9740 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20210310201637-6496"
	I0310 20:40:23.291722    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	I0310 20:40:23.888238    9740 main.go:121] libmachine: Using SSH client type: native
	I0310 20:40:23.889382    9740 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55133 <nil> <nil>}
	I0310 20:40:23.889520    9740 main.go:121] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-20210310201637-6496 && echo "kubernetes-upgrade-20210310201637-6496" | sudo tee /etc/hostname
	I0310 20:40:23.910986    9740 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0310 20:40:26.922831    9740 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0310 20:40:31.260862    9740 main.go:121] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20210310201637-6496
	
	I0310 20:40:31.285524    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	I0310 20:40:31.949152    9740 main.go:121] libmachine: Using SSH client type: native
	I0310 20:40:31.950149    9740 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55133 <nil> <nil>}
	I0310 20:40:31.950149    9740 main.go:121] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-20210310201637-6496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20210310201637-6496/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-20210310201637-6496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0310 20:40:33.182035    9740 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	I0310 20:40:33.182228    9740 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	I0310 20:40:33.182514    9740 ubuntu.go:177] setting up certificates
	I0310 20:40:33.182743    9740 provision.go:83] configureAuth start
	I0310 20:40:33.192626    9740 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20210310201637-6496
	I0310 20:40:33.805867    9740 provision.go:137] copyHostCerts
	I0310 20:40:33.806994    9740 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	I0310 20:40:33.806994    9740 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	I0310 20:40:33.807815    9740 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	I0310 20:40:33.817781    9740 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	I0310 20:40:33.817781    9740 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	I0310 20:40:33.818437    9740 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	I0310 20:40:33.821902    9740 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	I0310 20:40:33.821902    9740 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	I0310 20:40:33.822805    9740 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	I0310 20:40:33.827713    9740 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.kubernetes-upgrade-20210310201637-6496 san=[172.17.0.6 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-20210310201637-6496]
	I0310 20:40:34.224036    9740 provision.go:165] copyRemoteCerts
	I0310 20:40:34.238770    9740 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0310 20:40:34.246644    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	I0310 20:40:34.940083    9740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55133 SSHKeyPath:C:\Users\jenkins\.minikube\machines\kubernetes-upgrade-20210310201637-6496\id_rsa Username:docker}
	I0310 20:40:35.660682    9740 ssh_runner.go:189] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.4219137s)
	I0310 20:40:35.661412    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0310 20:40:36.325306    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0310 20:40:36.822365    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1285 bytes)
	I0310 20:40:37.608641    9740 provision.go:86] duration metric: configureAuth took 4.4259039s
	I0310 20:40:37.608889    9740 ubuntu.go:193] setting minikube options for container-runtime
	I0310 20:40:37.619965    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	I0310 20:40:38.264560    9740 main.go:121] libmachine: Using SSH client type: native
	I0310 20:40:38.265920    9740 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55133 <nil> <nil>}
	I0310 20:40:38.266230    9740 main.go:121] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0310 20:40:39.613773    9740 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0310 20:40:39.614278    9740 ubuntu.go:71] root file system type: overlay
	I0310 20:40:39.615216    9740 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	I0310 20:40:39.623732    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	I0310 20:40:40.303496    9740 main.go:121] libmachine: Using SSH client type: native
	I0310 20:40:40.304774    9740 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55133 <nil> <nil>}
	I0310 20:40:40.304774    9740 main.go:121] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0310 20:40:41.932482    9740 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0310 20:40:41.945466    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	I0310 20:40:42.589210    9740 main.go:121] libmachine: Using SSH client type: native
	I0310 20:40:42.589871    9740 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55133 <nil> <nil>}
	I0310 20:40:42.589871    9740 main.go:121] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0310 20:40:43.918998    9740 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	I0310 20:40:43.918998    9740 machine.go:91] provisioned docker machine in 20.6350823s
	I0310 20:40:43.918998    9740 start.go:267] post-start starting for "kubernetes-upgrade-20210310201637-6496" (driver="docker")
	I0310 20:40:43.918998    9740 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0310 20:40:43.938642    9740 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0310 20:40:43.948150    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	I0310 20:40:44.545112    9740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55133 SSHKeyPath:C:\Users\jenkins\.minikube\machines\kubernetes-upgrade-20210310201637-6496\id_rsa Username:docker}
	I0310 20:40:44.988836    9740 ssh_runner.go:189] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.0501955s)
	I0310 20:40:45.001842    9740 ssh_runner.go:149] Run: cat /etc/os-release
	I0310 20:40:45.053472    9740 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0310 20:40:45.053472    9740 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0310 20:40:45.053472    9740 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0310 20:40:45.053472    9740 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0310 20:40:45.053637    9740 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	I0310 20:40:45.053898    9740 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	I0310 20:40:45.058110    9740 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	I0310 20:40:45.059992    9740 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	I0310 20:40:45.075969    9740 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	I0310 20:40:45.187189    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	I0310 20:40:45.805941    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	I0310 20:40:46.188940    9740 start.go:270] post-start completed in 2.2699458s
	I0310 20:40:46.201877    9740 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0310 20:40:46.209965    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	I0310 20:40:46.838195    9740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55133 SSHKeyPath:C:\Users\jenkins\.minikube\machines\kubernetes-upgrade-20210310201637-6496\id_rsa Username:docker}
	I0310 20:40:47.387355    9740 ssh_runner.go:189] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.1854798s)
	I0310 20:40:47.387355    9740 fix.go:57] fixHost completed within 29.5685454s
	I0310 20:40:47.387355    9740 start.go:80] releasing machines lock for "kubernetes-upgrade-20210310201637-6496", held for 29.5687583s
	I0310 20:40:47.399744    9740 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20210310201637-6496
	I0310 20:40:48.001319    9740 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0310 20:40:48.009785    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	I0310 20:40:48.027894    9740 ssh_runner.go:149] Run: systemctl --version
	I0310 20:40:48.045435    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	I0310 20:40:48.719086    9740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55133 SSHKeyPath:C:\Users\jenkins\.minikube\machines\kubernetes-upgrade-20210310201637-6496\id_rsa Username:docker}
	I0310 20:40:48.782836    9740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55133 SSHKeyPath:C:\Users\jenkins\.minikube\machines\kubernetes-upgrade-20210310201637-6496\id_rsa Username:docker}
	I0310 20:40:49.811562    9740 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.8100329s)
	I0310 20:40:49.811709    9740 ssh_runner.go:189] Completed: systemctl --version: (1.7835855s)
	I0310 20:40:49.832973    9740 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0310 20:40:50.108901    9740 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 20:40:50.273420    9740 cruntime.go:206] skipping containerd shutdown because we are bound to it
	I0310 20:40:50.285125    9740 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0310 20:40:50.431176    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0310 20:40:50.715534    9740 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 20:40:50.861429    9740 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0310 20:40:52.707170    9740 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.8457443s)
	I0310 20:40:52.718914    9740 ssh_runner.go:149] Run: sudo systemctl start docker
	I0310 20:40:52.862081    9740 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0310 20:40:53.882996    9740 ssh_runner.go:189] Completed: docker version --format {{.Server.Version}}: (1.0209171s)
	I0310 20:40:53.887364    9740 out.go:150] * Preparing Kubernetes v1.20.5-rc.0 on Docker 20.10.3 ...
	I0310 20:40:53.900335    9740 cli_runner.go:115] Run: docker exec -t kubernetes-upgrade-20210310201637-6496 dig +short host.docker.internal
	I0310 20:40:55.114769    9740 cli_runner.go:168] Completed: docker exec -t kubernetes-upgrade-20210310201637-6496 dig +short host.docker.internal: (1.2138238s)
	I0310 20:40:55.114991    9740 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	I0310 20:40:55.128899    9740 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0310 20:40:55.174235    9740 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0310 20:40:55.355581    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	I0310 20:40:55.951356    9740 preload.go:97] Checking if preload exists for k8s version v1.20.5-rc.0 and runtime docker
	I0310 20:40:55.951695    9740 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.5-rc.0-docker-overlay2-amd64.tar.lz4
	I0310 20:40:55.964511    9740 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 20:40:56.841181    9740 docker.go:423] Got preloaded images: -- stdout --
	kubernetesui/dashboard:v2.1.0
	gcr.io/k8s-minikube/storage-provisioner:v4
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/kube-proxy:v1.14.0
	k8s.gcr.io/kube-controller-manager:v1.14.0
	k8s.gcr.io/kube-apiserver:v1.14.0
	k8s.gcr.io/kube-scheduler:v1.14.0
	k8s.gcr.io/coredns:1.3.1
	k8s.gcr.io/etcd:3.3.10
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0310 20:40:56.841181    9740 docker.go:429] k8s.gcr.io/kube-proxy:v1.20.5-rc.0 wasn't preloaded
	I0310 20:40:56.860168    9740 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0310 20:40:56.958942    9740 ssh_runner.go:149] Run: which lz4
	I0310 20:40:57.027629    9740 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0310 20:40:57.072452    9740 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0310 20:40:57.073663    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.5-rc.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (515786445 bytes)
	I0310 20:42:11.370633    9740 docker.go:388] Took 74.358730 seconds to copy over tarball
	I0310 20:42:11.387450    9740 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0310 20:42:55.962507    9740 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (44.575119s)
	I0310 20:42:55.962820    9740 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0310 20:42:57.905500    9740 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0310 20:42:57.977200    9740 ssh_runner.go:316] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3145 bytes)
	I0310 20:42:58.215816    9740 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0310 20:42:59.995138    9740 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.7790493s)
	I0310 20:43:00.008630    9740 ssh_runner.go:149] Run: sudo systemctl restart docker
	I0310 20:43:06.638441    9740 ssh_runner.go:189] Completed: sudo systemctl restart docker: (6.6298198s)
	I0310 20:43:06.652024    9740 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 20:43:07.606204    9740 docker.go:423] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.20.5-rc.0
	k8s.gcr.io/kube-controller-manager:v1.20.5-rc.0
	k8s.gcr.io/kube-scheduler:v1.20.5-rc.0
	k8s.gcr.io/kube-proxy:v1.20.5-rc.0
	kubernetesui/dashboard:v2.1.0
	gcr.io/k8s-minikube/storage-provisioner:v4
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	<none>:<none>
	<none>:<none>
	<none>:<none>
	<none>:<none>
	<none>:<none>
	<none>:<none>
	<none>:<none>
	
	-- /stdout --
	I0310 20:43:07.606204    9740 cache_images.go:73] Images are preloaded, skipping loading
	I0310 20:43:07.617447    9740 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0310 20:43:09.125267    9740 ssh_runner.go:189] Completed: docker info --format {{.CgroupDriver}}: (1.5069087s)
	I0310 20:43:09.125267    9740 cni.go:74] Creating CNI manager for ""
	I0310 20:43:09.125267    9740 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	I0310 20:43:09.125267    9740 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0310 20:43:09.125267    9740 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.6 APIServerPort:8443 KubernetesVersion:v1.20.5-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20210310201637-6496 NodeName:kubernetes-upgrade-20210310201637-6496 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.6 CgroupDriver:cgroupfs C
lientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0310 20:43:09.134585    9740 kubeadm.go:154] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.0.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-20210310201637-6496"
	  kubeletExtraArgs:
	    node-ip: 172.17.0.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.0.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.5-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	
	I0310 20:43:09.134948    9740 kubeadm.go:919] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.5-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-20210310201637-6496 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.5-rc.0 ClusterName:kubernetes-upgrade-20210310201637-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0310 20:43:09.144021    9740 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.5-rc.0
	I0310 20:43:09.216706    9740 binaries.go:44] Found k8s binaries, skipping transfer
	I0310 20:43:09.234634    9740 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0310 20:43:09.349630    9740 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0310 20:43:09.484300    9740 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0310 20:43:09.702141    9740 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1869 bytes)
	I0310 20:43:10.020565    9740 ssh_runner.go:149] Run: grep 172.17.0.6	control-plane.minikube.internal$ /etc/hosts
	I0310 20:43:10.089488    9740 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "172.17.0.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0310 20:43:10.265577    9740 certs.go:52] Setting up C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496 for IP: 172.17.0.6
	I0310 20:43:10.266685    9740 certs.go:171] skipping minikubeCA CA generation: C:\Users\jenkins\.minikube\ca.key
	I0310 20:43:10.266685    9740 certs.go:171] skipping proxyClientCA CA generation: C:\Users\jenkins\.minikube\proxy-client-ca.key
	I0310 20:43:10.267449    9740 certs.go:275] skipping minikube-user signed cert generation: C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\client.key
	I0310 20:43:10.267728    9740 certs.go:275] skipping minikube signed cert generation: C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\apiserver.key.76cb2290
	I0310 20:43:10.268060    9740 certs.go:275] skipping aggregator signed cert generation: C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\proxy-client.key
	I0310 20:43:10.269677    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992.pem (1338 bytes)
	W0310 20:43:10.270201    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.270201    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140.pem (1338 bytes)
	W0310 20:43:10.270652    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.270652    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156.pem (1338 bytes)
	W0310 20:43:10.271021    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.271021    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056.pem (1338 bytes)
	W0310 20:43:10.271539    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.271539    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476.pem (1338 bytes)
	W0310 20:43:10.271770    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.271770    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728.pem (1338 bytes)
	W0310 20:43:10.272221    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.272221    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984.pem (1338 bytes)
	W0310 20:43:10.272461    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.272645    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232.pem (1338 bytes)
	W0310 20:43:10.272789    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.272996    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512.pem (1338 bytes)
	W0310 20:43:10.273341    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.273341    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056.pem (1338 bytes)
	W0310 20:43:10.273607    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.273769    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516.pem (1338 bytes)
	W0310 20:43:10.273960    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.273960    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352.pem (1338 bytes)
	W0310 20:43:10.274450    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.274637    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920.pem (1338 bytes)
	W0310 20:43:10.275071    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.275240    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052.pem (1338 bytes)
	W0310 20:43:10.275545    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.276229    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452.pem (1338 bytes)
	W0310 20:43:10.276444    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.276720    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588.pem (1338 bytes)
	W0310 20:43:10.277006    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.277169    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944.pem (1338 bytes)
	W0310 20:43:10.277568    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.277734    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040.pem (1338 bytes)
	W0310 20:43:10.278059    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.278347    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172.pem (1338 bytes)
	W0310 20:43:10.278641    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.278786    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372.pem (1338 bytes)
	W0310 20:43:10.279075    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.279345    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396.pem (1338 bytes)
	W0310 20:43:10.279572    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.279572    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700.pem (1338 bytes)
	W0310 20:43:10.279572    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.280212    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736.pem (1338 bytes)
	W0310 20:43:10.280212    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.280620    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368.pem (1338 bytes)
	W0310 20:43:10.280620    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.280620    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492.pem (1338 bytes)
	W0310 20:43:10.281246    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.281246    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496.pem (1338 bytes)
	W0310 20:43:10.281699    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.281699    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552.pem (1338 bytes)
	W0310 20:43:10.281699    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.282301    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692.pem (1338 bytes)
	W0310 20:43:10.282301    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.282699    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856.pem (1338 bytes)
	W0310 20:43:10.282699    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.282699    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024.pem (1338 bytes)
	W0310 20:43:10.283317    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.283317    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160.pem (1338 bytes)
	W0310 20:43:10.283317    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.283713    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432.pem (1338 bytes)
	W0310 20:43:10.283713    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.283713    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440.pem (1338 bytes)
	W0310 20:43:10.283713    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.283713    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452.pem (1338 bytes)
	W0310 20:43:10.284555    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.284555    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800.pem (1338 bytes)
	W0310 20:43:10.284555    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.284555    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464.pem (1338 bytes)
	W0310 20:43:10.284555    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.285524    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748.pem (1338 bytes)
	W0310 20:43:10.285524    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.285524    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088.pem (1338 bytes)
	W0310 20:43:10.285524    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.285524    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520.pem (1338 bytes)
	W0310 20:43:10.286529    9740 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520_empty.pem, impossibly tiny 0 bytes
	I0310 20:43:10.286529    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca-key.pem (1679 bytes)
	I0310 20:43:10.286529    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca.pem (1078 bytes)
	I0310 20:43:10.286529    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\cert.pem (1123 bytes)
	I0310 20:43:10.287526    9740 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\key.pem (1679 bytes)
	I0310 20:43:10.294033    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0310 20:43:10.690141    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0310 20:43:11.022056    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0310 20:43:11.366780    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0310 20:43:11.738170    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0310 20:43:12.041730    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0310 20:43:12.498359    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0310 20:43:12.700195    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0310 20:43:13.049208    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1476.pem --> /usr/share/ca-certificates/1476.pem (1338 bytes)
	I0310 20:43:13.310733    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4052.pem --> /usr/share/ca-certificates/4052.pem (1338 bytes)
	I0310 20:43:13.534309    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4588.pem --> /usr/share/ca-certificates/4588.pem (1338 bytes)
	I0310 20:43:13.882850    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\2512.pem --> /usr/share/ca-certificates/2512.pem (1338 bytes)
	I0310 20:43:14.233126    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1140.pem --> /usr/share/ca-certificates/1140.pem (1338 bytes)
	I0310 20:43:14.484549    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6492.pem --> /usr/share/ca-certificates/6492.pem (1338 bytes)
	I0310 20:43:14.709281    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\232.pem --> /usr/share/ca-certificates/232.pem (1338 bytes)
	I0310 20:43:15.032054    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6856.pem --> /usr/share/ca-certificates/6856.pem (1338 bytes)
	I0310 20:43:15.349068    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4944.pem --> /usr/share/ca-certificates/4944.pem (1338 bytes)
	I0310 20:43:15.682792    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8464.pem --> /usr/share/ca-certificates/8464.pem (1338 bytes)
	I0310 20:43:16.017709    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\12056.pem --> /usr/share/ca-certificates/12056.pem (1338 bytes)
	I0310 20:43:16.303966    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6368.pem --> /usr/share/ca-certificates/6368.pem (1338 bytes)
	I0310 20:43:16.488745    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5172.pem --> /usr/share/ca-certificates/5172.pem (1338 bytes)
	I0310 20:43:16.814106    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6496.pem --> /usr/share/ca-certificates/6496.pem (1338 bytes)
	I0310 20:43:17.177281    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1728.pem --> /usr/share/ca-certificates/1728.pem (1338 bytes)
	I0310 20:43:17.575665    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5700.pem --> /usr/share/ca-certificates/5700.pem (1338 bytes)
	I0310 20:43:17.931051    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8748.pem --> /usr/share/ca-certificates/8748.pem (1338 bytes)
	I0310 20:43:18.253811    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6692.pem --> /usr/share/ca-certificates/6692.pem (1338 bytes)
	I0310 20:43:18.573008    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5736.pem --> /usr/share/ca-certificates/5736.pem (1338 bytes)
	I0310 20:43:18.822304    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5372.pem --> /usr/share/ca-certificates/5372.pem (1338 bytes)
	I0310 20:43:19.138757    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7440.pem --> /usr/share/ca-certificates/7440.pem (1338 bytes)
	I0310 20:43:19.376578    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0310 20:43:19.712680    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3056.pem --> /usr/share/ca-certificates/3056.pem (1338 bytes)
	I0310 20:43:20.001669    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1984.pem --> /usr/share/ca-certificates/1984.pem (1338 bytes)
	I0310 20:43:20.271385    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7432.pem --> /usr/share/ca-certificates/7432.pem (1338 bytes)
	I0310 20:43:20.502633    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9520.pem --> /usr/share/ca-certificates/9520.pem (1338 bytes)
	I0310 20:43:20.800521    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3516.pem --> /usr/share/ca-certificates/3516.pem (1338 bytes)
	I0310 20:43:21.093606    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1156.pem --> /usr/share/ca-certificates/1156.pem (1338 bytes)
	I0310 20:43:21.465420    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7024.pem --> /usr/share/ca-certificates/7024.pem (1338 bytes)
	I0310 20:43:21.784777    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5396.pem --> /usr/share/ca-certificates/5396.pem (1338 bytes)
	I0310 20:43:22.032356    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9088.pem --> /usr/share/ca-certificates/9088.pem (1338 bytes)
	I0310 20:43:22.363274    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\352.pem --> /usr/share/ca-certificates/352.pem (1338 bytes)
	I0310 20:43:22.712989    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4452.pem --> /usr/share/ca-certificates/4452.pem (1338 bytes)
	I0310 20:43:23.099056    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7160.pem --> /usr/share/ca-certificates/7160.pem (1338 bytes)
	I0310 20:43:23.408408    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\10992.pem --> /usr/share/ca-certificates/10992.pem (1338 bytes)
	I0310 20:43:23.672678    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\800.pem --> /usr/share/ca-certificates/800.pem (1338 bytes)
	I0310 20:43:23.921038    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6552.pem --> /usr/share/ca-certificates/6552.pem (1338 bytes)
	I0310 20:43:24.342930    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5040.pem --> /usr/share/ca-certificates/5040.pem (1338 bytes)
	I0310 20:43:24.663179    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3920.pem --> /usr/share/ca-certificates/3920.pem (1338 bytes)
	I0310 20:43:24.910639    9740 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	I0310 20:43:25.200746    9740 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0310 20:43:25.439872    9740 ssh_runner.go:149] Run: openssl version
	I0310 20:43:25.507866    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5700.pem && ln -fs /usr/share/ca-certificates/5700.pem /etc/ssl/certs/5700.pem"
	I0310 20:43:25.589879    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5700.pem
	I0310 20:43:25.620711    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  1 19:58 /usr/share/ca-certificates/5700.pem
	I0310 20:43:25.633770    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5700.pem
	I0310 20:43:25.769482    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5700.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:25.883930    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8748.pem && ln -fs /usr/share/ca-certificates/8748.pem /etc/ssl/certs/8748.pem"
	I0310 20:43:25.965927    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8748.pem
	I0310 20:43:26.002150    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 19:09 /usr/share/ca-certificates/8748.pem
	I0310 20:43:26.015770    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8748.pem
	I0310 20:43:26.157026    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8748.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:26.314947    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8464.pem && ln -fs /usr/share/ca-certificates/8464.pem /etc/ssl/certs/8464.pem"
	I0310 20:43:26.462308    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8464.pem
	I0310 20:43:26.516418    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:32 /usr/share/ca-certificates/8464.pem
	I0310 20:43:26.528884    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8464.pem
	I0310 20:43:26.664469    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8464.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:26.782235    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12056.pem && ln -fs /usr/share/ca-certificates/12056.pem /etc/ssl/certs/12056.pem"
	I0310 20:43:26.867982    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/12056.pem
	I0310 20:43:26.905162    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  6 07:21 /usr/share/ca-certificates/12056.pem
	I0310 20:43:26.917984    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12056.pem
	I0310 20:43:26.976936    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12056.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:27.044425    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6368.pem && ln -fs /usr/share/ca-certificates/6368.pem /etc/ssl/certs/6368.pem"
	I0310 20:43:27.127900    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6368.pem
	I0310 20:43:27.154767    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 19:11 /usr/share/ca-certificates/6368.pem
	I0310 20:43:27.167534    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6368.pem
	I0310 20:43:27.245667    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6368.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:27.301969    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5172.pem && ln -fs /usr/share/ca-certificates/5172.pem /etc/ssl/certs/5172.pem"
	I0310 20:43:27.458937    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5172.pem
	I0310 20:43:27.518220    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 26 21:25 /usr/share/ca-certificates/5172.pem
	I0310 20:43:27.537810    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5172.pem
	I0310 20:43:27.681710    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5172.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:27.782667    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6496.pem && ln -fs /usr/share/ca-certificates/6496.pem /etc/ssl/certs/6496.pem"
	I0310 20:43:27.934572    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6496.pem
	I0310 20:43:27.981053    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 19:16 /usr/share/ca-certificates/6496.pem
	I0310 20:43:28.006920    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6496.pem
	I0310 20:43:28.097343    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6496.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:28.233259    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1728.pem && ln -fs /usr/share/ca-certificates/1728.pem /etc/ssl/certs/1728.pem"
	I0310 20:43:28.350581    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1728.pem
	I0310 20:43:28.410979    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 20:57 /usr/share/ca-certificates/1728.pem
	I0310 20:43:28.433063    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1728.pem
	I0310 20:43:28.559700    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:28.823306    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6692.pem && ln -fs /usr/share/ca-certificates/6692.pem /etc/ssl/certs/6692.pem"
	I0310 20:43:29.049275    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6692.pem
	I0310 20:43:29.100344    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 20:42 /usr/share/ca-certificates/6692.pem
	I0310 20:43:29.124107    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6692.pem
	I0310 20:43:29.270728    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6692.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:29.488322    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3056.pem && ln -fs /usr/share/ca-certificates/3056.pem /etc/ssl/certs/3056.pem"
	I0310 20:43:29.633006    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3056.pem
	I0310 20:43:29.699773    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:11 /usr/share/ca-certificates/3056.pem
	I0310 20:43:29.710208    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3056.pem
	I0310 20:43:29.781308    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3056.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:29.849114    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1984.pem && ln -fs /usr/share/ca-certificates/1984.pem /etc/ssl/certs/1984.pem"
	I0310 20:43:29.990898    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1984.pem
	I0310 20:43:30.019724    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 21:55 /usr/share/ca-certificates/1984.pem
	I0310 20:43:30.031881    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1984.pem
	I0310 20:43:30.110683    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1984.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:30.177329    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5736.pem && ln -fs /usr/share/ca-certificates/5736.pem /etc/ssl/certs/5736.pem"
	I0310 20:43:30.281916    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5736.pem
	I0310 20:43:30.328408    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 25 23:18 /usr/share/ca-certificates/5736.pem
	I0310 20:43:30.353932    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5736.pem
	I0310 20:43:30.413554    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5736.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:30.505117    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5372.pem && ln -fs /usr/share/ca-certificates/5372.pem /etc/ssl/certs/5372.pem"
	I0310 20:43:30.625696    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5372.pem
	I0310 20:43:30.657868    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 23 00:40 /usr/share/ca-certificates/5372.pem
	I0310 20:43:30.671673    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5372.pem
	I0310 20:43:30.856252    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5372.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:30.941275    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7440.pem && ln -fs /usr/share/ca-certificates/7440.pem /etc/ssl/certs/7440.pem"
	I0310 20:43:31.034827    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7440.pem
	I0310 20:43:31.075529    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 13 14:39 /usr/share/ca-certificates/7440.pem
	I0310 20:43:31.088302    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7440.pem
	I0310 20:43:31.152406    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7440.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:31.232263    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0310 20:43:31.323994    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0310 20:43:31.356295    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Jan  5 23:33 /usr/share/ca-certificates/minikubeCA.pem
	I0310 20:43:31.363493    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0310 20:43:31.441399    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0310 20:43:31.542997    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3516.pem && ln -fs /usr/share/ca-certificates/3516.pem /etc/ssl/certs/3516.pem"
	I0310 20:43:31.735566    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3516.pem
	I0310 20:43:31.781889    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 19:10 /usr/share/ca-certificates/3516.pem
	I0310 20:43:31.802185    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3516.pem
	I0310 20:43:31.898673    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3516.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:32.025326    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1156.pem && ln -fs /usr/share/ca-certificates/1156.pem /etc/ssl/certs/1156.pem"
	I0310 20:43:32.124993    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1156.pem
	I0310 20:43:32.159171    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 00:26 /usr/share/ca-certificates/1156.pem
	I0310 20:43:32.164045    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1156.pem
	I0310 20:43:32.242050    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1156.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:32.353777    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7432.pem && ln -fs /usr/share/ca-certificates/7432.pem /etc/ssl/certs/7432.pem"
	I0310 20:43:32.442208    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7432.pem
	I0310 20:43:32.473317    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 17:58 /usr/share/ca-certificates/7432.pem
	I0310 20:43:32.488648    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7432.pem
	I0310 20:43:32.576064    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7432.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:32.699685    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9520.pem && ln -fs /usr/share/ca-certificates/9520.pem /etc/ssl/certs/9520.pem"
	I0310 20:43:32.854066    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9520.pem
	I0310 20:43:32.923900    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 14:54 /usr/share/ca-certificates/9520.pem
	I0310 20:43:32.932913    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9520.pem
	I0310 20:43:33.013454    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9520.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:33.086597    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9088.pem && ln -fs /usr/share/ca-certificates/9088.pem /etc/ssl/certs/9088.pem"
	I0310 20:43:33.207468    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9088.pem
	I0310 20:43:33.232721    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 00:22 /usr/share/ca-certificates/9088.pem
	I0310 20:43:33.247885    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9088.pem
	I0310 20:43:33.299737    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9088.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:33.378904    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7024.pem && ln -fs /usr/share/ca-certificates/7024.pem /etc/ssl/certs/7024.pem"
	I0310 20:43:33.479043    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7024.pem
	I0310 20:43:33.511016    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 23:11 /usr/share/ca-certificates/7024.pem
	I0310 20:43:33.522431    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7024.pem
	I0310 20:43:33.590494    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7024.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:33.675096    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5396.pem && ln -fs /usr/share/ca-certificates/5396.pem /etc/ssl/certs/5396.pem"
	I0310 20:43:33.834087    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5396.pem
	I0310 20:43:33.910774    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  8 23:38 /usr/share/ca-certificates/5396.pem
	I0310 20:43:33.921456    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5396.pem
	I0310 20:43:34.035403    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5396.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:34.190432    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/800.pem && ln -fs /usr/share/ca-certificates/800.pem /etc/ssl/certs/800.pem"
	I0310 20:43:34.296231    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/800.pem
	I0310 20:43:34.333164    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 24 01:48 /usr/share/ca-certificates/800.pem
	I0310 20:43:34.344921    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/800.pem
	I0310 20:43:34.454850    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/800.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:34.560630    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6552.pem && ln -fs /usr/share/ca-certificates/6552.pem /etc/ssl/certs/6552.pem"
	I0310 20:43:34.674102    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6552.pem
	I0310 20:43:34.714367    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 19 22:08 /usr/share/ca-certificates/6552.pem
	I0310 20:43:34.724462    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6552.pem
	I0310 20:43:34.770949    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6552.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:34.825171    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/352.pem && ln -fs /usr/share/ca-certificates/352.pem /etc/ssl/certs/352.pem"
	I0310 20:43:34.932747    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/352.pem
	I0310 20:43:34.979505    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 12 14:51 /usr/share/ca-certificates/352.pem
	I0310 20:43:34.989780    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/352.pem
	I0310 20:43:35.074345    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/352.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:35.178899    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4452.pem && ln -fs /usr/share/ca-certificates/4452.pem /etc/ssl/certs/4452.pem"
	I0310 20:43:35.344157    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4452.pem
	I0310 20:43:35.407088    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 22 23:48 /usr/share/ca-certificates/4452.pem
	I0310 20:43:35.421935    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4452.pem
	I0310 20:43:35.506568    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4452.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:35.636188    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7160.pem && ln -fs /usr/share/ca-certificates/7160.pem /etc/ssl/certs/7160.pem"
	I0310 20:43:35.806549    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7160.pem
	I0310 20:43:35.845252    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 12 04:51 /usr/share/ca-certificates/7160.pem
	I0310 20:43:35.855067    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7160.pem
	I0310 20:43:35.947625    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7160.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:36.092481    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10992.pem && ln -fs /usr/share/ca-certificates/10992.pem /etc/ssl/certs/10992.pem"
	I0310 20:43:36.231422    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/10992.pem
	I0310 20:43:36.267498    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 21:44 /usr/share/ca-certificates/10992.pem
	I0310 20:43:36.282581    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10992.pem
	I0310 20:43:36.411927    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10992.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:36.510587    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7452.pem && ln -fs /usr/share/ca-certificates/7452.pem /etc/ssl/certs/7452.pem"
	I0310 20:43:36.619525    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7452.pem
	I0310 20:43:36.676251    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 20 00:41 /usr/share/ca-certificates/7452.pem
	I0310 20:43:36.686090    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7452.pem
	I0310 20:43:36.748928    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7452.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:36.839144    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5040.pem && ln -fs /usr/share/ca-certificates/5040.pem /etc/ssl/certs/5040.pem"
	I0310 20:43:36.968040    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5040.pem
	I0310 20:43:37.076803    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 08:36 /usr/share/ca-certificates/5040.pem
	I0310 20:43:37.090806    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5040.pem
	I0310 20:43:37.173879    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5040.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:37.283979    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3920.pem && ln -fs /usr/share/ca-certificates/3920.pem /etc/ssl/certs/3920.pem"
	I0310 20:43:37.412917    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3920.pem
	I0310 20:43:37.496109    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 22:06 /usr/share/ca-certificates/3920.pem
	I0310 20:43:37.514167    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3920.pem
	I0310 20:43:37.733832    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3920.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:37.862256    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/232.pem && ln -fs /usr/share/ca-certificates/232.pem /etc/ssl/certs/232.pem"
	I0310 20:43:37.946409    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/232.pem
	I0310 20:43:37.982328    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 28 02:13 /usr/share/ca-certificates/232.pem
	I0310 20:43:37.991856    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/232.pem
	I0310 20:43:38.049635    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/232.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:38.138931    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6856.pem && ln -fs /usr/share/ca-certificates/6856.pem /etc/ssl/certs/6856.pem"
	I0310 20:43:38.257549    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6856.pem
	I0310 20:43:38.296273    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 00:21 /usr/share/ca-certificates/6856.pem
	I0310 20:43:38.306568    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6856.pem
	I0310 20:43:38.353537    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6856.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:38.440358    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1476.pem && ln -fs /usr/share/ca-certificates/1476.pem /etc/ssl/certs/1476.pem"
	I0310 20:43:38.687763    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1476.pem
	I0310 20:43:38.760621    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:50 /usr/share/ca-certificates/1476.pem
	I0310 20:43:38.775303    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1476.pem
	I0310 20:43:38.887508    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1476.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:39.109869    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4052.pem && ln -fs /usr/share/ca-certificates/4052.pem /etc/ssl/certs/4052.pem"
	I0310 20:43:39.334686    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4052.pem
	I0310 20:43:39.381001    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 18:40 /usr/share/ca-certificates/4052.pem
	I0310 20:43:39.398216    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4052.pem
	I0310 20:43:39.517736    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4052.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:39.587752    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4588.pem && ln -fs /usr/share/ca-certificates/4588.pem /etc/ssl/certs/4588.pem"
	I0310 20:43:39.654817    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4588.pem
	I0310 20:43:39.680405    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  3 21:41 /usr/share/ca-certificates/4588.pem
	I0310 20:43:39.687029    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4588.pem
	I0310 20:43:39.774960    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4588.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:39.851392    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2512.pem && ln -fs /usr/share/ca-certificates/2512.pem /etc/ssl/certs/2512.pem"
	I0310 20:43:39.937768    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2512.pem
	I0310 20:43:39.972501    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:32 /usr/share/ca-certificates/2512.pem
	I0310 20:43:39.991366    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2512.pem
	I0310 20:43:40.063146    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2512.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:40.173682    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1140.pem && ln -fs /usr/share/ca-certificates/1140.pem /etc/ssl/certs/1140.pem"
	I0310 20:43:40.266996    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1140.pem
	I0310 20:43:40.314983    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 02:25 /usr/share/ca-certificates/1140.pem
	I0310 20:43:40.326412    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1140.pem
	I0310 20:43:40.390447    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1140.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:40.532987    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6492.pem && ln -fs /usr/share/ca-certificates/6492.pem /etc/ssl/certs/6492.pem"
	I0310 20:43:41.227347    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6492.pem
	I0310 20:43:41.296128    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 01:11 /usr/share/ca-certificates/6492.pem
	I0310 20:43:41.316518    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6492.pem
	I0310 20:43:41.367884    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6492.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:41.463648    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4944.pem && ln -fs /usr/share/ca-certificates/4944.pem /etc/ssl/certs/4944.pem"
	I0310 20:43:41.626599    9740 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4944.pem
	I0310 20:43:41.654586    9740 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  9 23:40 /usr/share/ca-certificates/4944.pem
	I0310 20:43:41.662529    9740 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4944.pem
	I0310 20:43:41.740844    9740 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4944.pem /etc/ssl/certs/51391683.0"
	I0310 20:43:41.796078    9740 kubeadm.go:385] StartCluster: {Name:kubernetes-upgrade-20210310201637-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.5-rc.0 ClusterName:kubernetes-upgrade-20210310201637-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.6 Port:8443 KubernetesVersion:v1.20.5-rc.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 20:43:41.813711    9740 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0310 20:43:42.765590    9740 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0310 20:43:42.904770    9740 kubeadm.go:396] found existing configuration files, will attempt cluster restart
	I0310 20:43:42.905260    9740 kubeadm.go:594] restartCluster start
	I0310 20:43:42.915361    9740 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0310 20:43:43.033241    9740 kubeadm.go:125] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0310 20:43:43.042892    9740 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	I0310 20:43:43.694855    9740 kubeconfig.go:117] verify returned: extract IP: "kubernetes-upgrade-20210310201637-6496" does not appear in C:\Users\jenkins/.kube/config
	I0310 20:43:43.696778    9740 kubeconfig.go:128] "kubernetes-upgrade-20210310201637-6496" context is missing from C:\Users\jenkins/.kube/config - will repair!
	I0310 20:43:43.698888    9740 lock.go:36] WriteFile acquiring C:\Users\jenkins/.kube/config: {Name:mk815881434fcb75cb1a8bc7f2c24cf067660bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:43:43.723114    9740 kapi.go:59] client config for kubernetes-upgrade-20210310201637-6496: &rest.Config{Host:"https://127.0.0.1:55130", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins\\.minikube\\profiles\\kubernetes-upgrade-20210310201637-6496/client.crt", KeyFile:"C:\\Users\\jenkins\\.minikube\\profiles\\kubernetes-upgrade-20210310201637-6496/client.key", CAFile:"C:\\Users\\jenkins\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2611020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)}
	I0310 20:43:43.759360    9740 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0310 20:43:43.851753    9740 kubeadm.go:562] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2021-03-10 20:25:35.536491000 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2021-03-10 20:43:09.982217000 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta2
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 172.17.0.6
	@@ -17,7 +17,7 @@
	     node-ip: 172.17.0.6
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta2
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "172.17.0.6"]
	@@ -31,7 +31,7 @@
	   extraArgs:
	     leader-elect: "false"
	 certificatesDir: /var/lib/minikube/certs
	-clusterName: kubernetes-upgrade-20210310201637-6496
	+clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 dns:
	   type: CoreDNS
	@@ -39,8 +39,8 @@
	   local:
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	-      listen-metrics-urls: http://127.0.0.1:2381,http://172.17.0.6:2381
	-kubernetesVersion: v1.14.0
	+      proxy-refresh-interval: "70000"
	+kubernetesVersion: v1.20.5-rc.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
	I0310 20:43:43.852771    9740 kubeadm.go:1042] stopping kube-system containers ...
	I0310 20:43:43.867059    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0310 20:43:44.708564    9740 docker.go:261] Stopping containers: [66d44e1d7560 cc5bf7d7971c adb946d74113 a58acfb13228 db5e367b5040 d04b7875ec72 9c19e8c632c1 9181b2a8d2e7 e5fa65579d8b]
	I0310 20:43:44.727055    9740 ssh_runner.go:149] Run: docker stop 66d44e1d7560 cc5bf7d7971c adb946d74113 a58acfb13228 db5e367b5040 d04b7875ec72 9c19e8c632c1 9181b2a8d2e7 e5fa65579d8b
	I0310 20:43:45.531827    9740 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0310 20:43:45.663853    9740 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0310 20:43:45.745733    9740 kubeadm.go:153] found existing configuration files:
	-rw------- 1 root root 5759 Mar 10 20:33 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5791 Mar 10 20:33 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5955 Mar 10 20:33 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5739 Mar 10 20:33 /etc/kubernetes/scheduler.conf
	
	I0310 20:43:45.755747    9740 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0310 20:43:45.842396    9740 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0310 20:43:45.935813    9740 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0310 20:43:46.074999    9740 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0310 20:43:46.174689    9740 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0310 20:43:46.255242    9740 kubeadm.go:670] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0310 20:43:46.255242    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0310 20:43:49.485587    9740 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml": (3.230349s)
	I0310 20:43:49.485587    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0310 20:44:01.034678    9740 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (11.5491058s)
	I0310 20:44:01.034678    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0310 20:44:05.800564    9740 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml": (4.7658926s)
	I0310 20:44:05.800564    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0310 20:44:11.761956    9740 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml": (5.9608953s)
	I0310 20:44:11.761956    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0310 20:44:19.173690    9740 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml": (7.4117438s)
	I0310 20:44:19.174054    9740 api_server.go:48] waiting for apiserver process to appear ...
	I0310 20:44:19.183695    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:20.194468    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:21.198362    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:22.194196    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:23.195894    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:24.218564    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:25.191246    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:25.696318    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:26.212461    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:27.200607    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:27.694760    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:28.210995    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:28.685239    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:29.695755    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:30.189800    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:31.195367    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:31.692708    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:32.198927    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:32.699294    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:33.696165    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:34.189181    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:34.699690    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:35.198985    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:35.694964    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:36.194608    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:36.693479    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:37.192017    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:38.195773    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:38.696171    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:39.193503    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:39.694641    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:40.193287    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:40.695058    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:41.194552    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:41.699380    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:42.192351    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:42.702050    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:43.186706    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:43.693273    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:44.197929    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:44.695604    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:45.193101    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:45.693439    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:46.193176    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:46.693666    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:47.192807    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:47.694716    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:48.196565    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:48.695217    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:49.199391    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:49.703958    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:50.194118    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:50.700682    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:51.195568    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:51.693374    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:52.195337    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:52.696860    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:53.213398    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:53.706258    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:54.697767    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:55.193585    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:55.704195    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:56.202406    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:56.695722    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:57.216907    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:57.695019    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:58.201057    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:58.705627    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:59.202503    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:44:59.691383    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:00.205977    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:00.706532    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:01.201838    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:01.684381    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:02.186781    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:02.696042    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:03.194415    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:03.700992    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:04.190158    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:04.695427    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:05.198442    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:05.714591    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:06.694675    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:07.204731    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:07.705665    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:08.205101    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:08.700491    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:09.192521    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:09.705780    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:10.192570    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:10.702418    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:11.196371    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:11.695535    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:12.194937    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:12.703783    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:13.192468    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:13.698542    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:14.199015    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:14.714365    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:15.196816    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:15.690427    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:16.194198    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:16.695445    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:17.197269    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:17.708055    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:18.195592    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:18.701222    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:19.193439    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0310 20:45:20.600504    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: (1.4070667s)
	I0310 20:45:20.600775    9740 logs.go:255] 1 containers: [cc5bf7d7971c]
	I0310 20:45:20.615980    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0310 20:45:21.664066    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: (1.0479358s)
	I0310 20:45:21.664225    9740 logs.go:255] 1 containers: [d04b7875ec72]
	I0310 20:45:21.676123    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0310 20:45:23.237959    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: (1.5618383s)
	I0310 20:45:23.238104    9740 logs.go:255] 0 containers: []
	W0310 20:45:23.238104    9740 logs.go:257] No container was found matching "coredns"
	I0310 20:45:23.248892    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0310 20:45:25.435301    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: (2.1864127s)
	I0310 20:45:25.435301    9740 logs.go:255] 1 containers: [adb946d74113]
	I0310 20:45:25.454068    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0310 20:45:27.207756    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: (1.7536894s)
	I0310 20:45:27.208044    9740 logs.go:255] 0 containers: []
	W0310 20:45:27.208044    9740 logs.go:257] No container was found matching "kube-proxy"
	I0310 20:45:27.217871    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0310 20:45:28.137842    9740 logs.go:255] 0 containers: []
	W0310 20:45:28.138054    9740 logs.go:257] No container was found matching "kubernetes-dashboard"
	I0310 20:45:28.147172    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0310 20:45:29.350127    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: (1.2029559s)
	I0310 20:45:29.350303    9740 logs.go:255] 0 containers: []
	W0310 20:45:29.350303    9740 logs.go:257] No container was found matching "storage-provisioner"
	I0310 20:45:29.359572    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0310 20:45:30.463150    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: (1.1025668s)
	I0310 20:45:30.463288    9740 logs.go:255] 1 containers: [66d44e1d7560]
	I0310 20:45:30.463288    9740 logs.go:122] Gathering logs for container status ...
	I0310 20:45:30.463429    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0310 20:45:31.791639    9740 ssh_runner.go:189] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (1.328072s)
	I0310 20:45:31.793003    9740 logs.go:122] Gathering logs for dmesg ...
	I0310 20:45:31.794771    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0310 20:45:32.152567    9740 logs.go:122] Gathering logs for describe nodes ...
	I0310 20:45:32.152567    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0310 20:45:35.208075    9740 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (3.055512s)
	W0310 20:45:35.208193    9740 logs.go:129] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0310 20:45:35.208568    9740 logs.go:122] Gathering logs for kube-apiserver [cc5bf7d7971c] ...
	I0310 20:45:35.208568    9740 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 cc5bf7d7971c"
	I0310 20:45:37.880592    9740 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 cc5bf7d7971c": (2.6720271s)
	I0310 20:45:37.917053    9740 logs.go:122] Gathering logs for kube-controller-manager [66d44e1d7560] ...
	I0310 20:45:37.918006    9740 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 66d44e1d7560"
	I0310 20:45:40.285234    9740 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 66d44e1d7560": (2.3672309s)
	I0310 20:45:40.286872    9740 logs.go:122] Gathering logs for Docker ...
	I0310 20:45:40.287037    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0310 20:45:40.517277    9740 logs.go:122] Gathering logs for kubelet ...
	I0310 20:45:40.517456    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0310 20:45:41.422297    9740 logs.go:122] Gathering logs for etcd [d04b7875ec72] ...
	I0310 20:45:41.422297    9740 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 d04b7875ec72"
	I0310 20:45:45.676953    9740 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 d04b7875ec72": (4.2546607s)
	I0310 20:45:45.709061    9740 logs.go:122] Gathering logs for kube-scheduler [adb946d74113] ...
	I0310 20:45:45.709061    9740 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 adb946d74113"
	I0310 20:45:50.110957    9740 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 adb946d74113": (4.4019021s)
	I0310 20:45:52.682538    9740 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 20:45:55.371189    9740 ssh_runner.go:189] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.6886539s)
	I0310 20:45:55.371189    9740 api_server.go:68] duration metric: took 1m36.1976237s to wait for apiserver process to appear ...
	I0310 20:45:55.371189    9740 api_server.go:84] waiting for apiserver healthz status ...
	I0310 20:45:55.371189    9740 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55130/healthz ...
	I0310 20:48:24.128431    9740 api_server.go:241] https://127.0.0.1:55130/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0310 20:48:24.128706    9740 api_server.go:99] status: https://127.0.0.1:55130/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0310 20:48:24.638787    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0310 20:48:33.835176    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: (9.1964006s)
	I0310 20:48:33.835431    9740 logs.go:255] 2 containers: [3d2c98ba1bfd cc5bf7d7971c]
	I0310 20:48:33.843993    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0310 20:48:37.820712    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: (3.9764734s)
	I0310 20:48:37.820804    9740 logs.go:255] 3 containers: [97de25fff1e2 d32313e5411d d04b7875ec72]
	I0310 20:48:37.830843    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0310 20:48:44.208737    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: (6.3777058s)
	I0310 20:48:44.208873    9740 logs.go:255] 0 containers: []
	W0310 20:48:44.208873    9740 logs.go:257] No container was found matching "coredns"
	I0310 20:48:44.219708    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0310 20:48:51.217661    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: (6.9978193s)
	I0310 20:48:51.217807    9740 logs.go:255] 2 containers: [a45e8b20db73 adb946d74113]
	I0310 20:48:51.230353    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0310 20:48:53.407075    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: (2.1767246s)
	I0310 20:48:53.407075    9740 logs.go:255] 0 containers: []
	W0310 20:48:53.407075    9740 logs.go:257] No container was found matching "kube-proxy"
	I0310 20:48:53.412664    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0310 20:48:56.924709    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}: (3.5120496s)
	I0310 20:48:56.924709    9740 logs.go:255] 0 containers: []
	W0310 20:48:56.924709    9740 logs.go:257] No container was found matching "kubernetes-dashboard"
	I0310 20:48:56.935522    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0310 20:49:04.112402    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: (7.1768899s)
	I0310 20:49:04.112402    9740 logs.go:255] 0 containers: []
	W0310 20:49:04.112402    9740 logs.go:257] No container was found matching "storage-provisioner"
	I0310 20:49:04.124363    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0310 20:49:11.101897    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: (6.977543s)
	I0310 20:49:11.102742    9740 logs.go:255] 2 containers: [c9ee9f47c709 66d44e1d7560]
	I0310 20:49:11.102742    9740 logs.go:122] Gathering logs for kube-controller-manager [c9ee9f47c709] ...
	I0310 20:49:11.102742    9740 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 c9ee9f47c709"
	I0310 20:49:14.514482    9740 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 c9ee9f47c709": (3.4117446s)
	I0310 20:49:14.532865    9740 logs.go:122] Gathering logs for kube-controller-manager [66d44e1d7560] ...
	I0310 20:49:14.532865    9740 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 66d44e1d7560"
	I0310 20:49:19.988671    9740 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 66d44e1d7560": (5.4558127s)
	I0310 20:49:19.989681    9740 logs.go:122] Gathering logs for Docker ...
	I0310 20:49:19.990189    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0310 20:49:21.166750    9740 ssh_runner.go:189] Completed: /bin/bash -c "sudo journalctl -u docker -n 400": (1.176356s)
	I0310 20:49:21.171939    9740 logs.go:122] Gathering logs for container status ...
	I0310 20:49:21.171939    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0310 20:49:25.093243    9740 ssh_runner.go:189] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (3.9209946s)
	I0310 20:49:25.095161    9740 logs.go:122] Gathering logs for dmesg ...
	I0310 20:49:25.095161    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0310 20:49:26.220092    9740 ssh_runner.go:189] Completed: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400": (1.124676s)
	I0310 20:49:26.223717    9740 logs.go:122] Gathering logs for describe nodes ...
	I0310 20:49:26.223717    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0310 20:50:31.255440    9740 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1m5.0314153s)
	I0310 20:50:31.258702    9740 logs.go:122] Gathering logs for etcd [97de25fff1e2] ...
	I0310 20:50:31.258702    9740 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 97de25fff1e2"
	I0310 20:50:46.314832    9740 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 97de25fff1e2": (15.0561489s)
	I0310 20:50:46.338507    9740 logs.go:122] Gathering logs for kubelet ...
	I0310 20:50:46.338507    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0310 20:50:47.363943    9740 ssh_runner.go:189] Completed: /bin/bash -c "sudo journalctl -u kubelet -n 400": (1.0252413s)
	I0310 20:50:47.417750    9740 logs.go:122] Gathering logs for kube-apiserver [3d2c98ba1bfd] ...
	I0310 20:50:47.417750    9740 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 3d2c98ba1bfd"
	I0310 20:51:22.483823    9740 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 3d2c98ba1bfd": (35.0661193s)
	I0310 20:51:22.536660    9740 logs.go:122] Gathering logs for etcd [d32313e5411d] ...
	I0310 20:51:22.536660    9740 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 d32313e5411d"
	I0310 20:51:28.947838    9740 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 d32313e5411d": (6.4111864s)
	W0310 20:51:28.948857    9740 logs.go:129] failed etcd [d32313e5411d]: command: /bin/bash -c "docker logs --tail 400 d32313e5411d" /bin/bash -c "docker logs --tail 400 d32313e5411d": Process exited with status 1
	stdout:
	
	stderr:
	Error: No such container: d32313e5411d
	 output: 
	** stderr ** 
	Error: No such container: d32313e5411d
	
	** /stderr **
	I0310 20:51:28.948857    9740 logs.go:122] Gathering logs for kube-scheduler [adb946d74113] ...
	I0310 20:51:28.948857    9740 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 adb946d74113"
	I0310 20:52:17.870040    9740 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 adb946d74113": (48.9210907s)
	I0310 20:52:17.925077    9740 logs.go:122] Gathering logs for kube-apiserver [cc5bf7d7971c] ...
	I0310 20:52:17.925077    9740 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 cc5bf7d7971c"
	I0310 20:52:22.004064    9740 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 cc5bf7d7971c": (4.0789923s)
	W0310 20:52:22.004064    9740 logs.go:129] failed kube-apiserver [cc5bf7d7971c]: command: /bin/bash -c "docker logs --tail 400 cc5bf7d7971c" /bin/bash -c "docker logs --tail 400 cc5bf7d7971c": Process exited with status 1
	stdout:
	
	stderr:
	Error response from daemon: can not get logs from container which is dead or marked for removal
	 output: 
	** stderr ** 
	Error response from daemon: can not get logs from container which is dead or marked for removal
	
	** /stderr **
	I0310 20:52:22.004367    9740 logs.go:122] Gathering logs for etcd [d04b7875ec72] ...
	I0310 20:52:22.004367    9740 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 d04b7875ec72"
	I0310 20:52:40.763603    9740 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 d04b7875ec72": (18.7592607s)
	I0310 20:52:40.795755    9740 logs.go:122] Gathering logs for kube-scheduler [a45e8b20db73] ...
	I0310 20:52:40.795755    9740 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 a45e8b20db73"
	I0310 20:52:55.573544    9740 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 a45e8b20db73": (14.776927s)
	I0310 20:52:58.082172    9740 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55130/healthz ...
	I0310 20:53:08.558339    9740 api_server.go:241] https://127.0.0.1:55130/healthz returned 200:
	ok
	I0310 20:53:08.558717    9740 kubeadm.go:598] restartCluster took 9m25.6541917s
	W0310 20:53:08.559337    9740 out.go:191] ! Unable to restart cluster, will reset it: apiserver health: controlPlane never updated to v1.20.5-rc.0
	I0310 20:53:08.559656    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0310 20:56:41.872611    9740 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force": (3m33.3142315s)
	I0310 20:56:41.891287    9740 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0310 20:56:42.391581    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0310 20:56:44.068255    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}: (1.6766878s)
	I0310 20:56:44.077172    9740 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0310 20:56:44.371003    9740 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	I0310 20:56:44.380981    9740 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0310 20:56:44.841444    9740 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0310 20:56:44.841444    9740 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0310 21:01:30.736583    9740 out.go:150]   - Generating certificates and keys ...
	I0310 21:01:30.742971    9740 out.go:150]   - Booting up control plane ...
	W0310 21:01:30.746416    9740 out.go:191] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.5-rc.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0310 21:01:30.746713    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0310 21:02:14.377786    9740 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force": (43.6309152s)
	I0310 21:02:14.396259    9740 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0310 21:02:14.583282    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0310 21:02:15.411758    9740 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	I0310 21:02:15.427009    9740 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0310 21:02:15.516448    9740 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0310 21:02:15.516801    9740 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0310 21:06:50.725662    9740 out.go:150]   - Generating certificates and keys ...
	I0310 21:06:50.738637    9740 out.go:150]   - Booting up control plane ...
	I0310 21:06:50.740639    9740 kubeadm.go:387] StartCluster complete in 23m8.9493253s
	I0310 21:06:50.763337    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0310 21:07:00.167581    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: (9.4042622s)
	I0310 21:07:00.168518    9740 logs.go:255] 1 containers: [93ec1b7fa7df]
	I0310 21:07:00.184742    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0310 21:07:07.095022    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: (6.9102939s)
	I0310 21:07:07.095022    9740 logs.go:255] 1 containers: [894f5edfee33]
	I0310 21:07:07.109500    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0310 21:07:12.567847    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: (5.4583574s)
	I0310 21:07:12.568411    9740 logs.go:255] 0 containers: []
	W0310 21:07:12.568411    9740 logs.go:257] No container was found matching "coredns"
	I0310 21:07:12.576357    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0310 21:07:19.556431    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: (6.9797767s)
	I0310 21:07:19.557074    9740 logs.go:255] 1 containers: [07eedf8f3362]
	I0310 21:07:19.573248    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0310 21:07:26.930715    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: (7.3571367s)
	I0310 21:07:26.931558    9740 logs.go:255] 0 containers: []
	W0310 21:07:26.931558    9740 logs.go:257] No container was found matching "kube-proxy"
	I0310 21:07:26.940216    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0310 21:07:30.747565    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}: (3.8073562s)
	I0310 21:07:30.748250    9740 logs.go:255] 0 containers: []
	W0310 21:07:30.748250    9740 logs.go:257] No container was found matching "kubernetes-dashboard"
	I0310 21:07:30.756228    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0310 21:07:33.592507    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: (2.8360104s)
	I0310 21:07:33.592769    9740 logs.go:255] 0 containers: []
	W0310 21:07:33.592769    9740 logs.go:257] No container was found matching "storage-provisioner"
	I0310 21:07:33.600753    9740 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0310 21:07:36.733241    9740 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: (3.1324934s)
	I0310 21:07:36.733241    9740 logs.go:255] 2 containers: [d1a3829d9f2e 7fe915da4c34]
	I0310 21:07:36.733241    9740 logs.go:122] Gathering logs for kube-scheduler [07eedf8f3362] ...
	I0310 21:07:36.733241    9740 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 07eedf8f3362"
	I0310 21:07:39.825044    9740 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 07eedf8f3362": (3.0918092s)
	I0310 21:07:39.843622    9740 logs.go:122] Gathering logs for kube-controller-manager [d1a3829d9f2e] ...
	I0310 21:07:39.843900    9740 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 d1a3829d9f2e"
	I0310 21:07:45.957847    9740 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 d1a3829d9f2e": (6.1139582s)
	I0310 21:07:45.958642    9740 logs.go:122] Gathering logs for kube-controller-manager [7fe915da4c34] ...
	I0310 21:07:45.958642    9740 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 7fe915da4c34"
	I0310 21:07:55.573811    9740 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 7fe915da4c34": (9.615186s)
	I0310 21:07:55.593106    9740 logs.go:122] Gathering logs for container status ...
	I0310 21:07:55.593106    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0310 21:07:59.199059    9740 ssh_runner.go:189] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (3.6052851s)
	I0310 21:07:59.199059    9740 logs.go:122] Gathering logs for kubelet ...
	I0310 21:07:59.199059    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0310 21:08:00.953349    9740 ssh_runner.go:189] Completed: /bin/bash -c "sudo journalctl -u kubelet -n 400": (1.7542929s)
	I0310 21:08:01.025695    9740 logs.go:122] Gathering logs for describe nodes ...
	I0310 21:08:01.025695    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0310 21:08:30.002104    9740 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (28.9764581s)
	I0310 21:08:30.004771    9740 logs.go:122] Gathering logs for kube-apiserver [93ec1b7fa7df] ...
	I0310 21:08:30.004979    9740 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 93ec1b7fa7df"
	I0310 21:08:38.070459    9740 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 93ec1b7fa7df": (8.0654939s)
	I0310 21:08:38.097512    9740 logs.go:122] Gathering logs for dmesg ...
	I0310 21:08:38.097512    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0310 21:08:40.455317    9740 ssh_runner.go:189] Completed: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400": (2.3578088s)
	I0310 21:08:40.461996    9740 logs.go:122] Gathering logs for etcd [894f5edfee33] ...
	I0310 21:08:40.462140    9740 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 894f5edfee33"
	I0310 21:08:49.682795    9740 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 894f5edfee33": (9.2206713s)
	I0310 21:08:49.713386    9740 logs.go:122] Gathering logs for Docker ...
	I0310 21:08:49.713386    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0310 21:08:50.897027    9740 ssh_runner.go:189] Completed: /bin/bash -c "sudo journalctl -u docker -n 400": (1.1836434s)
	W0310 21:08:50.905310    9740 out.go:312] Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.5-rc.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0310 21:08:50.905777    9740 out.go:191] * 
	* 
	W0310 21:08:50.906111    9740 out.go:191] X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.5-rc.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.5-rc.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0310 21:08:50.906647    9740 out.go:191] * 
	* 
	W0310 21:08:50.906943    9740 out.go:191] * minikube is exiting due to an error. If the above message is not useful, open an issue:
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	W0310 21:08:50.906943    9740 out.go:191]   - https://github.com/kubernetes/minikube/issues/new/choose
	  - https://github.com/kubernetes/minikube/issues/new/choose
	I0310 21:08:50.916017    9740 out.go:129] 
	W0310 21:08:50.916747    9740 out.go:191] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.5-rc.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.5-rc.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0310 21:08:50.917766    9740 out.go:191] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0310 21:08:50.917766    9740 out.go:191] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0310 21:08:50.920523    9740 out.go:129] 

** /stderr **
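The wait-control-plane failure above is accompanied by the preflight warning `detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd"`, which is the mismatch minikube's suggestion later points at. As a minimal sketch (not minikube's or kubeadm's actual code; the helper name and return shape are illustrative), one could scan preflight output for that specific warning:

```python
# Sketch: spot the cgroup-driver mismatch kubeadm warns about.
# The warning text is copied from the log above; detect_cgroup_mismatch
# is a hypothetical helper, not part of minikube.
import re

def detect_cgroup_mismatch(preflight_lines):
    """Return the Docker cgroup driver kubeadm detected when it differs
    from the recommended 'systemd' driver, else None."""
    pat = re.compile(
        r'\[WARNING IsDockerSystemdCheck\]: detected "(?P<driver>[^"]+)" '
        r'as the Docker cgroup driver')
    for line in preflight_lines:
        m = pat.search(line)
        if m and m.group("driver") != "systemd":
            return m.group("driver")
    return None

warnings = [
    '[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker '
    'cgroup driver. The recommended driver is "systemd".',
    '[WARNING Swap]: running with swap on is not supported.',
]
```

A detected mismatch corresponds to the remediation the log itself suggests: restarting with `--extra-config=kubelet.cgroup-driver=systemd` so the kubelet agrees with Docker's driver.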
version_upgrade_test.go:241: failed to upgrade with newest k8s version. args: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20210310201637-6496 --memory=2200 --kubernetes-version=v1.20.5-rc.0 --alsologtostderr -v=1 --driver=docker : exit status 109
version_upgrade_test.go:244: (dbg) Run:  kubectl --context kubernetes-upgrade-20210310201637-6496 version --output=json

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:263: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:265: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20210310201637-6496 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:265: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20210310201637-6496 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker: exit status 106 (498.4737ms)

-- stdout --
	* [kubernetes-upgrade-20210310201637-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - MINIKUBE_LOCATION=10722
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.20.5-rc.0 cluster to v1.14.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.14.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20210310201637-6496
	    minikube start -p kubernetes-upgrade-20210310201637-6496 --kubernetes-version=v1.14.0
	    
	    2) Create a second cluster with Kubernetes 1.14.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210310201637-64962 --kubernetes-version=v1.14.0
	    
	    3) Use the existing cluster at version Kubernetes 1.20.5-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210310201637-6496 --kubernetes-version=v1.20.5-rc.0
	    

** /stderr **
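The `K8S_DOWNGRADE_UNSUPPORTED` exit above comes from comparing the requested Kubernetes version (v1.14.0) against the version the existing cluster already runs (v1.20.5-rc.0). A hedged sketch of that comparison (function names and the tuple-based ordering are illustrative assumptions; minikube's real implementation uses a semver library and richer error handling):

```python
# Illustrative downgrade check -- not minikube's actual code.
# Pre-release tags such as "-rc.0" are dropped here; only the numeric
# major.minor.patch triple is compared.
def parse_version(v):
    core = v.lstrip("v").split("-", 1)[0]   # "v1.20.5-rc.0" -> "1.20.5"
    return tuple(int(p) for p in core.split("."))

def downgrade_requested(current, requested):
    """True when the requested version is older than the running cluster,
    i.e. the case minikube refuses with K8S_DOWNGRADE_UNSUPPORTED."""
    return parse_version(requested) < parse_version(current)
```

For the run above, `downgrade_requested("v1.20.5-rc.0", "v1.14.0")` is true, which matches the exit status 106 and the delete/recreate suggestions in the log.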
version_upgrade_test.go:269: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:271: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20210310201637-6496 --memory=2200 --kubernetes-version=v1.20.5-rc.0 --alsologtostderr -v=1 --driver=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:271: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20210310201637-6496 --memory=2200 --kubernetes-version=v1.20.5-rc.0 --alsologtostderr -v=1 --driver=docker: exit status 1 (2m44.8314146s)

-- stdout --
	* [kubernetes-upgrade-20210310201637-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - MINIKUBE_LOCATION=10722
	* Using the docker driver based on existing profile
	* Starting control plane node kubernetes-upgrade-20210310201637-6496 in cluster kubernetes-upgrade-20210310201637-6496
	* Updating the running docker "kubernetes-upgrade-20210310201637-6496" container ...
	* Preparing Kubernetes v1.20.5-rc.0 on Docker 20.10.3 ...

-- /stdout --
** stderr ** 
	I0310 21:08:53.198138    9176 out.go:239] Setting OutFile to fd 2604 ...
	I0310 21:08:53.199166    9176 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 21:08:53.200127    9176 out.go:252] Setting ErrFile to fd 2924...
	I0310 21:08:53.200127    9176 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 21:08:53.212360    9176 out.go:246] Setting JSON to false
	I0310 21:08:53.220251    9176 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":35999,"bootTime":1615374534,"procs":117,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0310 21:08:53.220377    9176 start.go:116] gopshost.Virtualization returned error: not implemented yet
	I0310 21:08:53.226161    9176 out.go:129] * [kubernetes-upgrade-20210310201637-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0310 21:08:53.229201    9176 out.go:129]   - MINIKUBE_LOCATION=10722
	I0310 21:08:53.231142    9176 driver.go:323] Setting default libvirt URI to qemu:///system
	I0310 21:08:55.390559    9176 docker.go:119] docker version: linux-20.10.2
	I0310 21:08:55.407225    9176 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 21:08:58.026530    9176 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (2.6193095s)
	I0310 21:08:58.029916    9176 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:9 ContainersRunning:9 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:102 OomKillDisable:true NGoroutines:87 SystemTime:2021-03-10 21:08:56.8694556 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 21:08:58.045021    9176 out.go:129] * Using the docker driver based on existing profile
	I0310 21:08:58.045021    9176 start.go:276] selected driver: docker
	I0310 21:08:58.045021    9176 start.go:718] validating driver "docker" against &{Name:kubernetes-upgrade-20210310201637-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.5-rc.0 ClusterName:kubernetes-upgrade-20210310201637-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.6 Port:8443 KubernetesVersion:v1.20.5-rc.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 21:08:58.045021    9176 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0310 21:09:00.025950    9176 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 21:09:01.041013    9176 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0148683s)
	I0310 21:09:01.042298    9176 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:9 ContainersRunning:9 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:100 OomKillDisable:true NGoroutines:76 SystemTime:2021-03-10 21:09:00.5943683 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 21:09:01.043154    9176 start_flags.go:398] config:
	{Name:kubernetes-upgrade-20210310201637-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.5-rc.0 ClusterName:kubernetes-upgrade-20210310201637-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.6 Port:8443 KubernetesVersion:v1.20.5-rc.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 21:09:01.055906    9176 out.go:129] * Starting control plane node kubernetes-upgrade-20210310201637-6496 in cluster kubernetes-upgrade-20210310201637-6496
	I0310 21:09:01.659272    9176 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	I0310 21:09:01.660007    9176 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	I0310 21:09:01.660199    9176 preload.go:97] Checking if preload exists for k8s version v1.20.5-rc.0 and runtime docker
	I0310 21:09:01.660629    9176 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.5-rc.0-docker-overlay2-amd64.tar.lz4
	I0310 21:09:01.660824    9176 cache.go:54] Caching tarball of preloaded images
	I0310 21:09:01.661020    9176 preload.go:131] Found C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.5-rc.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0310 21:09:01.661234    9176 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.5-rc.0 on docker
	I0310 21:09:01.661421    9176 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\config.json ...
	I0310 21:09:01.667303    9176 cache.go:185] Successfully downloaded all kic artifacts
	I0310 21:09:01.667905    9176 start.go:313] acquiring machines lock for kubernetes-upgrade-20210310201637-6496: {Name:mkf139d86564eb552ba6ebdc1acdb4bdc8579ad8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:09:01.668318    9176 start.go:317] acquired machines lock for "kubernetes-upgrade-20210310201637-6496" in 413µs
	I0310 21:09:01.668706    9176 start.go:93] Skipping create...Using existing machine configuration
	I0310 21:09:01.668903    9176 fix.go:55] fixHost starting: 
	I0310 21:09:01.685134    9176 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20210310201637-6496 --format={{.State.Status}}
	I0310 21:09:02.294551    9176 fix.go:108] recreateIfNeeded on kubernetes-upgrade-20210310201637-6496: state=Running err=<nil>
	W0310 21:09:02.294551    9176 fix.go:134] unexpected machine state, will restart: <nil>
	I0310 21:09:02.300170    9176 out.go:129] * Updating the running docker "kubernetes-upgrade-20210310201637-6496" container ...
	I0310 21:09:02.300170    9176 machine.go:88] provisioning docker machine ...
	I0310 21:09:02.300170    9176 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20210310201637-6496"
	I0310 21:09:02.310123    9176 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	I0310 21:09:02.895296    9176 main.go:121] libmachine: Using SSH client type: native
	I0310 21:09:02.896040    9176 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55133 <nil> <nil>}
	I0310 21:09:02.896040    9176 main.go:121] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-20210310201637-6496 && echo "kubernetes-upgrade-20210310201637-6496" | sudo tee /etc/hostname
	I0310 21:09:07.503920    9176 main.go:121] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20210310201637-6496
	
	I0310 21:09:07.514274    9176 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	I0310 21:09:08.129522    9176 main.go:121] libmachine: Using SSH client type: native
	I0310 21:09:08.131031    9176 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55133 <nil> <nil>}
	I0310 21:09:08.131031    9176 main.go:121] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-20210310201637-6496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20210310201637-6496/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-20210310201637-6496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0310 21:09:10.282506    9176 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	I0310 21:09:10.283081    9176 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	I0310 21:09:10.283081    9176 ubuntu.go:177] setting up certificates
	I0310 21:09:10.283081    9176 provision.go:83] configureAuth start
	I0310 21:09:10.291167    9176 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20210310201637-6496
	I0310 21:09:10.864348    9176 provision.go:137] copyHostCerts
	I0310 21:09:10.865096    9176 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	I0310 21:09:10.865096    9176 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	I0310 21:09:10.865468    9176 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	I0310 21:09:10.872722    9176 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	I0310 21:09:10.872722    9176 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	I0310 21:09:10.873164    9176 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	I0310 21:09:10.876756    9176 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	I0310 21:09:10.877011    9176 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	I0310 21:09:10.877685    9176 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	I0310 21:09:10.880790    9176 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.kubernetes-upgrade-20210310201637-6496 san=[172.17.0.6 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-20210310201637-6496]
	I0310 21:09:11.037830    9176 provision.go:165] copyRemoteCerts
	I0310 21:09:11.046828    9176 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0310 21:09:11.062118    9176 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	I0310 21:09:11.616810    9176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55133 SSHKeyPath:C:\Users\jenkins\.minikube\machines\kubernetes-upgrade-20210310201637-6496\id_rsa Username:docker}
	I0310 21:09:13.431939    9176 ssh_runner.go:189] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (2.3851142s)
	I0310 21:09:13.433197    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0310 21:09:14.679642    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1285 bytes)
	I0310 21:09:15.321357    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0310 21:09:16.429346    9176 provision.go:86] duration metric: configureAuth took 6.1462749s
	I0310 21:09:16.429840    9176 ubuntu.go:193] setting minikube options for container-runtime
	I0310 21:09:16.445978    9176 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	I0310 21:09:17.037100    9176 main.go:121] libmachine: Using SSH client type: native
	I0310 21:09:17.037927    9176 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55133 <nil> <nil>}
	I0310 21:09:17.037927    9176 main.go:121] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0310 21:09:19.450285    9176 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0310 21:09:19.450580    9176 ubuntu.go:71] root file system type: overlay
	I0310 21:09:19.451514    9176 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	I0310 21:09:19.457561    9176 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	I0310 21:09:20.106269    9176 main.go:121] libmachine: Using SSH client type: native
	I0310 21:09:20.107362    9176 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55133 <nil> <nil>}
	I0310 21:09:20.107500    9176 main.go:121] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0310 21:09:21.351425    9176 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0310 21:09:21.360722    9176 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	I0310 21:09:21.963286    9176 main.go:121] libmachine: Using SSH client type: native
	I0310 21:09:21.963589    9176 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55133 <nil> <nil>}
	I0310 21:09:21.963903    9176 main.go:121] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0310 21:09:26.189834    9176 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	I0310 21:09:26.190147    9176 machine.go:91] provisioned docker machine in 23.8900159s
	I0310 21:09:26.190374    9176 start.go:267] post-start starting for "kubernetes-upgrade-20210310201637-6496" (driver="docker")
	I0310 21:09:26.190374    9176 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0310 21:09:26.203273    9176 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0310 21:09:26.210405    9176 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	I0310 21:09:26.809066    9176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55133 SSHKeyPath:C:\Users\jenkins\.minikube\machines\kubernetes-upgrade-20210310201637-6496\id_rsa Username:docker}
	I0310 21:09:29.583285    9176 ssh_runner.go:189] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (3.3796806s)
	I0310 21:09:29.593344    9176 ssh_runner.go:149] Run: cat /etc/os-release
	I0310 21:09:29.797408    9176 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0310 21:09:29.797522    9176 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0310 21:09:29.797522    9176 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0310 21:09:29.797522    9176 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0310 21:09:29.798047    9176 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	I0310 21:09:29.798687    9176 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	I0310 21:09:29.803970    9176 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	I0310 21:09:29.806592    9176 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	I0310 21:09:29.828324    9176 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	I0310 21:09:30.583521    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	I0310 21:09:31.969203    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	I0310 21:09:32.510373    9176 start.go:270] post-start completed in 6.3200089s
	I0310 21:09:32.521563    9176 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0310 21:09:32.529636    9176 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	I0310 21:09:33.133974    9176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55133 SSHKeyPath:C:\Users\jenkins\.minikube\machines\kubernetes-upgrade-20210310201637-6496\id_rsa Username:docker}
	I0310 21:09:34.720430    9176 ssh_runner.go:189] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (2.1988704s)
	I0310 21:09:34.720430    9176 fix.go:57] fixHost completed within 33.0515805s
	I0310 21:09:34.720430    9176 start.go:80] releasing machines lock for "kubernetes-upgrade-20210310201637-6496", held for 33.0519723s
	I0310 21:09:34.729428    9176 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20210310201637-6496
	I0310 21:09:35.418598    9176 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0310 21:09:35.427538    9176 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	I0310 21:09:35.436820    9176 ssh_runner.go:149] Run: systemctl --version
	I0310 21:09:35.444532    9176 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	I0310 21:09:36.038856    9176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55133 SSHKeyPath:C:\Users\jenkins\.minikube\machines\kubernetes-upgrade-20210310201637-6496\id_rsa Username:docker}
	I0310 21:09:36.143815    9176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55133 SSHKeyPath:C:\Users\jenkins\.minikube\machines\kubernetes-upgrade-20210310201637-6496\id_rsa Username:docker}
	I0310 21:09:37.318965    9176 ssh_runner.go:189] Completed: systemctl --version: (1.8821482s)
	I0310 21:09:37.337199    9176 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0310 21:09:38.845698    9176 ssh_runner.go:189] Completed: sudo systemctl is-active --quiet service containerd: (1.5085016s)
	I0310 21:09:38.857372    9176 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 21:09:39.409968    9176 cruntime.go:206] skipping containerd shutdown because we are bound to it
	I0310 21:09:39.414887    9176 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (3.9962963s)
	I0310 21:09:39.426351    9176 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0310 21:09:39.528680    9176 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0310 21:09:41.137542    9176 ssh_runner.go:189] Completed: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml": (1.6088646s)
	I0310 21:09:41.152973    9176 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 21:09:41.487171    9176 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0310 21:09:47.861123    9176 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (6.3739616s)
	I0310 21:09:47.871356    9176 ssh_runner.go:149] Run: sudo systemctl start docker
	I0310 21:09:48.400109    9176 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0310 21:09:51.145670    9176 ssh_runner.go:189] Completed: docker version --format {{.Server.Version}}: (2.7455652s)
	I0310 21:09:51.148074    9176 out.go:150] * Preparing Kubernetes v1.20.5-rc.0 on Docker 20.10.3 ...
	I0310 21:09:51.148074    9176 cli_runner.go:115] Run: docker exec -t kubernetes-upgrade-20210310201637-6496 dig +short host.docker.internal
	I0310 21:09:52.780571    9176 cli_runner.go:168] Completed: docker exec -t kubernetes-upgrade-20210310201637-6496 dig +short host.docker.internal: (1.6325s)
	I0310 21:09:52.780972    9176 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	I0310 21:09:52.799566    9176 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0310 21:09:53.283880    9176 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20210310201637-6496
	I0310 21:09:53.927948    9176 preload.go:97] Checking if preload exists for k8s version v1.20.5-rc.0 and runtime docker
	I0310 21:09:53.928568    9176 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.5-rc.0-docker-overlay2-amd64.tar.lz4
	I0310 21:09:53.937693    9176 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 21:09:58.161109    9176 ssh_runner.go:189] Completed: docker images --format {{.Repository}}:{{.Tag}}: (4.2234227s)
	I0310 21:09:58.161547    9176 docker.go:423] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-controller-manager:v1.20.5-rc.0
	k8s.gcr.io/kube-apiserver:v1.20.5-rc.0
	k8s.gcr.io/kube-scheduler:v1.20.5-rc.0
	k8s.gcr.io/kube-proxy:v1.20.5-rc.0
	minikube-local-cache-test:functional-20210105233232-2512
	minikube-local-cache-test:functional-20210106002159-6856
	minikube-local-cache-test:functional-20210106011107-6492
	minikube-local-cache-test:functional-20210106215525-1984
	minikube-local-cache-test:functional-20210107002220-9088
	minikube-local-cache-test:functional-20210107190945-8748
	minikube-local-cache-test:functional-20210112045103-7160
	minikube-local-cache-test:functional-20210114204234-6692
	minikube-local-cache-test:functional-20210115023213-8464
	minikube-local-cache-test:functional-20210115191024-3516
	minikube-local-cache-test:functional-20210119220838-6552
	minikube-local-cache-test:functional-20210120022529-1140
	minikube-local-cache-test:functional-20210120175851-7432
	minikube-local-cache-test:functional-20210120214442-10992
	minikube-local-cache-test:functional-20210120231122-7024
	minikube-local-cache-test:functional-20210123004019-5372
	minikube-local-cache-test:functional-20210126212539-5172
	minikube-local-cache-test:functional-20210128021318-232
	minikube-local-cache-test:functional-20210212145109-352
	minikube-local-cache-test:functional-20210213143925-7440
	minikube-local-cache-test:functional-20210219145454-9520
	minikube-local-cache-test:functional-20210219220622-3920
	minikube-local-cache-test:functional-20210220004129-7452
	minikube-local-cache-test:functional-20210224014800-800
	minikube-local-cache-test:functional-20210225231842-5736
	minikube-local-cache-test:functional-20210301195830-5700
	minikube-local-cache-test:functional-20210303214129-4588
	minikube-local-cache-test:functional-20210304002630-1156
	minikube-local-cache-test:functional-20210304184021-4052
	minikube-local-cache-test:functional-20210306072141-12056
	minikube-local-cache-test:functional-20210308233820-5396
	minikube-local-cache-test:functional-20210309234032-4944
	minikube-local-cache-test:functional-20210310083645-5040
	minikube-local-cache-test:functional-20210310191609-6496
	kubernetesui/dashboard:v2.1.0
	gcr.io/k8s-minikube/storage-provisioner:v4
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	<none>:<none>
	<none>:<none>
	<none>:<none>
	<none>:<none>
	<none>:<none>
	k8s.gcr.io/etcd:3.3.10
	<none>:<none>
	
	-- /stdout --
	I0310 21:09:58.161547    9176 docker.go:360] Images already preloaded, skipping extraction
	I0310 21:09:58.169779    9176 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 21:10:00.243433    9176 ssh_runner.go:189] Completed: docker images --format {{.Repository}}:{{.Tag}}: (2.0736574s)
	I0310 21:10:00.243433    9176 docker.go:423] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-controller-manager:v1.20.5-rc.0
	k8s.gcr.io/kube-apiserver:v1.20.5-rc.0
	k8s.gcr.io/kube-scheduler:v1.20.5-rc.0
	k8s.gcr.io/kube-proxy:v1.20.5-rc.0
	minikube-local-cache-test:functional-20210105233232-2512
	minikube-local-cache-test:functional-20210106002159-6856
	minikube-local-cache-test:functional-20210106011107-6492
	minikube-local-cache-test:functional-20210106215525-1984
	minikube-local-cache-test:functional-20210107002220-9088
	minikube-local-cache-test:functional-20210107190945-8748
	minikube-local-cache-test:functional-20210112045103-7160
	minikube-local-cache-test:functional-20210114204234-6692
	minikube-local-cache-test:functional-20210115023213-8464
	minikube-local-cache-test:functional-20210115191024-3516
	minikube-local-cache-test:functional-20210119220838-6552
	minikube-local-cache-test:functional-20210120022529-1140
	minikube-local-cache-test:functional-20210120175851-7432
	minikube-local-cache-test:functional-20210120214442-10992
	minikube-local-cache-test:functional-20210120231122-7024
	minikube-local-cache-test:functional-20210123004019-5372
	minikube-local-cache-test:functional-20210126212539-5172
	minikube-local-cache-test:functional-20210128021318-232
	minikube-local-cache-test:functional-20210212145109-352
	minikube-local-cache-test:functional-20210213143925-7440
	minikube-local-cache-test:functional-20210219145454-9520
	minikube-local-cache-test:functional-20210219220622-3920
	minikube-local-cache-test:functional-20210220004129-7452
	minikube-local-cache-test:functional-20210224014800-800
	minikube-local-cache-test:functional-20210225231842-5736
	minikube-local-cache-test:functional-20210301195830-5700
	minikube-local-cache-test:functional-20210303214129-4588
	minikube-local-cache-test:functional-20210304002630-1156
	minikube-local-cache-test:functional-20210304184021-4052
	minikube-local-cache-test:functional-20210306072141-12056
	minikube-local-cache-test:functional-20210308233820-5396
	minikube-local-cache-test:functional-20210309234032-4944
	minikube-local-cache-test:functional-20210310083645-5040
	minikube-local-cache-test:functional-20210310191609-6496
	kubernetesui/dashboard:v2.1.0
	gcr.io/k8s-minikube/storage-provisioner:v4
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	<none>:<none>
	<none>:<none>
	<none>:<none>
	<none>:<none>
	<none>:<none>
	k8s.gcr.io/etcd:3.3.10
	<none>:<none>
	
	-- /stdout --
	I0310 21:10:00.243433    9176 cache_images.go:73] Images are preloaded, skipping loading
	I0310 21:10:00.252062    9176 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0310 21:10:22.293925    9176 ssh_runner.go:189] Completed: docker info --format {{.CgroupDriver}}: (22.0418961s)
	I0310 21:10:22.294531    9176 cni.go:74] Creating CNI manager for ""
	I0310 21:10:22.294531    9176 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	I0310 21:10:22.294531    9176 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0310 21:10:22.294989    9176 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.6 APIServerPort:8443 KubernetesVersion:v1.20.5-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20210310201637-6496 NodeName:kubernetes-upgrade-20210310201637-6496 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0310 21:10:22.295141    9176 kubeadm.go:154] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.0.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-20210310201637-6496"
	  kubeletExtraArgs:
	    node-ip: 172.17.0.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.0.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.5-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	
	I0310 21:10:22.295141    9176 kubeadm.go:919] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.5-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-20210310201637-6496 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.5-rc.0 ClusterName:kubernetes-upgrade-20210310201637-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0310 21:10:22.304683    9176 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.5-rc.0
	I0310 21:10:23.052437    9176 binaries.go:44] Found k8s binaries, skipping transfer
	I0310 21:10:23.058571    9176 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0310 21:10:23.762509    9176 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0310 21:10:24.508384    9176 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0310 21:10:25.149056    9176 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1869 bytes)
	I0310 21:10:25.584019    9176 ssh_runner.go:149] Run: grep 172.17.0.6	control-plane.minikube.internal$ /etc/hosts
	I0310 21:10:25.700674    9176 certs.go:52] Setting up C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496 for IP: 172.17.0.6
	I0310 21:10:25.700674    9176 certs.go:171] skipping minikubeCA CA generation: C:\Users\jenkins\.minikube\ca.key
	I0310 21:10:25.700674    9176 certs.go:171] skipping proxyClientCA CA generation: C:\Users\jenkins\.minikube\proxy-client-ca.key
	I0310 21:10:25.700674    9176 certs.go:275] skipping minikube-user signed cert generation: C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\client.key
	I0310 21:10:25.703728    9176 certs.go:275] skipping minikube signed cert generation: C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\apiserver.key.76cb2290
	I0310 21:10:25.704120    9176 certs.go:275] skipping aggregator signed cert generation: C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\proxy-client.key
	I0310 21:10:25.706083    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992.pem (1338 bytes)
	W0310 21:10:25.711001    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.711001    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140.pem (1338 bytes)
	W0310 21:10:25.711001    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.711001    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156.pem (1338 bytes)
	W0310 21:10:25.711932    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.711932    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056.pem (1338 bytes)
	W0310 21:10:25.711932    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.711932    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476.pem (1338 bytes)
	W0310 21:10:25.712882    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.712882    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728.pem (1338 bytes)
	W0310 21:10:25.712882    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.712882    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984.pem (1338 bytes)
	W0310 21:10:25.713895    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.713895    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232.pem (1338 bytes)
	W0310 21:10:25.713895    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.713895    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512.pem (1338 bytes)
	W0310 21:10:25.714982    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.714982    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056.pem (1338 bytes)
	W0310 21:10:25.714982    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.714982    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516.pem (1338 bytes)
	W0310 21:10:25.715936    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.715936    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352.pem (1338 bytes)
	W0310 21:10:25.715936    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.715936    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920.pem (1338 bytes)
	W0310 21:10:25.717087    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.717457    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052.pem (1338 bytes)
	W0310 21:10:25.717867    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.717867    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452.pem (1338 bytes)
	W0310 21:10:25.718210    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.718602    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588.pem (1338 bytes)
	W0310 21:10:25.718895    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.718895    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944.pem (1338 bytes)
	W0310 21:10:25.718895    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.718895    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040.pem (1338 bytes)
	W0310 21:10:25.718895    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.718895    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172.pem (1338 bytes)
	W0310 21:10:25.718895    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.718895    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372.pem (1338 bytes)
	W0310 21:10:25.718895    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.718895    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396.pem (1338 bytes)
	W0310 21:10:25.718895    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.718895    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700.pem (1338 bytes)
	W0310 21:10:25.718895    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.718895    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736.pem (1338 bytes)
	W0310 21:10:25.718895    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.718895    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368.pem (1338 bytes)
	W0310 21:10:25.718895    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.718895    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492.pem (1338 bytes)
	W0310 21:10:25.718895    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.718895    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496.pem (1338 bytes)
	W0310 21:10:25.718895    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.723587    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552.pem (1338 bytes)
	W0310 21:10:25.724251    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.724695    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692.pem (1338 bytes)
	W0310 21:10:25.725370    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.725523    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856.pem (1338 bytes)
	W0310 21:10:25.726079    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.726254    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024.pem (1338 bytes)
	W0310 21:10:25.726918    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.727048    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160.pem (1338 bytes)
	W0310 21:10:25.727458    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.727593    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432.pem (1338 bytes)
	W0310 21:10:25.728280    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.729060    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440.pem (1338 bytes)
	W0310 21:10:25.729505    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.729505    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452.pem (1338 bytes)
	W0310 21:10:25.730223    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.730430    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800.pem (1338 bytes)
	W0310 21:10:25.730768    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.730958    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464.pem (1338 bytes)
	W0310 21:10:25.731606    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.731772    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748.pem (1338 bytes)
	W0310 21:10:25.732114    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.732114    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088.pem (1338 bytes)
	W0310 21:10:25.732692    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.732692    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520.pem (1338 bytes)
	W0310 21:10:25.732901    9176 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520_empty.pem, impossibly tiny 0 bytes
	I0310 21:10:25.732901    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca-key.pem (1679 bytes)
	I0310 21:10:25.733604    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca.pem (1078 bytes)
	I0310 21:10:25.734088    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\cert.pem (1123 bytes)
	I0310 21:10:25.734088    9176 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\key.pem (1679 bytes)
	I0310 21:10:25.741263    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0310 21:10:26.486723    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0310 21:10:27.464746    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0310 21:10:29.545593    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\kubernetes-upgrade-20210310201637-6496\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0310 21:10:30.710484    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0310 21:10:31.381093    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0310 21:10:32.232504    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0310 21:10:32.789320    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0310 21:10:33.472190    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	I0310 21:10:34.398968    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\2512.pem --> /usr/share/ca-certificates/2512.pem (1338 bytes)
	I0310 21:10:35.428522    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8464.pem --> /usr/share/ca-certificates/8464.pem (1338 bytes)
	I0310 21:10:36.282356    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1140.pem --> /usr/share/ca-certificates/1140.pem (1338 bytes)
	I0310 21:10:38.145729    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1156.pem --> /usr/share/ca-certificates/1156.pem (1338 bytes)
	I0310 21:10:39.236145    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3056.pem --> /usr/share/ca-certificates/3056.pem (1338 bytes)
	I0310 21:10:40.355870    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6492.pem --> /usr/share/ca-certificates/6492.pem (1338 bytes)
	I0310 21:10:41.424289    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4944.pem --> /usr/share/ca-certificates/4944.pem (1338 bytes)
	I0310 21:10:42.397495    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5372.pem --> /usr/share/ca-certificates/5372.pem (1338 bytes)
	I0310 21:10:43.542612    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\352.pem --> /usr/share/ca-certificates/352.pem (1338 bytes)
	I0310 21:10:45.216856    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7432.pem --> /usr/share/ca-certificates/7432.pem (1338 bytes)
	I0310 21:10:46.278831    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6856.pem --> /usr/share/ca-certificates/6856.pem (1338 bytes)
	I0310 21:10:47.119137    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1476.pem --> /usr/share/ca-certificates/1476.pem (1338 bytes)
	I0310 21:10:48.818321    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\12056.pem --> /usr/share/ca-certificates/12056.pem (1338 bytes)
	I0310 21:10:51.828126    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6496.pem --> /usr/share/ca-certificates/6496.pem (1338 bytes)
	I0310 21:10:54.952917    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7024.pem --> /usr/share/ca-certificates/7024.pem (1338 bytes)
	I0310 21:10:56.576752    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0310 21:10:58.711992    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6552.pem --> /usr/share/ca-certificates/6552.pem (1338 bytes)
	I0310 21:10:59.949144    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\10992.pem --> /usr/share/ca-certificates/10992.pem (1338 bytes)
	I0310 21:11:01.242555    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9520.pem --> /usr/share/ca-certificates/9520.pem (1338 bytes)
	I0310 21:11:03.976018    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1728.pem --> /usr/share/ca-certificates/1728.pem (1338 bytes)
	I0310 21:11:07.143443    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5172.pem --> /usr/share/ca-certificates/5172.pem (1338 bytes)
	I0310 21:11:08.498300    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5396.pem --> /usr/share/ca-certificates/5396.pem (1338 bytes)
	I0310 21:11:09.974428    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3516.pem --> /usr/share/ca-certificates/3516.pem (1338 bytes)
	I0310 21:11:11.570915    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9088.pem --> /usr/share/ca-certificates/9088.pem (1338 bytes)
	I0310 21:11:13.657230    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4052.pem --> /usr/share/ca-certificates/4052.pem (1338 bytes)
	I0310 21:11:16.112837    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5700.pem --> /usr/share/ca-certificates/5700.pem (1338 bytes)
	I0310 21:11:17.518331    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8748.pem --> /usr/share/ca-certificates/8748.pem (1338 bytes)
	I0310 21:11:18.568101    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6368.pem --> /usr/share/ca-certificates/6368.pem (1338 bytes)
	I0310 21:11:19.138791    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3920.pem --> /usr/share/ca-certificates/3920.pem (1338 bytes)
	I0310 21:11:21.352090    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5040.pem --> /usr/share/ca-certificates/5040.pem (1338 bytes)
	I0310 21:11:22.989955    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\800.pem --> /usr/share/ca-certificates/800.pem (1338 bytes)
	I0310 21:11:24.758756    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5736.pem --> /usr/share/ca-certificates/5736.pem (1338 bytes)
	I0310 21:11:25.925924    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1984.pem --> /usr/share/ca-certificates/1984.pem (1338 bytes)
	I0310 21:11:27.820779    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7440.pem --> /usr/share/ca-certificates/7440.pem (1338 bytes)
	I0310 21:11:29.216678    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\232.pem --> /usr/share/ca-certificates/232.pem (1338 bytes)
	I0310 21:11:30.926941    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6692.pem --> /usr/share/ca-certificates/6692.pem (1338 bytes)
	I0310 21:11:33.286513    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4588.pem --> /usr/share/ca-certificates/4588.pem (1338 bytes)
	I0310 21:11:35.462574    9176 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7160.pem --> /usr/share/ca-certificates/7160.pem (1338 bytes)

** /stderr **
version_upgrade_test.go:273: start after failed upgrade: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20210310201637-6496 --memory=2200 --kubernetes-version=v1.20.5-rc.0 --alsologtostderr -v=1 --driver=docker: exit status 1

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:275: *** TestKubernetesUpgrade FAILED at 2021-03-10 21:11:38.3056495 +0000 GMT m=+7638.012863701
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:226: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======

=== CONT  TestKubernetesUpgrade
helpers_test.go:227: (dbg) Run:  docker inspect kubernetes-upgrade-20210310201637-6496

=== CONT  TestKubernetesUpgrade
helpers_test.go:231: (dbg) docker inspect kubernetes-upgrade-20210310201637-6496:

-- stdout --
	[
	    {
	        "Id": "ec7a23f13bb984b20bbf179083aa9701fe9851c5969b93e818f9ab3feec09f0c",
	        "Created": "2021-03-10T20:16:57.1039039Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 199854,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-03-10T20:40:21.9527082Z",
	            "FinishedAt": "2021-03-10T20:40:08.3674496Z"
	        },
	        "Image": "sha256:a776c544501ab7f8d55c0f9d8df39bc284df5e744ef1ab4fa59bbd753c98d5f6",
	        "ResolvConfPath": "/var/lib/docker/containers/ec7a23f13bb984b20bbf179083aa9701fe9851c5969b93e818f9ab3feec09f0c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ec7a23f13bb984b20bbf179083aa9701fe9851c5969b93e818f9ab3feec09f0c/hostname",
	        "HostsPath": "/var/lib/docker/containers/ec7a23f13bb984b20bbf179083aa9701fe9851c5969b93e818f9ab3feec09f0c/hosts",
	        "LogPath": "/var/lib/docker/containers/ec7a23f13bb984b20bbf179083aa9701fe9851c5969b93e818f9ab3feec09f0c/ec7a23f13bb984b20bbf179083aa9701fe9851c5969b93e818f9ab3feec09f0c-json.log",
	        "Name": "/kubernetes-upgrade-20210310201637-6496",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "kubernetes-upgrade-20210310201637-6496:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4d86d24babfbfc8aebcc14bc63a6ec0e3a33fe890046ef2229d12680297aebca-init/diff:/var/lib/docker/overlay2/e5312d15a8a915e93a04215cc746ab0f3ee777399eec34d39c62e1159ded6475/diff:/var/lib/docker/overlay2/7a0a882052451678339ecb9d7440ec6d5af2189761df37cbde268f01c77df460/diff:/var/lib/docker/overlay2/746bde091f748e661ba6c95c88447941058b3ed556ae8093cbbb4e946b8cc8fc/diff:/var/lib/docker/overlay2/fc816087ea38e2594891ac87d2dada91401ed5005ffde006568049c9be7a9cb3/diff:/var/lib/docker/overlay2/c938fe5d14accfe7fb1cfd038f5f0c36e7c4e36df3f270be6928a759dfc0318c/diff:/var/lib/docker/overlay2/79f1d61f3af3e525b3f9a25c7bdaddec8275b7d0abab23c045e084119ae755dc/diff:/var/lib/docker/overlay2/90cf1530750ebebdcb04a136dc1eff0ae9c5f7d0f826d2942e3bc67cc0d42206/diff:/var/lib/docker/overlay2/c4edb0a62bb52174a4a17530aa58139aaca1b2f67bef36b9ac59af93c93e2c94/diff:/var/lib/docker/overlay2/684c18bb05910b09352e8708238b2fa6512de91e323e59bd78eacb12954baeb1/diff:/var/lib/docker/overlay2/2e3b944d36fea0e106f5f8885b981d040914c224a656b6480e4ccf00c624527c/diff:/var/lib/docker/overlay2/87489103e2d5dffa2fc32cae802418fe57adcbe87d86e2828e51be8620ef72d6/diff:/var/lib/docker/overlay2/1ceb557df677600f55a42e739f85c2aaa6ce2a1311fb93d5cc655d3e5c25ad62/diff:/var/lib/docker/overlay2/a49e38adf04466e822fa1b2fef7a5de41bb43b706f64d6d440f7e9f95f86634d/diff:/var/lib/docker/overlay2/9dc51530b3628075c33d53a96d8db831d206f65190ca62a4730d525c9d2433bc/diff:/var/lib/docker/overlay2/53f43d443cc953bcbb3b9fe305be45d40de2233dd6a1e94a6e0120c143df01b9/diff:/var/lib/docker/overlay2/5449e801ebacbe613538db4de658f2419f742a0327f7c0f5079ee525f64c2f45/diff:/var/lib/docker/overlay2/ea06a62429770eade2e9df8b04495201586fe6a867b2d3f827db06031f237171/diff:/var/lib/docker/overlay2/d6c5e6e0bb04112b08a45e5a42f364eaa6a1d4d0384b8833c6f268a32b51f9fd/diff:/var/lib/docker/overlay2/6ccb699e5aa7954f17536e35273361a64a06f22e3951b5adad1acd3645302d27/diff:/var/lib/docker/overlay2/004fc7885cd3ddf4f532a6047a393b87947996f738f36ee3e048130b82676264/diff:/var/lib/docker/overlay2/28ff0102771d8fb3c2957c482cc5dde1a6d041144f1be78d385fd4a305b89048/diff:/var/lib/docker/overlay2/7cc8b00a8717390d3cf217bc23ebf0a54386c4dda751f8efe67a113c5f724000/diff:/var/lib/docker/overlay2/4f20f5888dc26fc4b177fad4ffbb2c9e5094bbe36dffc15530b13ee98b5ee0de/diff:/var/lib/docker/overlay2/5070ba0e8a3df0c0cb4c34bb528c4e1aa4cf13d689f2bbbb0a4033d021c3be55/diff:/var/lib/docker/overlay2/85fb29f9d3603bf7f2eb144a096a9fa432e57ecf575b0c26cc8e06a79e791202/diff:/var/lib/docker/overlay2/27786674974d44dc21a18719c8c334da44a78f825923f5e86233f5acb0fb9a6b/diff:/var/lib/docker/overlay2/6161fd89f27732c46f547c0e38ff9f16ab0dded81cef66938df3935f21afad40/diff:/var/lib/docker/overlay2/222f85b8439a41d62b9939a4060303543930a89e81bc2fbb3f5211b3bcc73501/diff:/var/lib/docker/overlay2/0ab769dc143c949b9367c48721351c8571d87199847b23acedce0134a8cf4d0d/diff:/var/lib/docker/overlay2/a3d1b5f3c1ccf55415e08e306d69739f2d6a4346a4b20ff9273ac1417c146a72/diff:/var/lib/docker/overlay2/a6111408a4deb25c6259bdabf49b0b77a4ae38a1c9c76c9dd238ab00e4377edd/diff:/var/lib/docker/overlay2/beb22abaee6a1cc2ce6935aa31a8ea6aa14ba428d35b5c2423d1d9e4b5b1e88e/diff:/var/lib/docker/overlay2/0c228b1d0cd4cc1d3df10a7397af2a91846bfbad745d623822dc200ff7ddd4f7/diff:/var/lib/docker/overlay2/3c7f265116e8b14ac4c171583da8e68ad42c63e9faa735c10d5b01c2889c03d8/diff:/var/lib/docker/overlay2/85c9b77072d5575bcf4bc334d0e85cccec247e2ff21bc8a181ee7c1411481a92/diff:/var/lib/docker/overlay2/1b4797f63f1bf0acad8cd18715c737239cbce844810baf1378b65224c01957ff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4d86d24babfbfc8aebcc14bc63a6ec0e3a33fe890046ef2229d12680297aebca/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4d86d24babfbfc8aebcc14bc63a6ec0e3a33fe890046ef2229d12680297aebca/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4d86d24babfbfc8aebcc14bc63a6ec0e3a33fe890046ef2229d12680297aebca/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-20210310201637-6496",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-20210310201637-6496/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-20210310201637-6496",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-20210310201637-6496",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-20210310201637-6496",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fe2abf5ca24fa5ba445a3fa9c147f28f8aec0fc9e07d884984c4021990807800",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55132"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55129"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55131"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55130"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/fe2abf5ca24f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "7ab199814b92355479d2505f711264fe4ee4961e25390491c05948f571846186",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.6",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:06",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "08a9fabe5ecafdd8ee96dd0a14d1906037e82623ac7365c62552213da5f59cf7",
	                    "EndpointID": "7ab199814b92355479d2505f711264fe4ee4961e25390491c05948f571846186",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.6",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:06",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
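When triaging reports like this one, the forwarded host ports buried in the `docker inspect` dump above (`NetworkSettings.Ports`) are often the first thing worth pulling out, since they show where the node's SSH (22) and apiserver (8443) endpoints were published on the Windows host. A minimal sketch using only the Python standard library; the embedded JSON is a trimmed, hypothetical stand-in for the full inspect output, not the real dump:

```python
import json

# Trimmed, hypothetical stand-in for `docker inspect <container>` output;
# the real dump (as in the report above) carries many more fields.
inspect_output = """
[
    {
        "Name": "/kubernetes-upgrade-20210310201637-6496",
        "NetworkSettings": {
            "Ports": {
                "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "55133"}],
                "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "55130"}]
            }
        }
    }
]
"""

def port_map(inspect_json: str) -> dict:
    """Map a container port (e.g. '8443/tcp') to its published host port."""
    container = json.loads(inspect_json)[0]
    ports = container["NetworkSettings"]["Ports"] or {}
    return {
        port: bindings[0]["HostPort"]
        for port, bindings in ports.items()
        if bindings  # unpublished ports appear as null/empty bindings
    }

if __name__ == "__main__":
    for port, host_port in sorted(port_map(inspect_output).items()):
        print(f"{port} -> 127.0.0.1:{host_port}")
```

Run against the inspect output in this report, the same extraction would yield the five mappings shown above (22, 2376, 5000, 8443, 32443 to their 551xx host ports).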
helpers_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-20210310201637-6496 -n kubernetes-upgrade-20210310201637-6496

=== CONT  TestKubernetesUpgrade
helpers_test.go:235: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-20210310201637-6496 -n kubernetes-upgrade-20210310201637-6496: (10.0108052s)
helpers_test.go:240: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:241: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-20210310201637-6496 logs -n 25

=== CONT  TestKubernetesUpgrade
helpers_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe -p kubernetes-upgrade-20210310201637-6496 logs -n 25: (3m12.4995582s)
helpers_test.go:248: TestKubernetesUpgrade logs: 
-- stdout --
	* ==> Docker <==
	* -- Logs begin at Wed 2021-03-10 20:40:27 UTC, end at Wed 2021-03-10 21:12:43 UTC. --
	* Mar 10 20:43:06 kubernetes-upgrade-20210310201637-6496 systemd[1]: Started Docker Application Container Engine.
	* Mar 10 20:43:06 kubernetes-upgrade-20210310201637-6496 dockerd[576]: time="2021-03-10T20:43:06.728506700Z" level=info msg="API listen on [::]:2376"
	* Mar 10 20:43:07 kubernetes-upgrade-20210310201637-6496 dockerd[576]: time="2021-03-10T20:43:07.169894000Z" level=info msg="API listen on /var/run/docker.sock"
	* Mar 10 20:48:38 kubernetes-upgrade-20210310201637-6496 dockerd[576]: time="2021-03-10T20:48:38.280813100Z" level=info msg="ignoring event" container=c9ee9f47c709134e9089044ab92c6e2e151f307fbe28f3c6fa85b65b80454790 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 20:49:15 kubernetes-upgrade-20210310201637-6496 dockerd[576]: time="2021-03-10T20:49:15.762979600Z" level=info msg="ignoring event" container=d32313e5411dcf20f9f13ade42a4097174a393afa30ff828c79e627713a3f0db module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 20:49:16 kubernetes-upgrade-20210310201637-6496 dockerd[576]: time="2021-03-10T20:49:16.648717200Z" level=info msg="ignoring event" container=b8e9d1d0c648d1823bf1dafacf06a9a753f132edd3dd1ce47f32f6c3ac723198 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 20:54:12 kubernetes-upgrade-20210310201637-6496 dockerd[576]: time="2021-03-10T20:54:12.424356100Z" level=info msg="ignoring event" container=d23238842c22e6301e17eadec598ab5ba1855deb4a96c1af0d68f888e5ef5008 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 20:54:43 kubernetes-upgrade-20210310201637-6496 dockerd[576]: time="2021-03-10T20:54:43.142055000Z" level=info msg="ignoring event" container=97de25fff1e2546881cb91a4ebb831e3b55718bb557f2a1dfb2c5999615f10bc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 20:55:03 kubernetes-upgrade-20210310201637-6496 dockerd[576]: time="2021-03-10T20:55:03.451350000Z" level=info msg="ignoring event" container=a45e8b20db7389acab684b747700f9d3ef756bdb23fef77d664ad3582817a919 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 20:55:20 kubernetes-upgrade-20210310201637-6496 dockerd[576]: time="2021-03-10T20:55:20.127575900Z" level=info msg="ignoring event" container=3a91fe66b8de647cbb1dd98c9333cd9dcd598268f3cd583ebc73a4315459e7cc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 20:55:44 kubernetes-upgrade-20210310201637-6496 dockerd[576]: time="2021-03-10T20:55:44.794006000Z" level=info msg="ignoring event" container=a4b2a7bf40662228ecf8ffe940e7c0ba6f614b32ab177b5cde9bb4a4d1392e76 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 20:55:55 kubernetes-upgrade-20210310201637-6496 dockerd[576]: time="2021-03-10T20:55:55.733560700Z" level=info msg="ignoring event" container=f2e32deb7c823f1653406ffd32b5dca62cae5b45707b254228bfbe0360b0cbd8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 20:56:13 kubernetes-upgrade-20210310201637-6496 dockerd[576]: time="2021-03-10T20:56:13.584336500Z" level=info msg="Container 3d2c98ba1bfd78a8e0ff0b1a321c1d60dc3d647e4382a5c06aa4dae82edc8490 failed to exit within 10 seconds of signal 15 - using the force"
	* Mar 10 20:56:16 kubernetes-upgrade-20210310201637-6496 dockerd[576]: time="2021-03-10T20:56:16.245795600Z" level=info msg="ignoring event" container=3d2c98ba1bfd78a8e0ff0b1a321c1d60dc3d647e4382a5c06aa4dae82edc8490 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 20:56:22 kubernetes-upgrade-20210310201637-6496 dockerd[576]: time="2021-03-10T20:56:22.310402000Z" level=info msg="ignoring event" container=6e51da31fbfbf81c281b7ac2cbc499037424f26c5cadf07dd2bf41d1bee7a86f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 20:59:21 kubernetes-upgrade-20210310201637-6496 dockerd[576]: time="2021-03-10T20:59:21.938913700Z" level=info msg="ignoring event" container=91d4a0fb15bfb1063ed5bb49100a2230db582602a18ec27abf86ff26a39a347d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:01:35 kubernetes-upgrade-20210310201637-6496 dockerd[576]: time="2021-03-10T21:01:35.375859700Z" level=info msg="ignoring event" container=9255e6022064761ee51c8c795de1f2280d4a0085c089e2ad5a6f1654e536c7ea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:01:59 kubernetes-upgrade-20210310201637-6496 dockerd[576]: time="2021-03-10T21:01:59.466842300Z" level=info msg="ignoring event" container=045622dfab311a7445f716bbd860b466c50c7f4dabd4f5a6d73423d9068b5a64 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:02:01 kubernetes-upgrade-20210310201637-6496 dockerd[576]: time="2021-03-10T21:02:01.595892400Z" level=info msg="ignoring event" container=7d544ac80b8e8a7b699a18f66d976d8701ff79d3fb8fb69f9d82e171acf37e58 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:02:03 kubernetes-upgrade-20210310201637-6496 dockerd[576]: time="2021-03-10T21:02:03.607600000Z" level=info msg="ignoring event" container=0da53b9ab30cb8a8b611b7e711e43029689b32cc3a13d5c26a2836a5dfd550ea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:02:06 kubernetes-upgrade-20210310201637-6496 dockerd[576]: time="2021-03-10T21:02:06.077321600Z" level=info msg="ignoring event" container=9fc8f1b110025b5e8cc6ec09b5756925e5c389809fb3e0d7656c24a155f8fb63 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:02:08 kubernetes-upgrade-20210310201637-6496 dockerd[576]: time="2021-03-10T21:02:08.734945900Z" level=info msg="ignoring event" container=37119ba9ea7b0bf608b7ec2548f3d8b1cff192412794c5adce01b2f28f917c55 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:02:10 kubernetes-upgrade-20210310201637-6496 dockerd[576]: time="2021-03-10T21:02:10.829374300Z" level=info msg="ignoring event" container=ca209f88b4cff699dcc0c253dda88094ed86e92ab851b6d754f396d67371d4be module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:02:12 kubernetes-upgrade-20210310201637-6496 dockerd[576]: time="2021-03-10T21:02:12.585372200Z" level=info msg="ignoring event" container=52f73616c81542dc22f2b62466c76ef9ef7ba01698e4dbfe568a684782ccd60b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:06:44 kubernetes-upgrade-20210310201637-6496 dockerd[576]: time="2021-03-10T21:06:44.888657500Z" level=info msg="ignoring event" container=7fe915da4c345ba89c3a889facf893c307a452c3f493e058b920747d4addf10c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	* d1a3829d9f2e5       a95b4e4b41d89       5 minutes ago       Running             kube-controller-manager   1                   a05ec8bf225a6
	* 07eedf8f3362d       4968524da7559       7 minutes ago       Running             kube-scheduler            0                   0ccd7dfc37df5
	* 7fe915da4c345       a95b4e4b41d89       7 minutes ago       Exited              kube-controller-manager   0                   a05ec8bf225a6
	* 93ec1b7fa7dfd       17a1e6e90a9b4       8 minutes ago       Running             kube-apiserver            0                   fc6b7da899018
	* 894f5edfee337       0369cf4303ffd       9 minutes ago       Running             etcd                      0                   eae5a8b6b731a
	* 
	* ==> describe nodes <==
	* Name:               kubernetes-upgrade-20210310201637-6496
	* Roles:              <none>
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=kubernetes-upgrade-20210310201637-6496
	*                     kubernetes.io/os=linux
	* Annotations:        node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Wed, 10 Mar 2021 21:07:09 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  kubernetes-upgrade-20210310201637-6496
	*   AcquireTime:     <unset>
	*   RenewTime:       Wed, 10 Mar 2021 21:13:43 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Wed, 10 Mar 2021 21:12:44 +0000   Wed, 10 Mar 2021 21:07:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Wed, 10 Mar 2021 21:12:44 +0000   Wed, 10 Mar 2021 21:07:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Wed, 10 Mar 2021 21:12:44 +0000   Wed, 10 Mar 2021 21:07:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Wed, 10 Mar 2021 21:12:44 +0000   Wed, 10 Mar 2021 21:07:29 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  172.17.0.6
	*   Hostname:    kubernetes-upgrade-20210310201637-6496
	* Capacity:
	*   cpu:                4
	*   ephemeral-storage:  65792556Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             20481980Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                4
	*   ephemeral-storage:  65792556Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             20481980Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 84fb46bd39d2483a97ab4430ee4a5e3a
	*   System UUID:                d92829a6-00c6-4a9a-b56f-0b54694ef805
	*   Boot ID:                    1e43cb90-c73a-415b-9855-33dabbdc5a83
	*   Kernel Version:             4.19.121-linuxkit
	*   OS Image:                   Ubuntu 20.04.1 LTS
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://20.10.3
	*   Kubelet Version:            v1.20.5-rc.0
	*   Kube-Proxy Version:         v1.20.5-rc.0
	* PodCIDR:                      10.244.0.0/24
	* PodCIDRs:                     10.244.0.0/24
	* Non-terminated Pods:          (4 in total)
	*   Namespace                   Name                                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                                                              ------------  ----------  ---------------  -------------  ---
	*   kube-system                 etcd-kubernetes-upgrade-20210310201637-6496                       100m (2%)     0 (0%)      100Mi (0%)       0 (0%)         6m10s
	*   kube-system                 kube-apiserver-kubernetes-upgrade-20210310201637-6496             250m (6%)     0 (0%)      0 (0%)           0 (0%)         5m24s
	*   kube-system                 kube-controller-manager-kubernetes-upgrade-20210310201637-6496    200m (5%)     0 (0%)      0 (0%)           0 (0%)         6m20s
	*   kube-system                 kube-scheduler-kubernetes-upgrade-20210310201637-6496             100m (2%)     0 (0%)      0 (0%)           0 (0%)         5m25s
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests    Limits
	*   --------           --------    ------
	*   cpu                650m (16%)  0 (0%)
	*   memory             100Mi (0%)  0 (0%)
	*   ephemeral-storage  100Mi (0%)  0 (0%)
	*   hugepages-1Gi      0 (0%)      0 (0%)
	*   hugepages-2Mi      0 (0%)      0 (0%)
	* Events:              <none>
	* 
	* ==> dmesg <==
	* [  +0.000006]  __hrtimer_run_queues+0x117/0x1c4
	* [  +0.000004]  ? ktime_get_update_offsets_now+0x36/0x95
	* [  +0.000002]  hrtimer_interrupt+0x92/0x165
	* [  +0.000004]  hv_stimer0_isr+0x20/0x2d
	* [  +0.000008]  hv_stimer0_vector_handler+0x3b/0x57
	* [  +0.000010]  hv_stimer0_callback_vector+0xf/0x20
	* [  +0.000001]  </IRQ>
	* [  +0.000002] RIP: 0010:native_safe_halt+0x7/0x8
	* [  +0.000002] Code: 60 02 df f0 83 44 24 fc 00 48 8b 00 a8 08 74 0b 65 81 25 dd ce 6f 71 ff ff ff 7f c3 e8 ce e6 72 ff f4 c3 e8 c7 e6 72 ff fb f4 <c3> 0f 1f 44 00 00 53 e8 69 0e 82 ff 65 8b 35 83 64 6f 71 31 ff e8
	* [  +0.000001] RSP: 0018:ffffffff8f203eb0 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff12
	* [  +0.000002] RAX: ffffffff8e918b30 RBX: 0000000000000000 RCX: ffffffff8f253150
	* [  +0.000001] RDX: 000000000012167e RSI: 0000000000000000 RDI: 0000000000000001
	* [  +0.000001] RBP: 0000000000000000 R08: 00000066a1710248 R09: 0000006be2541d3e
	* [  +0.000001] R10: ffff9130ad802288 R11: 0000000000000000 R12: 0000000000000000
	* [  +0.000001] R13: ffffffff8f215780 R14: 00000000f6d76244 R15: 0000000000000000
	* [  +0.000002]  ? __sched_text_end+0x1/0x1
	* [  +0.000011]  default_idle+0x1b/0x2c
	* [  +0.000001]  do_idle+0xe5/0x216
	* [  +0.000003]  cpu_startup_entry+0x6f/0x71
	* [  +0.000003]  start_kernel+0x4f6/0x514
	* [  +0.000006]  secondary_startup_64+0xa4/0xb0
	* [  +0.000006] ---[ end trace 8aa9ce4b885e8e86 ]---
	* [ +25.977799] hrtimer: interrupt took 3356400 ns
	* [Mar10 19:08] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	* [Mar10 19:49] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	* 
	* ==> etcd [894f5edfee33] <==
	* 2021-03-10 21:12:48.402073 W | etcdserver: request "header:<ID:4428859353621656868 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.17.0.6\" mod_revision:422 > success:<request_put:<key:\"/registry/masterleases/172.17.0.6\" value_size:65 lease:4428859353621656866 >> failure:<request_range:<key:\"/registry/masterleases/172.17.0.6\" > >>" with result "size:16" took too long (405.3185ms) to execute
	* 2021-03-10 21:12:49.255060 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:12:56.804105 W | etcdserver: read-only range request "key:\"/registry/persistentvolumeclaims/\" range_end:\"/registry/persistentvolumeclaims0\" count_only:true " with result "range_response_count:0 size:5" took too long (190.8061ms) to execute
	* 2021-03-10 21:12:57.058117 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" count_only:true " with result "range_response_count:0 size:7" took too long (292.8106ms) to execute
	* 2021-03-10 21:12:59.851717 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (221.0923ms) to execute
	* 2021-03-10 21:13:00.147480 W | etcdserver: read-only range request "key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" " with result "range_response_count:1 size:644" took too long (274.0997ms) to execute
	* 2021-03-10 21:13:00.371204 W | etcdserver: read-only range request "key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" " with result "range_response_count:1 size:644" took too long (228.8955ms) to execute
	* 2021-03-10 21:13:01.311882 W | etcdserver: read-only range request "key:\"/registry/ranges/serviceips\" " with result "range_response_count:1 size:117" took too long (304.3522ms) to execute
	* 2021-03-10 21:13:01.837023 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:13:04.165922 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/kubernetes-upgrade-20210310201637-6496\" " with result "range_response_count:1 size:720" took too long (177.2429ms) to execute
	* 2021-03-10 21:13:09.355312 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:13:18.382269 W | etcdserver: read-only range request "key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" " with result "range_response_count:1 size:129" took too long (133.901ms) to execute
	* 2021-03-10 21:13:23.159459 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:13:28.636411 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true " with result "range_response_count:0 size:5" took too long (192.1642ms) to execute
	* 2021-03-10 21:13:29.650934 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:13:37.407029 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:13:40.283721 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:644" took too long (117.1632ms) to execute
	* 2021-03-10 21:13:41.589820 W | etcdserver: request "header:<ID:4428859353621657007 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:3d76781df4c8f5ae>" with result "size:41" took too long (151.1081ms) to execute
	* 2021-03-10 21:13:41.858452 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (383.7581ms) to execute
	* 2021-03-10 21:13:42.213630 W | etcdserver: request "header:<ID:4428859353621657010 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.17.0.6\" mod_revision:431 > success:<request_put:<key:\"/registry/masterleases/172.17.0.6\" value_size:65 lease:4428859353621657006 >> failure:<request_range:<key:\"/registry/masterleases/172.17.0.6\" > >>" with result "size:16" took too long (155.0379ms) to execute
	* 2021-03-10 21:13:42.727494 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:418" took too long (103.6828ms) to execute
	* 2021-03-10 21:13:48.892282 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:13:50.122609 W | etcdserver: read-only range request "key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" count_only:true " with result "range_response_count:0 size:5" took too long (100.7145ms) to execute
	* 2021-03-10 21:13:52.503307 W | etcdserver: read-only range request "key:\"/registry/replicasets/\" range_end:\"/registry/replicasets0\" count_only:true " with result "range_response_count:0 size:5" took too long (104.3462ms) to execute
	* 2021-03-10 21:13:58.469498 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 
	* ==> kernel <==
	*  21:14:02 up  2:14,  0 users,  load average: 170.32, 160.29, 148.34
	* Linux kubernetes-upgrade-20210310201637-6496 4.19.121-linuxkit #1 SMP Tue Dec 1 17:50:32 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [93ec1b7fa7df] <==
	* Trace[492941488]: [580.5416ms] [580.5416ms] END
	* I0310 21:13:27.137000       1 client.go:360] parsed scheme: "passthrough"
	* I0310 21:13:27.137216       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 21:13:27.137250       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* I0310 21:13:29.424258       1 trace.go:205] Trace[570530069]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (10-Mar-2021 21:13:28.255) (total time: 1168ms):
	* Trace[570530069]: ---"initial value restored" 896ms (21:13:00.152)
	* Trace[570530069]: ---"Transaction committed" 236ms (21:13:00.424)
	* Trace[570530069]: [1.1682s] [1.1682s] END
	* I0310 21:13:33.658256       1 trace.go:205] Trace[962177564]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.20.5 (linux/amd64) kubernetes/9fdbacd,client:172.17.0.6 (10-Mar-2021 21:13:33.044) (total time: 613ms):
	* Trace[962177564]: ---"Object stored in database" 613ms (21:13:00.657)
	* Trace[962177564]: [613.8268ms] [613.8268ms] END
	* I0310 21:13:40.954107       1 trace.go:205] Trace[2095784523]: "List etcd3" key:/pods,resourceVersion:,resourceVersionMatch:,limit:0,continue: (10-Mar-2021 21:13:40.408) (total time: 545ms):
	* Trace[2095784523]: [545.1922ms] [545.1922ms] END
	* I0310 21:13:41.090791       1 trace.go:205] Trace[1385450319]: "List" url:/api/v1/pods,user-agent:kubectl/v1.20.5 (linux/amd64) kubernetes/9fdbacd,client:127.0.0.1 (10-Mar-2021 21:13:40.408) (total time: 681ms):
	* Trace[1385450319]: ---"Listing from storage done" 545ms (21:13:00.954)
	* Trace[1385450319]: ---"Writing http response done" count:4 136ms (21:13:00.090)
	* Trace[1385450319]: [681.9327ms] [681.9327ms] END
	* I0310 21:13:42.223787       1 trace.go:205] Trace[604912618]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (10-Mar-2021 21:13:40.334) (total time: 1889ms):
	* Trace[604912618]: ---"initial value restored" 689ms (21:13:00.023)
	* Trace[604912618]: ---"Transaction prepared" 862ms (21:13:00.886)
	* Trace[604912618]: ---"Transaction committed" 337ms (21:13:00.223)
	* Trace[604912618]: [1.8895086s] [1.8895086s] END
	* I0310 21:14:12.040247       1 client.go:360] parsed scheme: "passthrough"
	* I0310 21:14:12.040433       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 21:14:12.040471       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* 
	* ==> kube-controller-manager [7fe915da4c34] <==
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	* k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4951ad0, 0x12a05f200, 0x0, 0xc000868801, 0xc0000920c0)
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
	* k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
	* k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Forever(0x4951ad0, 0x12a05f200)
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
	* created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a
	* 
	* goroutine 134 [select]:
	* k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000694010, 0x4df6e40, 0xc000c38030, 0x1, 0xc0000920c0)
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	* k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000694010, 0xdf8475800, 0x0, 0xc000868c01, 0xc0000920c0)
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
	* k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc000694010, 0xdf8475800, 0xc0000920c0)
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
	* created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).Run
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:250 +0x24b
	* 
	* goroutine 193 [runnable]:
	* net/http.setRequestCancel.func4(0x0, 0xc000f16cc0, 0xc00049d540, 0xc000c34d6c, 0xc0001731a0)
	* 	/usr/local/go/src/net/http/client.go:398 +0xe5
	* created by net/http.setRequestCancel
	* 	/usr/local/go/src/net/http/client.go:397 +0x337
	* 
	* ==> kube-controller-manager [d1a3829d9f2e] <==
	* I0310 21:08:40.696444       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	* I0310 21:08:40.726082       1 shared_informer.go:247] Caches are synced for job 
	* I0310 21:08:40.740437       1 shared_informer.go:247] Caches are synced for service account 
	* I0310 21:08:40.744138       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	* I0310 21:08:40.760961       1 shared_informer.go:247] Caches are synced for namespace 
	* I0310 21:08:40.816364       1 shared_informer.go:247] Caches are synced for disruption 
	* I0310 21:08:40.816394       1 disruption.go:339] Sending events to api server.
	* I0310 21:08:40.822950       1 shared_informer.go:247] Caches are synced for ReplicationController 
	* I0310 21:08:40.823150       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	* I0310 21:08:40.823348       1 shared_informer.go:247] Caches are synced for daemon sets 
	* I0310 21:08:40.835129       1 range_allocator.go:373] Set node kubernetes-upgrade-20210310201637-6496 PodCIDR to [10.244.0.0/24]
	* I0310 21:08:40.835431       1 shared_informer.go:247] Caches are synced for deployment 
	* I0310 21:08:40.853057       1 shared_informer.go:247] Caches are synced for taint 
	* I0310 21:08:40.853495       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	* W0310 21:08:40.853662       1 node_lifecycle_controller.go:1044] Missing timestamp for Node kubernetes-upgrade-20210310201637-6496. Assuming now as a timestamp.
	* I0310 21:08:40.862556       1 shared_informer.go:247] Caches are synced for resource quota 
	* I0310 21:08:40.869106       1 taint_manager.go:187] Starting NoExecuteTaintManager
	* I0310 21:08:40.882074       1 shared_informer.go:247] Caches are synced for stateful set 
	* I0310 21:08:40.882497       1 event.go:291] "Event occurred" object="kubernetes-upgrade-20210310201637-6496" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node kubernetes-upgrade-20210310201637-6496 event: Registered Node kubernetes-upgrade-20210310201637-6496 in Controller"
	* I0310 21:08:40.891623       1 node_lifecycle_controller.go:1245] Controller detected that zone  is now in state Normal.
	* I0310 21:08:40.907445       1 shared_informer.go:247] Caches are synced for resource quota 
	* I0310 21:08:42.805330       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	* I0310 21:08:45.521132       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I0310 21:08:45.581355       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I0310 21:08:45.581408       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* 
	* ==> kube-scheduler [07eedf8f3362] <==
	* E0310 21:06:55.681712       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E0310 21:06:55.682120       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E0310 21:06:55.804875       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E0310 21:06:55.888200       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E0310 21:06:56.103974       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E0310 21:06:56.105096       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E0310 21:06:57.063794       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E0310 21:06:57.749527       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E0310 21:06:57.815838       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E0310 21:06:58.226697       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E0310 21:06:59.451558       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E0310 21:07:01.808733       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E0310 21:07:04.414603       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E0310 21:07:05.011674       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E0310 21:07:05.156065       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E0310 21:07:06.004171       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E0310 21:07:06.393991       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E0310 21:07:07.158841       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E0310 21:07:07.260276       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E0310 21:07:07.903032       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E0310 21:07:08.377032       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E0310 21:07:08.718075       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E0310 21:07:10.706379       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E0310 21:07:19.501024       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* I0310 21:08:05.926682       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-03-10 20:40:27 UTC, end at Wed 2021-03-10 21:14:52 UTC. --
	* Mar 10 21:07:13 kubernetes-upgrade-20210310201637-6496 kubelet[8410]: E0310 21:07:13.512478    8410 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes-upgrade-20210310201637-6496.166b16fdcf2454f4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kubernetes-upgrade-20210310201637-6496", UID:"kubernetes-upgrade-20210310201637-6496", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kubernetes-upgrade-20210310201637-6496 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-upgrade-20210310201637-6496"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc00a6adea9bd10f4, ext:12832759301, loc:(*time.Location)(0x70e90a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc00a6ae1c5eeaa84, ext:25232030101, loc:(*time.Location)(0x70e90a0)}}, Count:6, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
	* Mar 10 21:07:14 kubernetes-upgrade-20210310201637-6496 kubelet[8410]: E0310 21:07:14.165284    8410 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes-upgrade-20210310201637-6496.166b16fdcf24b570", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kubernetes-upgrade-20210310201637-6496", UID:"kubernetes-upgrade-20210310201637-6496", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kubernetes-upgrade-20210310201637-6496 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-upgrade-20210310201637-6496"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc00a6adea9bd7170, ext:12832784001, loc:(*time.Location)(0x70e90a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc00a6ae1c5eed70c, ext:25232041601, loc:(*time.Location)(0x70e90a0)}}, Count:6, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
	* Mar 10 21:07:15 kubernetes-upgrade-20210310201637-6496 kubelet[8410]: E0310 21:07:15.559260    8410 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes-upgrade-20210310201637-6496.166b16fdcf24e25c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kubernetes-upgrade-20210310201637-6496", UID:"kubernetes-upgrade-20210310201637-6496", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kubernetes-upgrade-20210310201637-6496 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-upgrade-20210310201637-6496"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc00a6adea9bd9e5c, ext:12832795601, loc:(*time.Location)(0x70e90a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc00a6ae1c5eefa98, ext:25232050701, loc:(*time.Location)(0x70e90a0)}}, Count:6, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
	* Mar 10 21:07:16 kubernetes-upgrade-20210310201637-6496 kubelet[8410]: E0310 21:07:16.353964    8410 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes-upgrade-20210310201637-6496.166b16fdcf24e25c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kubernetes-upgrade-20210310201637-6496", UID:"kubernetes-upgrade-20210310201637-6496", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kubernetes-upgrade-20210310201637-6496 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-upgrade-20210310201637-6496"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc00a6adea9bd9e5c, ext:12832795601, loc:(*time.Location)(0x70e90a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc00a6ae2cb5e41c8, ext:29323229401, loc:(*time.Location)(0x70e90a0)}}, Count:7, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
	* Mar 10 21:07:17 kubernetes-upgrade-20210310201637-6496 kubelet[8410]: E0310 21:07:17.075265    8410 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes-upgrade-20210310201637-6496.166b16fdcf2454f4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kubernetes-upgrade-20210310201637-6496", UID:"kubernetes-upgrade-20210310201637-6496", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kubernetes-upgrade-20210310201637-6496 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-upgrade-20210310201637-6496"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc00a6adea9bd10f4, ext:12832759301, loc:(*time.Location)(0x70e90a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc00a6ae2cb5df6c8, ext:29323210301, loc:(*time.Location)(0x70e90a0)}}, Count:7, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
	* Mar 10 21:07:19 kubernetes-upgrade-20210310201637-6496 kubelet[8410]: E0310 21:07:19.616012    8410 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes-upgrade-20210310201637-6496.166b16fdcf24b570", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kubernetes-upgrade-20210310201637-6496", UID:"kubernetes-upgrade-20210310201637-6496", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kubernetes-upgrade-20210310201637-6496 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-upgrade-20210310201637-6496"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc00a6adea9bd7170, ext:12832784001, loc:(*time.Location)(0x70e90a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc00a6ae2cb5e2738, ext:29323222701, loc:(*time.Location)(0x70e90a0)}}, Count:7, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
	* Mar 10 21:07:20 kubernetes-upgrade-20210310201637-6496 kubelet[8410]: E0310 21:07:20.162309    8410 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes-upgrade-20210310201637-6496.166b1701acb194a0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kubernetes-upgrade-20210310201637-6496", UID:"kubernetes-upgrade-20210310201637-6496", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-upgrade-20210310201637-6496"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc00a6ae2d202e6a0, ext:29434682801, loc:(*time.Location)(0x70e90a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc00a6ae2d202e6a0, ext:29434682801, loc:(*time.Location)(0x70e90a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
	* Mar 10 21:07:21 kubernetes-upgrade-20210310201637-6496 kubelet[8410]: E0310 21:07:21.886183    8410 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes-upgrade-20210310201637-6496.166b16fdcf2454f4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kubernetes-upgrade-20210310201637-6496", UID:"kubernetes-upgrade-20210310201637-6496", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kubernetes-upgrade-20210310201637-6496 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-upgrade-20210310201637-6496"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc00a6adea9bd10f4, ext:12832759301, loc:(*time.Location)(0x70e90a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc00a6ae447739504, ext:35260512101, loc:(*time.Location)(0x70e90a0)}}, Count:8, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
	* Mar 10 21:07:24 kubernetes-upgrade-20210310201637-6496 kubelet[8410]: E0310 21:07:24.649840    8410 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes-upgrade-20210310201637-6496.166b16fdcf24b570", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kubernetes-upgrade-20210310201637-6496", UID:"kubernetes-upgrade-20210310201637-6496", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kubernetes-upgrade-20210310201637-6496 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kubernetes-upgrade-20210310201637-6496"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc00a6adea9bd7170, ext:12832784001, loc:(*time.Location)(0x70e90a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc00a6ae44773c7cc, ext:35260525101, loc:(*time.Location)(0x70e90a0)}}, Count:8, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
	* Mar 10 21:07:28 kubernetes-upgrade-20210310201637-6496 kubelet[8410]: I0310 21:07:28.786999    8410 scope.go:111] [topologymanager] RemoveContainer - Container ID: 7fe915da4c345ba89c3a889facf893c307a452c3f493e058b920747d4addf10c
	* Mar 10 21:07:35 kubernetes-upgrade-20210310201637-6496 kubelet[8410]: I0310 21:07:35.489527    8410 reconciler.go:157] Reconciler: start to sync state
	* Mar 10 21:07:44 kubernetes-upgrade-20210310201637-6496 kubelet[8410]: W0310 21:07:44.192103    8410 docker_container.go:245] Deleted previously existing symlink file: "/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-20210310201637-6496_3b70bec3c1c11ef2a7b66cc88a44cda9/kube-controller-manager/1.log"
	* Mar 10 21:07:58 kubernetes-upgrade-20210310201637-6496 kubelet[8410]: W0310 21:07:58.461098    8410 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	* Mar 10 21:07:58 kubernetes-upgrade-20210310201637-6496 kubelet[8410]: W0310 21:07:58.545786    8410 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
	* Mar 10 21:08:42 kubernetes-upgrade-20210310201637-6496 kubelet[8410]: I0310 21:08:42.164585    8410 kuberuntime_manager.go:1006] updating runtime config through cri with podcidr 10.244.0.0/24
	* Mar 10 21:08:43 kubernetes-upgrade-20210310201637-6496 kubelet[8410]: I0310 21:08:43.412478    8410 docker_service.go:358] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}
	* Mar 10 21:08:43 kubernetes-upgrade-20210310201637-6496 kubelet[8410]: I0310 21:08:43.513372    8410 kubelet_network.go:77] Setting Pod CIDR:  -> 10.244.0.0/24
	* Mar 10 21:12:52 kubernetes-upgrade-20210310201637-6496 kubelet[8410]: I0310 21:12:52.409287    8410 trace.go:205] Trace[951538344]: "iptables Monitor CANARY check" (10-Mar-2021 21:12:50.321) (total time: 2087ms):
	* Mar 10 21:12:52 kubernetes-upgrade-20210310201637-6496 kubelet[8410]: Trace[951538344]: [2.0878935s] [2.0878935s] END
	* Mar 10 21:13:02 kubernetes-upgrade-20210310201637-6496 kubelet[8410]: E0310 21:13:02.430549    8410 controller.go:187] failed to update lease, error: Put "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-20210310201637-6496?timeout=10s": context deadline exceeded
	* Mar 10 21:13:03 kubernetes-upgrade-20210310201637-6496 kubelet[8410]: W0310 21:13:03.749419    8410 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	* Mar 10 21:13:05 kubernetes-upgrade-20210310201637-6496 kubelet[8410]: E0310 21:13:05.282949    8410 controller.go:187] failed to update lease, error: Operation cannot be fulfilled on leases.coordination.k8s.io "kubernetes-upgrade-20210310201637-6496": the object has been modified; please apply your changes to the latest version and try again
	* Mar 10 21:13:06 kubernetes-upgrade-20210310201637-6496 kubelet[8410]: W0310 21:13:06.324431    8410 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
	* Mar 10 21:13:54 kubernetes-upgrade-20210310201637-6496 kubelet[8410]: I0310 21:13:54.505433    8410 trace.go:205] Trace[538812697]: "iptables Monitor CANARY check" (10-Mar-2021 21:13:50.429) (total time: 4076ms):
	* Mar 10 21:13:54 kubernetes-upgrade-20210310201637-6496 kubelet[8410]: Trace[538812697]: [4.0760936s] [4.0760936s] END
	* 
	* ==> Audit <==
	* |---------|-------------------------------------------|-------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| Command |                   Args                    |                  Profile                  |          User           | Version |          Start Time           |           End Time            |
	|---------|-------------------------------------------|-------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                        | insufficient-storage-20210310201557-6496  | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:16:29 GMT | Wed, 10 Mar 2021 20:16:37 GMT |
	|         | insufficient-storage-20210310201557-6496  |                                           |                         |         |                               |                               |
	| delete  | -p pause-20210310201637-6496              | pause-20210310201637-6496                 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:32:24 GMT | Wed, 10 Mar 2021 20:32:49 GMT |
	| -p      | offline-docker-20210310201637-6496        | offline-docker-20210310201637-6496        | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:32:04 GMT | Wed, 10 Mar 2021 20:33:57 GMT |
	|         | logs -n 25                                |                                           |                         |         |                               |                               |
	| delete  | -p                                        | offline-docker-20210310201637-6496        | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:34:20 GMT | Wed, 10 Mar 2021 20:34:47 GMT |
	|         | offline-docker-20210310201637-6496        |                                           |                         |         |                               |                               |
	| stop    | -p                                        | kubernetes-upgrade-20210310201637-6496    | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:39:52 GMT | Wed, 10 Mar 2021 20:40:10 GMT |
	|         | kubernetes-upgrade-20210310201637-6496    |                                           |                         |         |                               |                               |
	| start   | -p nospam-20210310201637-6496             | nospam-20210310201637-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:16:38 GMT | Wed, 10 Mar 2021 20:40:39 GMT |
	|         | -n=1 --memory=2250                        |                                           |                         |         |                               |                               |
	|         | --wait=false --driver=docker              |                                           |                         |         |                               |                               |
	| -p      | nospam-20210310201637-6496                | nospam-20210310201637-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:41:42 GMT | Wed, 10 Mar 2021 20:44:25 GMT |
	|         | logs -n 25                                |                                           |                         |         |                               |                               |
	| delete  | -p nospam-20210310201637-6496             | nospam-20210310201637-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:44:37 GMT | Wed, 10 Mar 2021 20:44:59 GMT |
	| -p      | docker-flags-20210310201637-6496          | docker-flags-20210310201637-6496          | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:47:18 GMT | Wed, 10 Mar 2021 20:49:03 GMT |
	|         | logs -n 25                                |                                           |                         |         |                               |                               |
	| delete  | -p                                        | docker-flags-20210310201637-6496          | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:49:21 GMT | Wed, 10 Mar 2021 20:49:47 GMT |
	|         | docker-flags-20210310201637-6496          |                                           |                         |         |                               |                               |
	| delete  | -p                                        | force-systemd-env-20210310201637-6496     | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:49:41 GMT | Wed, 10 Mar 2021 20:50:17 GMT |
	|         | force-systemd-env-20210310201637-6496     |                                           |                         |         |                               |                               |
	| -p      | cert-options-20210310203249-6496          | cert-options-20210310203249-6496          | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:50:36 GMT | Wed, 10 Mar 2021 20:50:43 GMT |
	|         | ssh openssl x509 -text -noout -in         |                                           |                         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt     |                                           |                         |         |                               |                               |
	| delete  | -p                                        | cert-options-20210310203249-6496          | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:51:10 GMT | Wed, 10 Mar 2021 20:51:56 GMT |
	|         | cert-options-20210310203249-6496          |                                           |                         |         |                               |                               |
	| delete  | -p                                        | disable-driver-mounts-20210310205156-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:51:57 GMT | Wed, 10 Mar 2021 20:52:02 GMT |
	|         | disable-driver-mounts-20210310205156-6496 |                                           |                         |         |                               |                               |
	| -p      | force-systemd-flag-20210310203447-6496    | force-systemd-flag-20210310203447-6496    | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:53:03 GMT | Wed, 10 Mar 2021 20:53:44 GMT |
	|         | ssh docker info --format                  |                                           |                         |         |                               |                               |
	|         |                          |                                           |                         |         |                               |                               |
	| delete  | -p                                        | force-systemd-flag-20210310203447-6496    | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:54:07 GMT | Wed, 10 Mar 2021 20:54:36 GMT |
	|         | force-systemd-flag-20210310203447-6496    |                                           |                         |         |                               |                               |
	| stop    | -p                                        | old-k8s-version-20210310204459-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:02:19 GMT | Wed, 10 Mar 2021 21:02:40 GMT |
	|         | old-k8s-version-20210310204459-6496       |                                           |                         |         |                               |                               |
	|         | --alsologtostderr -v=3                    |                                           |                         |         |                               |                               |
	| addons  | enable dashboard -p                       | old-k8s-version-20210310204459-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:02:42 GMT | Wed, 10 Mar 2021 21:02:42 GMT |
	|         | old-k8s-version-20210310204459-6496       |                                           |                         |         |                               |                               |
	| -p      | embed-certs-20210310205017-6496           | embed-certs-20210310205017-6496           | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:07:05 GMT | Wed, 10 Mar 2021 21:08:33 GMT |
	|         | logs -n 25                                |                                           |                         |         |                               |                               |
	| start   | -p                                        | stopped-upgrade-20210310201637-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:52:21 GMT | Wed, 10 Mar 2021 21:09:23 GMT |
	|         | stopped-upgrade-20210310201637-6496       |                                           |                         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr           |                                           |                         |         |                               |                               |
	|         | -v=1 --driver=docker                      |                                           |                         |         |                               |                               |
	| logs    | -p                                        | stopped-upgrade-20210310201637-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:09:23 GMT | Wed, 10 Mar 2021 21:10:51 GMT |
	|         | stopped-upgrade-20210310201637-6496       |                                           |                         |         |                               |                               |
	| delete  | -p                                        | stopped-upgrade-20210310201637-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:10:52 GMT | Wed, 10 Mar 2021 21:11:13 GMT |
	|         | stopped-upgrade-20210310201637-6496       |                                           |                         |         |                               |                               |
	| delete  | -p                                        | running-upgrade-20210310201637-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:11:45 GMT | Wed, 10 Mar 2021 21:12:11 GMT |
	|         | running-upgrade-20210310201637-6496       |                                           |                         |         |                               |                               |
	| stop    | -p                                        | embed-certs-20210310205017-6496           | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:12:03 GMT | Wed, 10 Mar 2021 21:12:38 GMT |
	|         | embed-certs-20210310205017-6496           |                                           |                         |         |                               |                               |
	|         | --alsologtostderr -v=3                    |                                           |                         |         |                               |                               |
	| addons  | enable dashboard -p                       | embed-certs-20210310205017-6496           | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:12:40 GMT | Wed, 10 Mar 2021 21:12:41 GMT |
	|         | embed-certs-20210310205017-6496           |                                           |                         |         |                               |                               |
	|---------|-------------------------------------------|-------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/03/10 21:12:41
	* Running on machine: windows-server-1
	* Binary: Built with gc go1.16 for windows/amd64
	* Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	* I0310 21:12:41.864466   18444 out.go:239] Setting OutFile to fd 2560 ...
	* I0310 21:12:41.865478   18444 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 21:12:41.865478   18444 out.go:252] Setting ErrFile to fd 1780...
	* I0310 21:12:41.865478   18444 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 21:12:41.876384   18444 out.go:246] Setting JSON to false
	* I0310 21:12:41.878392   18444 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":36227,"bootTime":1615374534,"procs":118,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	* W0310 21:12:41.879390   18444 start.go:116] gopshost.Virtualization returned error: not implemented yet
	* I0310 21:12:41.883412   18444 out.go:129] * [embed-certs-20210310205017-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	* I0310 21:12:38.444306   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:12:39.676364   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:12:40.812667   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:12:42.057695   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:12:41.886411   18444 out.go:129]   - MINIKUBE_LOCATION=10722
	* I0310 21:12:41.897821   18444 driver.go:323] Setting default libvirt URI to qemu:///system
	* I0310 21:12:42.439380   18444 docker.go:119] docker version: linux-20.10.2
	* I0310 21:12:42.446315   18444 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 21:12:43.723542   18444 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.277228s)
	* I0310 21:12:43.724888   18444 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:9 ContainersRunning:8 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:97 OomKillDisable:true NGoroutines:71 SystemTime:2021-03-10 21:12:43.1218639 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 21:12:43.728250   18444 out.go:129] * Using the docker driver based on existing profile
	* I0310 21:12:43.729044   18444 start.go:276] selected driver: docker
	* I0310 21:12:43.729044   18444 start.go:718] validating driver "docker" against &{Name:embed-certs-20210310205017-6496 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:embed-certs-20210310205017-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.97 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[dashboard:true] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 21:12:43.729311   18444 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	* I0310 21:12:44.851133   18444 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 21:12:45.911277   18444 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0601454s)
	* I0310 21:12:45.912501   18444 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:9 ContainersRunning:8 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:71 SystemTime:2021-03-10 21:12:45.4750355 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 21:12:45.913355   18444 start_flags.go:717] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	* I0310 21:12:45.913484   18444 start_flags.go:398] config:
	* {Name:embed-certs-20210310205017-6496 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:embed-certs-20210310205017-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.97 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[dashboard:true] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 21:12:45.918095   18444 out.go:129] * Starting control plane node embed-certs-20210310205017-6496 in cluster embed-certs-20210310205017-6496
	* I0310 21:12:42.551198   22316 kic_runner.go:124] Done: [docker exec --privileged false-20210310211211-6496 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.9829709s)
	* I0310 21:12:42.563519   22316 kic.go:240] ensuring only current user has permissions to key file located at : C:\Users\jenkins\.minikube\machines\false-20210310211211-6496\id_rsa...
	* I0310 21:12:43.412980   22316 cli_runner.go:115] Run: docker container inspect false-20210310211211-6496 --format=
	* I0310 21:12:44.102425   22316 machine.go:88] provisioning docker machine ...
	* I0310 21:12:44.102912   22316 ubuntu.go:169] provisioning hostname "false-20210310211211-6496"
	* I0310 21:12:44.115201   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	* I0310 21:12:44.767257   22316 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:12:44.768085   22316 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55178 <nil> <nil>}
	* I0310 21:12:44.768232   22316 main.go:121] libmachine: About to run SSH command:
	* sudo hostname false-20210310211211-6496 && echo "false-20210310211211-6496" | sudo tee /etc/hostname
	* I0310 21:12:44.780818   22316 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	* I0310 21:12:46.600599   18444 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	* I0310 21:12:46.600599   18444 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	* I0310 21:12:46.601017   18444 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 21:12:46.601421   18444 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 21:12:46.601421   18444 cache.go:54] Caching tarball of preloaded images
	* I0310 21:12:46.601731   18444 preload.go:131] Found C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	* I0310 21:12:46.601731   18444 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on docker
	* I0310 21:12:46.602007   18444 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\config.json ...
	* I0310 21:12:46.616567   18444 cache.go:185] Successfully downloaded all kic artifacts
	* I0310 21:12:46.622732   18444 start.go:313] acquiring machines lock for embed-certs-20210310205017-6496: {Name:mk5deb5478a17b664131b4c3205eef748b11179e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:12:46.624001   18444 start.go:317] acquired machines lock for "embed-certs-20210310205017-6496" in 300.2µs
	* I0310 21:12:46.624373   18444 start.go:93] Skipping create...Using existing machine configuration
	* I0310 21:12:46.624586   18444 fix.go:55] fixHost starting: 
	* I0310 21:12:46.639912   18444 cli_runner.go:115] Run: docker container inspect embed-certs-20210310205017-6496 --format=
	* I0310 21:12:47.310427   18444 fix.go:108] recreateIfNeeded on embed-certs-20210310205017-6496: state=Stopped err=<nil>
	* W0310 21:12:47.310427   18444 fix.go:134] unexpected machine state, will restart: <nil>
	* I0310 21:12:43.470722   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:12:44.606164   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:12:45.734123   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:12:46.804581   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:12:47.314638   18444 out.go:129] * Restarting existing docker container for "embed-certs-20210310205017-6496" ...
	* I0310 21:12:47.319764   18444 cli_runner.go:115] Run: docker start embed-certs-20210310205017-6496
	* I0310 21:12:50.315712   22316 main.go:121] libmachine: SSH cmd err, output: <nil>: false-20210310211211-6496
	* 
	* I0310 21:12:50.336532   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	* I0310 21:12:51.020432   22316 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:12:51.030067   22316 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55178 <nil> <nil>}
	* I0310 21:12:51.030067   22316 main.go:121] libmachine: About to run SSH command:
	* 
	* 		if ! grep -xq '.*\sfalse-20210310211211-6496' /etc/hosts; then
	* 			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	* 				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 false-20210310211211-6496/g' /etc/hosts;
	* 			else 
	* 				echo '127.0.1.1 false-20210310211211-6496' | sudo tee -a /etc/hosts; 
	* 			fi
	* 		fi
	* I0310 21:12:47.356289   18752 ssh_runner.go:189] Completed: docker images --format :: (25.8237496s)
	* I0310 21:12:47.357507   18752 docker.go:423] Got preloaded images: -- stdout --
	* k8s.gcr.io/kube-apiserver:v1.20.5-rc.0
	* k8s.gcr.io/kube-controller-manager:v1.20.5-rc.0
	* k8s.gcr.io/kube-proxy:v1.20.5-rc.0
	* k8s.gcr.io/kube-scheduler:v1.20.5-rc.0
	* kubernetesui/dashboard:v2.1.0
	* gcr.io/k8s-minikube/storage-provisioner:v4
	* k8s.gcr.io/etcd:3.4.13-0
	* k8s.gcr.io/coredns:1.7.0
	* kubernetesui/metrics-scraper:v1.0.4
	* k8s.gcr.io/pause:3.2
	* 
	* -- /stdout --
	* I0310 21:12:47.357507   18752 docker.go:429] minikube-local-cache-test:functional-20210119220838-6552 wasn't preloaded
	* I0310 21:12:47.357507   18752 cache_images.go:76] LoadImages start: [minikube-local-cache-test:functional-20210119220838-6552 minikube-local-cache-test:functional-20210128021318-232 minikube-local-cache-test:functional-20210304184021-4052 minikube-local-cache-test:functional-20210310083645-5040 minikube-local-cache-test:functional-20210310191609-6496 minikube-local-cache-test:functional-20210106002159-6856 minikube-local-cache-test:functional-20210120214442-10992 minikube-local-cache-test:functional-20210219145454-9520 minikube-local-cache-test:functional-20210303214129-4588 minikube-local-cache-test:functional-20210105233232-2512 minikube-local-cache-test:functional-20210212145109-352 minikube-local-cache-test:functional-20210213143925-7440 minikube-local-cache-test:functional-20210106215525-1984 minikube-local-cache-test:functional-20210219220622-3920 minikube-local-cache-test:functional-20210301195830-5700 minikube-local-cache-test:functional-20210115191024-3516 minikube-local-cache-test:functional-20210126212539-5172 minikube-local-cache-test:functional-20210225231842-5736 minikube-local-cache-test:functional-20210112045103-7160 minikube-local-cache-test:functional-20210114204234-6692 minikube-local-cache-test:functional-20210224014800-800 minikube-local-cache-test:functional-20210120231122-7024 minikube-local-cache-test:functional-20210220004129-7452 minikube-local-cache-test:functional-20210306072141-12056 minikube-local-cache-test:functional-20210309234032-4944 minikube-local-cache-test:functional-20210115023213-8464 minikube-local-cache-test:functional-20210120022529-1140 minikube-local-cache-test:functional-20210120175851-7432 minikube-local-cache-test:functional-20210123004019-5372 minikube-local-cache-test:functional-20210304002630-1156 minikube-local-cache-test:functional-20210308233820-5396 minikube-local-cache-test:functional-20210106011107-6492 minikube-local-cache-test:functional-20210107002220-9088 minikube-local-cache-test:functional-20210107190945-8748]
	* I0310 21:12:47.560647   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210304002630-1156
	* I0310 21:12:47.573296   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210309234032-4944
	* I0310 21:12:47.591639   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210120175851-7432
	* I0310 21:12:47.593746   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210304184021-4052
	* I0310 21:12:47.623345   18752 image.go:168] retrieving image: minikube-local-cache-test:functional-20210106215525-1984
	* I0310 21:12:47.639327   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210308233820-5396
	* I0310 21:12:47.639327   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210303214129-4588
	* I0310 21:12:47.657974   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210119220838-6552
	* I0310 21:12:47.674059   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210306072141-12056
	* I0310 21:12:47.697841   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210213143925-7440
	* I0310 21:12:47.728111   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210220004129-7452
	* I0310 21:12:47.747321   18752 image.go:168] retrieving image: minikube-local-cache-test:functional-20210106011107-6492
	* I0310 21:12:47.784534   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210224014800-800
	* I0310 21:12:47.805452   18752 image.go:168] retrieving image: minikube-local-cache-test:functional-20210107190945-8748
	* I0310 21:12:47.848672   18752 image.go:168] retrieving image: minikube-local-cache-test:functional-20210106002159-6856
	* I0310 21:12:47.868695   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210120214442-10992
	* I0310 21:12:47.870760   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210126212539-5172
	* I0310 21:12:47.902259   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210115191024-3516
	* I0310 21:12:47.906814   18752 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210106002159-6856: Error response from daemon: reference does not exist
	* I0310 21:12:47.923463   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210123004019-5372
	* I0310 21:12:47.928202   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210120022529-1140
	* I0310 21:12:47.956178   18752 image.go:168] retrieving image: minikube-local-cache-test:functional-20210105233232-2512
	* I0310 21:12:47.963166   18752 image.go:168] retrieving image: minikube-local-cache-test:functional-20210112045103-7160
	* I0310 21:12:47.998199   18752 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210112045103-7160: Error response from daemon: reference does not exist
	* I0310 21:12:48.064967   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210219145454-9520
	* I0310 21:12:48.074644   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210225231842-5736
	* I0310 21:12:48.088452   18752 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210106215525-1984: Error response from daemon: reference does not exist
	* W0310 21:12:48.092246   18752 image.go:185] authn lookup for minikube-local-cache-test:functional-20210106002159-6856 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	* I0310 21:12:48.108649   18752 image.go:168] retrieving image: minikube-local-cache-test:functional-20210107002220-9088
	* I0310 21:12:48.112658   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210120231122-7024
	* I0310 21:12:48.135760   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210310083645-5040
	* I0310 21:12:48.151248   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210114204234-6692
	* W0310 21:12:48.174652   18752 image.go:185] authn lookup for minikube-local-cache-test:functional-20210112045103-7160 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	* I0310 21:12:48.195793   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210128021318-232
	* I0310 21:12:48.226996   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210212145109-352
	* I0310 21:12:48.245445   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210310191609-6496
	* I0310 21:12:48.257949   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210115023213-8464
	* I0310 21:12:48.267306   18752 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210105233232-2512: Error response from daemon: reference does not exist
	* W0310 21:12:48.277733   18752 image.go:185] authn lookup for minikube-local-cache-test:functional-20210106215525-1984 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	* I0310 21:12:48.351732   18752 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210106011107-6492: Error response from daemon: reference does not exist
	* I0310 21:12:48.370641   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210219220622-3920
	* I0310 21:12:48.388773   18752 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210107190945-8748: Error response from daemon: reference does not exist
	* I0310 21:12:48.388773   18752 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210107002220-9088: Error response from daemon: reference does not exist
	* W0310 21:12:48.443070   18752 image.go:185] authn lookup for minikube-local-cache-test:functional-20210105233232-2512 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	* I0310 21:12:48.451259   18752 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210112045103-7160 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210112045103-7160: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	* I0310 21:12:48.451715   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210112045103-7160" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210112045103-7160
	* I0310 21:12:48.451913   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210112045103-7160 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160
	* I0310 21:12:48.451913   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160
	* I0310 21:12:48.465810   18752 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210106002159-6856 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210106002159-6856: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	* I0310 21:12:48.465810   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210106002159-6856" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210106002159-6856
	* I0310 21:12:48.465810   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106002159-6856 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856
	* I0310 21:12:48.465810   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856
	* I0310 21:12:48.473883   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210301195830-5700
	* I0310 21:12:48.480057   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856
	* I0310 21:12:48.480057   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160
	* I0310 21:12:48.491637   18752 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210106215525-1984 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210106215525-1984: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	* I0310 21:12:48.492632   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210106215525-1984" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210106215525-1984
	* I0310 21:12:48.492632   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106215525-1984 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984
	* I0310 21:12:48.492632   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984
	* I0310 21:12:48.519108   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984
	* W0310 21:12:48.538349   18752 image.go:185] authn lookup for minikube-local-cache-test:functional-20210106011107-6492 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	* W0310 21:12:48.555402   18752 image.go:185] authn lookup for minikube-local-cache-test:functional-20210107002220-9088 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	* W0310 21:12:48.580997   18752 image.go:185] authn lookup for minikube-local-cache-test:functional-20210107190945-8748 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	* I0310 21:12:48.610702   18752 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210105233232-2512 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210105233232-2512: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	* I0310 21:12:48.610702   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210105233232-2512" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210105233232-2512
	* I0310 21:12:48.611057   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210105233232-2512 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512
	* I0310 21:12:48.611338   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512
	* I0310 21:12:48.631430   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512
	* I0310 21:12:48.660144   18752 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210106011107-6492 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210106011107-6492: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	* I0310 21:12:48.660348   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210106011107-6492" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210106011107-6492
	* I0310 21:12:48.660668   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106011107-6492 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492
	* I0310 21:12:48.660668   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492
	* I0310 21:12:48.672287   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492
	* I0310 21:12:48.681108   18752 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210107002220-9088 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210107002220-9088: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	* I0310 21:12:48.682325   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210107002220-9088" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210107002220-9088
	* I0310 21:12:48.682769   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107002220-9088 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088
	* I0310 21:12:48.682769   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088
	* I0310 21:12:48.689153   18752 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210107190945-8748 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210107190945-8748: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	* I0310 21:12:48.689153   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210107190945-8748" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210107190945-8748
	* I0310 21:12:48.689574   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107190945-8748 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748
	* I0310 21:12:48.689574   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748
	* I0310 21:12:48.701837   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748
	* I0310 21:12:48.702845   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088
	* W0310 21:12:51.262622   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:12:51.262622   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 21:12:51.262622   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210120214442-10992" needs transfer: "minikube-local-cache-test:functional-20210120214442-10992" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:12:51.262622   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120214442-10992 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992
	* I0310 21:12:51.263118   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992
	* W0310 21:12:51.263118   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:12:51.263118   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 21:12:51.263118   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210123004019-5372" needs transfer: "minikube-local-cache-test:functional-20210123004019-5372" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:12:51.263118   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210120231122-7024" needs transfer: "minikube-local-cache-test:functional-20210120231122-7024" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:12:51.263118   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210123004019-5372 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372
	* I0310 21:12:51.263118   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120231122-7024 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024
	* I0310 21:12:51.263118   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372
	* I0310 21:12:51.263118   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024
	* W0310 21:12:51.263344   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 21:12:51.263344   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210224014800-800" needs transfer: "minikube-local-cache-test:functional-20210224014800-800" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:12:51.263344   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210224014800-800 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800
	* I0310 21:12:51.263608   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800
	* W0310 21:12:51.263608   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 21:12:51.263773   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088: NewSession: ssh: rejected: connect failed (open failed)
	* W0310 21:12:51.263118   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 21:12:51.264568   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210126212539-5172" needs transfer: "minikube-local-cache-test:functional-20210126212539-5172" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* W0310 21:12:51.263118   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:12:51.263118   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:12:51.263118   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:12:51.263118   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:12:51.263118   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:12:51.262622   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:12:51.263773   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:12:51.263773   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:12:51.263773   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:12:51.263773   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:12:51.263773   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:12:51.263773   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:12:51.263773   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:12:51.264031   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:12:51.264031   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 21:12:51.263118   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210115023213-8464" needs transfer: "minikube-local-cache-test:functional-20210115023213-8464" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* W0310 21:12:51.263118   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:12:51.264031   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 21:12:51.264395   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088 (4096 bytes)
	* I0310 21:12:51.265396   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856: NewSession: ssh: rejected: connect failed (open failed)
	* I0310 21:12:51.265396   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210219220622-3920" needs transfer: "minikube-local-cache-test:functional-20210219220622-3920" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:12:51.265396   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219220622-3920 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920
	* I0310 21:12:51.265396   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920
	* I0310 21:12:51.265396   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856 (4096 bytes)
	* I0310 21:12:51.265689   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210301195830-5700" needs transfer: "minikube-local-cache-test:functional-20210301195830-5700" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:12:51.265800   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210301195830-5700 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700
	* I0310 21:12:51.265800   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700
	* I0310 21:12:51.265800   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115023213-8464 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464
	* I0310 21:12:51.266048   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464
	* I0310 21:12:51.265396   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984: NewSession: ssh: rejected: connect failed (open failed)
	* I0310 21:12:51.265103   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210126212539-5172 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172
	* I0310 21:12:51.266741   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984 (4096 bytes)
	* I0310 21:12:51.265103   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210212145109-352" needs transfer: "minikube-local-cache-test:functional-20210212145109-352" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:12:51.265103   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210128021318-232" needs transfer: "minikube-local-cache-test:functional-20210128021318-232" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:12:51.265103   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210310191609-6496" needs transfer: "minikube-local-cache-test:functional-20210310191609-6496" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:12:51.265103   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210225231842-5736" needs transfer: "minikube-local-cache-test:functional-20210225231842-5736" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:12:51.265103   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210120022529-1140" needs transfer: "minikube-local-cache-test:functional-20210120022529-1140" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:12:51.265396   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210114204234-6692" needs transfer: "minikube-local-cache-test:functional-20210114204234-6692" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:12:51.265396   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210115191024-3516" needs transfer: "minikube-local-cache-test:functional-20210115191024-3516" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:12:51.265396   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210310083645-5040" needs transfer: "minikube-local-cache-test:functional-20210310083645-5040" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:12:51.265396   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160: NewSession: ssh: rejected: connect failed (open failed)
	* I0310 21:12:51.265103   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512: NewSession: ssh: rejected: connect failed (open failed)
	* I0310 21:12:51.267060   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172
	* I0310 21:12:51.267060   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512 (4096 bytes)
	* I0310 21:12:51.267060   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492: NewSession: ssh: rejected: connect failed (open failed)
	* I0310 21:12:51.267060   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210114204234-6692 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692
	* I0310 21:12:51.278344   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692
	* I0310 21:12:51.267060   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115191024-3516 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516
	* I0310 21:12:51.279628   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210219145454-9520" needs transfer: "minikube-local-cache-test:functional-20210219145454-9520" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:12:51.279941   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219145454-9520 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520
	* I0310 21:12:51.279941   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520
	* I0310 21:12:51.267060   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310083645-5040 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040
	* I0310 21:12:51.280346   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040
	* I0310 21:12:51.280554   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160 (4096 bytes)
	* I0310 21:12:51.282162   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120022529-1140 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140
	* I0310 21:12:51.282162   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140
	* I0310 21:12:51.266741   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748: NewSession: ssh: rejected: connect failed (open failed)
	* I0310 21:12:51.282481   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210225231842-5736 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736
	* I0310 21:12:51.282481   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736
	* I0310 21:12:51.282680   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748 (4096 bytes)
	* I0310 21:12:51.282680   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492 (4096 bytes)
	* I0310 21:12:51.283252   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516
	* I0310 21:12:51.282162   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210212145109-352 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352
	* I0310 21:12:51.284541   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352
	* I0310 21:12:51.282481   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210128021318-232 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232
	* I0310 21:12:51.284736   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232
	* I0310 21:12:51.282481   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310191609-6496 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496
	* I0310 21:12:51.285901   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496
	* I0310 21:12:51.469515   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024
	* I0310 21:12:51.484589   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372
	* I0310 21:12:51.533483   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992
	* I0310 21:12:51.629820   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692
	* I0310 21:12:51.706342   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800
	* I0310 21:12:51.707710   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.708397   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.708896   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.732967   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.733363   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.733561   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.743681   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700
	* I0310 21:12:51.745735   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.753348   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172
	* I0310 21:12:51.755571   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736
	* I0310 21:12:51.756103   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040
	* I0310 21:12:51.762547   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232
	* I0310 21:12:51.775210   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920
	* I0310 21:12:51.775919   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496
	* I0310 21:12:51.800100   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464
	* I0310 21:12:51.810091   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516
	* I0310 21:12:51.813706   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352
	* I0310 21:12:51.813706   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.813706   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520
	* I0310 21:12:51.813706   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140
	* I0310 21:12:51.820771   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.827809   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.860588   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.868441   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.886285   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.908114   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.919653   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.930053   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.930053   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.945930   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.949437   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.952434   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.948834   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.953370   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.975870   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.975870   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:48.030376   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:12:49.199707   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:12:51.280979   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:12:54.762225   18444 cli_runner.go:168] Completed: docker start embed-certs-20210310205017-6496: (7.4424719s)
	* I0310 21:12:54.773282   18444 cli_runner.go:115] Run: docker container inspect embed-certs-20210310205017-6496 --format=
	* I0310 21:12:55.405890   18444 kic.go:410] container "embed-certs-20210310205017-6496" state is running.
	* I0310 21:12:55.438914   18444 cli_runner.go:115] Run: docker container inspect -f "" embed-certs-20210310205017-6496
	* I0310 21:12:56.104627   18444 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\config.json ...
	* I0310 21:12:56.120914   18444 machine.go:88] provisioning docker machine ...
	* I0310 21:12:56.121045   18444 ubuntu.go:169] provisioning hostname "embed-certs-20210310205017-6496"
	* I0310 21:12:56.131918   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	* I0310 21:12:53.779092   22316 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	* I0310 21:12:53.780765   22316 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	* I0310 21:12:53.780765   22316 ubuntu.go:177] setting up certificates
	* I0310 21:12:53.781192   22316 provision.go:83] configureAuth start
	* I0310 21:12:53.805343   22316 cli_runner.go:115] Run: docker container inspect -f "" false-20210310211211-6496
	* I0310 21:12:54.440272   22316 provision.go:137] copyHostCerts
	* I0310 21:12:54.440272   22316 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	* I0310 21:12:54.440798   22316 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	* I0310 21:12:54.441178   22316 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	* I0310 21:12:54.445616   22316 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	* I0310 21:12:54.445616   22316 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	* I0310 21:12:54.446282   22316 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	* I0310 21:12:54.449747   22316 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	* I0310 21:12:54.449987   22316 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	* I0310 21:12:54.450644   22316 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	* I0310 21:12:54.455779   22316 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.false-20210310211211-6496 san=[172.17.0.8 127.0.0.1 localhost 127.0.0.1 minikube false-20210310211211-6496]
	* I0310 21:12:54.748978   22316 provision.go:165] copyRemoteCerts
	* I0310 21:12:54.766034   22316 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	* I0310 21:12:54.774187   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	* I0310 21:12:55.399153   22316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55178 SSHKeyPath:C:\Users\jenkins\.minikube\machines\false-20210310211211-6496\id_rsa Username:docker}
	* I0310 21:12:56.240137   22316 ssh_runner.go:189] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.4741049s)
	* I0310 21:12:56.240137   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	* I0310 21:12:56.735057   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1249 bytes)
	* I0310 21:12:53.395822   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.662745s)
	* I0310 21:12:53.396719   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.485017   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.6713142s)
	* I0310 21:12:53.485539   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.496519   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.7629602s)
	* I0310 21:12:53.496793   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.512111   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.6511935s)
	* I0310 21:12:53.512111   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.541713   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.7209444s)
	* I0310 21:12:53.541952   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.590524   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8569656s)
	* I0310 21:12:53.590700   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.621272   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.7934655s)
	* I0310 21:12:53.621687   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.642607   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.664234s)
	* I0310 21:12:53.642607   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.668750   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8003122s)
	* I0310 21:12:53.668963   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.688746   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.9810395s)
	* I0310 21:12:53.689097   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.758147   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8047789s)
	* I0310 21:12:53.758745   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.759520   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8398693s)
	* I0310 21:12:53.760251   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.770222   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8401715s)
	* I0310 21:12:53.770667   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.785284   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8990014s)
	* I0310 21:12:53.785776   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.817210   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8868726s)
	* I0310 21:12:53.817447   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.818574   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0728419s)
	* I0310 21:12:53.818919   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.843754   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.13536s)
	* I0310 21:12:53.844110   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.883965   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.1750717s)
	* I0310 21:12:53.884398   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.958209   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0084306s)
	* I0310 21:12:53.959831   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0089094s)
	* I0310 21:12:53.959831   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.960273   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.966133   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0137022s)
	* I0310 21:12:53.968446   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.994078   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0407109s)
	* I0310 21:12:53.994078   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:54.009966   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0316693s)
	* I0310 21:12:54.009966   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.1018557s)
	* I0310 21:12:54.010638   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:54.010638   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.272827   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:12:56.598226   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:12:56.741795   18444 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:12:56.743328   18444 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55183 <nil> <nil>}
	* I0310 21:12:56.743564   18444 main.go:121] libmachine: About to run SSH command:
	* sudo hostname embed-certs-20210310205017-6496 && echo "embed-certs-20210310205017-6496" | sudo tee /etc/hostname
	* I0310 21:12:56.757510   18444 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	* I0310 21:12:59.783456   18444 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	* I0310 21:12:57.353881   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	* I0310 21:12:58.233074   22316 provision.go:86] duration metric: configureAuth took 4.4518884s
	* I0310 21:12:58.233074   22316 ubuntu.go:193] setting minikube options for container-runtime
	* I0310 21:12:58.267262   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	* I0310 21:12:59.006727   22316 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:12:59.007814   22316 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55178 <nil> <nil>}
	* I0310 21:12:59.007814   22316 main.go:121] libmachine: About to run SSH command:
	* df --output=fstype / | tail -n 1
	* I0310 21:13:00.406956   22316 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	* 
	* I0310 21:13:00.406956   22316 ubuntu.go:71] root file system type: overlay
	* I0310 21:13:00.407166   22316 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	* I0310 21:13:00.416655   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	* I0310 21:13:01.049891   22316 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:13:01.050764   22316 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55178 <nil> <nil>}
	* I0310 21:13:01.050764   22316 main.go:121] libmachine: About to run SSH command:
	* sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP \$MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* " | sudo tee /lib/systemd/system/docker.service.new
	* I0310 21:12:57.361413   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210120175851-7432" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 21:12:57.361612   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210303214129-4588" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 21:12:57.361612   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210303214129-4588 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588
	* I0310 21:12:57.361612   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588
	* I0310 21:12:57.361612   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210308233820-5396" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 21:12:57.361612   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210304002630-1156" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 21:12:57.361808   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210119220838-6552" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 21:12:57.361808   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210119220838-6552 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552
	* I0310 21:12:57.361612   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210308233820-5396 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396
	* I0310 21:12:57.361808   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396
	* I0310 21:12:57.361808   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210309234032-4944" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 21:12:57.361808   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210309234032-4944 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944
	* I0310 21:12:57.361808   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944
	* I0310 21:12:57.361808   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210306072141-12056" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 21:12:57.362006   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210306072141-12056 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056
	* I0310 21:12:57.362006   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056
	* I0310 21:12:57.361612   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120175851-7432 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432
	* I0310 21:12:57.362270   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432
	* I0310 21:12:57.361808   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304002630-1156 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156
	* I0310 21:12:57.362625   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156
	* I0310 21:12:57.361612   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210220004129-7452" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 21:12:57.363064   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210220004129-7452 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452
	* I0310 21:12:57.363064   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452
	* I0310 21:12:57.361413   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210213143925-7440" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 21:12:57.363337   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210213143925-7440 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440
	* I0310 21:12:57.361612   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210304184021-4052" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 21:12:57.361808   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552
	* I0310 21:12:57.364314   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440
	* I0310 21:12:57.364462   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304184021-4052 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052
	* I0310 21:12:57.367708   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052
	* I0310 21:12:57.466731   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396
	* I0310 21:12:57.476748   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:57.519791   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588
	* I0310 21:12:57.540705   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:57.552653   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056
	* I0310 21:12:57.560773   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452
	* I0310 21:12:57.562999   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:57.563922   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156
	* I0310 21:12:57.566901   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944
	* I0310 21:12:57.579023   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:57.579023   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432
	* I0310 21:12:57.590099   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:57.591785   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440
	* I0310 21:12:57.592789   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552
	* I0310 21:12:57.596786   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052
	* I0310 21:12:57.597202   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:57.601772   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:57.608041   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:57.617147   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:57.625602   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:58.454521   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:58.501263   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0245158s)
	* I0310 21:12:58.501555   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:58.513022   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:58.563355   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0226521s)
	* I0310 21:12:58.563355   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:58.637870   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0294542s)
	* I0310 21:12:58.638141   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:58.653923   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0281021s)
	* I0310 21:12:58.653923   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:58.679155   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0890575s)
	* I0310 21:12:58.679155   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:58.686201   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0690557s)
	* I0310 21:12:58.687282   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:58.753102   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.1513309s)
	* I0310 21:12:58.753102   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.1559017s)
	* I0310 21:12:58.753709   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:58.753709   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:57.839214   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:12:59.277594   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:00.804139   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:01.868486   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:02.798933   18444 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	* I0310 21:13:02.963365   22316 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP $MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* 
	* I0310 21:13:02.971839   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	* I0310 21:13:03.615956   22316 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:13:03.617620   22316 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55178 <nil> <nil>}
	* I0310 21:13:03.617911   22316 main.go:121] libmachine: About to run SSH command:
	* sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	* W0310 21:13:04.744994   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 21:13:04.744994   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736: NewSession: new client: new client: ssh: handshake failed: EOF
	* I0310 21:13:04.745224   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736 (4096 bytes)
	* I0310 21:13:04.758785   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* W0310 21:13:05.269849   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 21:13:05.269849   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496: NewSession: new client: new client: ssh: handshake failed: EOF
	* I0310 21:13:05.269849   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496 (4096 bytes)
	* I0310 21:13:05.285833   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:13:05.377486   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:13:05.926683   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:13:03.762450   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:05.609769   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:07.699476   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:07.819194   18444 main.go:121] libmachine: SSH cmd err, output: <nil>: embed-certs-20210310205017-6496
	* 
	* I0310 21:13:07.838772   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	* I0310 21:13:08.438721   18444 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:13:08.439026   18444 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55183 <nil> <nil>}
	* I0310 21:13:08.439026   18444 main.go:121] libmachine: About to run SSH command:
	* 
	* 		if ! grep -xq '.*\sembed-certs-20210310205017-6496' /etc/hosts; then
	* 			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	* 				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20210310205017-6496/g' /etc/hosts;
	* 			else 
	* 				echo '127.0.1.1 embed-certs-20210310205017-6496' | sudo tee -a /etc/hosts; 
	* 			fi
	* 		fi
	* I0310 21:13:09.846781   18444 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	* I0310 21:13:09.847029   18444 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	* I0310 21:13:09.847029   18444 ubuntu.go:177] setting up certificates
	* I0310 21:13:09.847029   18444 provision.go:83] configureAuth start
	* I0310 21:13:09.857215   18444 cli_runner.go:115] Run: docker container inspect -f "" embed-certs-20210310205017-6496
	* I0310 21:13:10.524620   18444 provision.go:137] copyHostCerts
	* I0310 21:13:10.525147   18444 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	* I0310 21:13:10.525431   18444 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	* I0310 21:13:10.525817   18444 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	* I0310 21:13:10.531398   18444 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	* I0310 21:13:10.531398   18444 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	* I0310 21:13:10.531945   18444 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	* I0310 21:13:10.538062   18444 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	* I0310 21:13:10.538062   18444 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	* I0310 21:13:10.539306   18444 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	* I0310 21:13:10.542016   18444 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.embed-certs-20210310205017-6496 san=[192.168.49.97 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20210310205017-6496]
	* I0310 21:13:10.734235   18444 provision.go:165] copyRemoteCerts
	* I0310 21:13:10.742139   18444 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	* I0310 21:13:10.751385   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	* I0310 21:13:11.337830   18444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55183 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	* W0310 21:13:10.795233   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 21:13:09.215487   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:10.222082   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:11.664728   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:12.461526   18444 ssh_runner.go:189] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.7193887s)
	* I0310 21:13:12.462290   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	* I0310 21:13:13.480520   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1265 bytes)
	* I0310 21:13:14.824810   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	* I0310 21:13:16.109429   18444 provision.go:86] duration metric: configureAuth took 6.2624095s
	* I0310 21:13:16.109429   18444 ubuntu.go:193] setting minikube options for container-runtime
	* I0310 21:13:16.120584   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	* W0310 21:13:15.085213   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 21:13:15.085485   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232: NewSession: new client: new client: ssh: handshake failed: EOF
	* I0310 21:13:15.085915   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232 (4096 bytes)
	* I0310 21:13:15.101929   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:13:15.738383   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:13:14.018661   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:15.188555   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:16.713288   18444 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:13:16.714294   18444 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55183 <nil> <nil>}
	* I0310 21:13:16.714294   18444 main.go:121] libmachine: About to run SSH command:
	* df --output=fstype / | tail -n 1
	* I0310 21:13:18.734749   18444 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	* 
	* I0310 21:13:18.734749   18444 ubuntu.go:71] root file system type: overlay
	* I0310 21:13:18.735219   18444 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	* I0310 21:13:18.742632   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	* I0310 21:13:19.364241   18444 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:13:19.364241   18444 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55183 <nil> <nil>}
	* I0310 21:13:19.365793   18444 main.go:121] libmachine: About to run SSH command:
	* sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP \$MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* " | sudo tee /lib/systemd/system/docker.service.new
	* I0310 21:13:21.300423   18444 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP $MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* 
	* I0310 21:13:21.308894   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	* W0310 21:13:18.491146   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 21:13:18.491490   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140: NewSession: new client: new client: ssh: handshake failed: EOF
	* I0310 21:13:18.493152   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140 (4096 bytes)
	* I0310 21:13:18.512811   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:13:19.150277   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* W0310 21:13:19.460446   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 21:13:19.460446   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516: NewSession: new client: new client: ssh: handshake failed: EOF
	* I0310 21:13:19.461481   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516 (4096 bytes)
	* I0310 21:13:19.466948   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:13:20.136532   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:13:19.310940   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:20.542067   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:21.744034   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:21.934752   18444 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:13:21.935488   18444 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55183 <nil> <nil>}
	* I0310 21:13:21.935808   18444 main.go:121] libmachine: About to run SSH command:
	* sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	* I0310 21:13:24.022228   18444 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	* I0310 21:13:24.022499   18444 machine.go:91] provisioned docker machine in 27.9014928s
	* I0310 21:13:24.022499   18444 start.go:267] post-start starting for "embed-certs-20210310205017-6496" (driver="docker")
	* I0310 21:13:24.022499   18444 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	* I0310 21:13:24.025597   18444 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	* I0310 21:13:24.039903   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	* I0310 21:13:24.692780   18444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55183 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	* I0310 21:13:25.877491   18444 ssh_runner.go:189] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.8518959s)
	* I0310 21:13:25.894216   18444 ssh_runner.go:149] Run: cat /etc/os-release
	* I0310 21:13:26.002177   18444 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	* I0310 21:13:26.002579   18444 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	* I0310 21:13:26.002579   18444 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	* I0310 21:13:26.002579   18444 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	* I0310 21:13:26.002861   18444 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	* I0310 21:13:26.003528   18444 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	* I0310 21:13:26.007088   18444 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	* I0310 21:13:26.007611   18444 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	* I0310 21:13:26.020549   18444 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	* I0310 21:13:26.333222   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	* I0310 21:13:25.623756   22316 main.go:121] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-01-29 14:31:32.000000000 +0000
	* +++ /lib/systemd/system/docker.service.new	2021-03-10 21:13:02.944722000 +0000
	* @@ -1,30 +1,32 @@
	*  [Unit]
	*  Description=Docker Application Container Engine
	*  Documentation=https://docs.docker.com
	* +BindsTo=containerd.service
	*  After=network-online.target firewalld.service containerd.service
	*  Wants=network-online.target
	* -Requires=docker.socket containerd.service
	* +Requires=docker.socket
	* +StartLimitBurst=3
	* +StartLimitIntervalSec=60
	*  
	*  [Service]
	*  Type=notify
	* -# the default is not to use systemd for cgroups because the delegate issues still
	* -# exists and systemd currently does not support the cgroup feature set required
	* -# for containers run by docker
	* -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	* -ExecReload=/bin/kill -s HUP $MAINPID
	* -TimeoutSec=0
	* -RestartSec=2
	* -Restart=always
	* -
	* -# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	* -# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	* -# to make them work for either version of systemd.
	* -StartLimitBurst=3
	* +Restart=on-failure
	*  
	* -# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	* -# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	* -# this option work for either version of systemd.
	* -StartLimitInterval=60s
	* +
	* +
	* +# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* +# The base configuration already specifies an 'ExecStart=...' command. The first directive
	* +# here is to clear out that command inherited from the base configuration. Without this,
	* +# the command from the base configuration and the command specified here are treated as
	* +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* +# will catch this invalid input and refuse to start the service with an error like:
	* +#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* +
	* +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* +# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* +ExecStart=
	* +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* +ExecReload=/bin/kill -s HUP $MAINPID
	*  
	*  # Having non-zero Limit*s causes performance problems due to accounting overhead
	*  # in the kernel. We recommend using cgroups to do container-local accounting.
	* @@ -32,16 +34,16 @@
	*  LimitNPROC=infinity
	*  LimitCORE=infinity
	*  
	* -# Comment TasksMax if your systemd version does not support it.
	* -# Only systemd 226 and above support this option.
	* +# Uncomment TasksMax if your systemd version supports it.
	* +# Only systemd 226 and above support this version.
	*  TasksMax=infinity
	* +TimeoutStartSec=0
	*  
	*  # set delegate yes so that systemd does not reset the cgroups of docker containers
	*  Delegate=yes
	*  
	*  # kill only the docker process, not all processes in the cgroup
	*  KillMode=process
	* -OOMScoreAdjust=-500
	*  
	*  [Install]
	*  WantedBy=multi-user.target
	* Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	* Executing: /lib/systemd/systemd-sysv-install enable docker
	* 
	* I0310 21:13:25.624097   22316 machine.go:91] provisioned docker machine in 41.5217304s
	* I0310 21:13:25.624097   22316 client.go:171] LocalClient.Create took 1m8.7849166s
	* I0310 21:13:25.624097   22316 start.go:168] duration metric: libmachine.API.Create for "false-20210310211211-6496" took 1m8.7853333s
	* I0310 21:13:25.624097   22316 start.go:267] post-start starting for "false-20210310211211-6496" (driver="docker")
	* I0310 21:13:25.624097   22316 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	* I0310 21:13:25.634099   22316 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	* I0310 21:13:25.645133   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	* I0310 21:13:26.319049   22316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55178 SSHKeyPath:C:\Users\jenkins\.minikube\machines\false-20210310211211-6496\id_rsa Username:docker}
	* W0310 21:13:22.320407   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 21:13:22.320807   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172: NewSession: new client: new client: ssh: handshake failed: EOF
	* I0310 21:13:22.321052   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172 (4096 bytes)
	* I0310 21:13:22.321052   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* W0310 21:13:22.804709   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 21:13:22.804709   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056: NewSession: new client: new client: ssh: handshake failed: EOF
	* I0310 21:13:22.804709   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056 (4096 bytes)
	* I0310 21:13:22.814336   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:13:23.012763   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:13:23.412759   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* W0310 21:13:25.084062   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 21:13:25.084219   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156: NewSession: new client: new client: ssh: handshake failed: EOF
	* I0310 21:13:25.084994   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156 (4096 bytes)
	* I0310 21:13:25.102916   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* W0310 21:13:25.205188   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 21:13:25.205188   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452: NewSession: new client: new client: ssh: handshake failed: EOF
	* I0310 21:13:25.205631   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452 (4096 bytes)
	* W0310 21:13:25.207930   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 21:13:25.207930   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432: NewSession: new client: new client: ssh: handshake failed: EOF
	* I0310 21:13:25.207930   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432 (4096 bytes)
	* I0310 21:13:25.214321   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:13:25.217320   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:13:25.840746   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:13:25.953687   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:13:25.974626   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* W0310 21:13:26.192805   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* W0310 21:13:26.864421   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 21:13:27.427348   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:27.063958   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	* I0310 21:13:27.872391   18444 start.go:270] post-start completed in 3.8495904s
	* I0310 21:13:27.886786   18444 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	* I0310 21:13:27.894704   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	* I0310 21:13:28.514187   18444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55183 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	* I0310 21:13:29.370055   18444 ssh_runner.go:189] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.4830372s)
	* I0310 21:13:29.370219   18444 fix.go:57] fixHost completed within 42.7459052s
	* I0310 21:13:29.370219   18444 start.go:80] releasing machines lock for "embed-certs-20210310205017-6496", held for 42.7462779s
	* I0310 21:13:29.388654   18444 cli_runner.go:115] Run: docker container inspect -f "" embed-certs-20210310205017-6496
	* I0310 21:13:30.092636   18444 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	* I0310 21:13:30.096865   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	* I0310 21:13:30.096865   18444 ssh_runner.go:149] Run: systemctl --version
	* I0310 21:13:30.111612   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	* I0310 21:13:30.809856   18444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55183 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	* I0310 21:13:30.868547   18444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55183 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	* I0310 21:13:26.965777   22316 ssh_runner.go:189] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.3316798s)
	* I0310 21:13:26.979571   22316 ssh_runner.go:149] Run: cat /etc/os-release
	* I0310 21:13:27.041034   22316 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	* I0310 21:13:27.041595   22316 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	* I0310 21:13:27.041595   22316 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	* I0310 21:13:27.041595   22316 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	* I0310 21:13:27.041595   22316 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	* I0310 21:13:27.042315   22316 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	* I0310 21:13:27.044801   22316 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	* I0310 21:13:27.045903   22316 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	* I0310 21:13:27.058807   22316 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	* I0310 21:13:27.282537   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	* I0310 21:13:27.708690   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	* I0310 21:13:28.446877   22316 start.go:270] post-start completed in 2.8227839s
	* I0310 21:13:28.512700   22316 cli_runner.go:115] Run: docker container inspect -f "" false-20210310211211-6496
	* I0310 21:13:29.190137   22316 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\false-20210310211211-6496\config.json ...
	* I0310 21:13:29.242845   22316 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	* I0310 21:13:29.248980   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	* I0310 21:13:29.940534   22316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55178 SSHKeyPath:C:\Users\jenkins\.minikube\machines\false-20210310211211-6496\id_rsa Username:docker}
	* I0310 21:13:30.418039   22316 ssh_runner.go:189] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.1751962s)
	* I0310 21:13:30.418512   22316 start.go:129] duration metric: createHost completed in 1m13.5837487s
	* I0310 21:13:30.418512   22316 start.go:80] releasing machines lock for "false-20210310211211-6496", held for 1m13.5848074s
	* I0310 21:13:30.429055   22316 cli_runner.go:115] Run: docker container inspect -f "" false-20210310211211-6496
	* I0310 21:13:31.056336   22316 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	* I0310 21:13:31.069740   22316 ssh_runner.go:149] Run: systemctl --version
	* I0310 21:13:31.070658   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	* I0310 21:13:31.078759   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	* I0310 21:13:31.725436   22316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55178 SSHKeyPath:C:\Users\jenkins\.minikube\machines\false-20210310211211-6496\id_rsa Username:docker}
	* I0310 21:13:31.782793   22316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55178 SSHKeyPath:C:\Users\jenkins\.minikube\machines\false-20210310211211-6496\id_rsa Username:docker}
	* W0310 21:13:30.436709   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* W0310 21:13:30.612292   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 21:13:32.379163   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:33.006142   18444 ssh_runner.go:189] Completed: systemctl --version: (2.9092807s)
	* I0310 21:13:33.006142   18444 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (2.9135095s)
	* I0310 21:13:33.024806   18444 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	* I0310 21:13:33.414467   18444 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	* I0310 21:13:33.678401   18444 cruntime.go:206] skipping containerd shutdown because we are bound to it
	* I0310 21:13:33.690511   18444 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	* I0310 21:13:33.986977   18444 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	* image-endpoint: unix:///var/run/dockershim.sock
	* " | sudo tee /etc/crictl.yaml"
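The `printf … | sudo tee /etc/crictl.yaml` command above pins crictl to the dockershim socket. A minimal local sketch of that write, using a scratch path (`/tmp/crictl.demo.yaml` is an illustrative stand-in for `/etc/crictl.yaml`, so no sudo is needed):

```shell
#!/bin/sh
# Reproduce the crictl config written in the log, against a scratch file.
OUT=/tmp/crictl.demo.yaml
printf '%s\n' \
  'runtime-endpoint: unix:///var/run/dockershim.sock' \
  'image-endpoint: unix:///var/run/dockershim.sock' > "$OUT"
cat "$OUT"
```

Both endpoints point at the same dockershim socket, matching the two-line payload minikube pipes through `tee` in the log.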
	* I0310 21:13:34.796782   18444 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	* I0310 21:13:35.098284   18444 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	* I0310 21:13:32.747305   22316 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.6907746s)
	* I0310 21:13:32.752679   22316 ssh_runner.go:189] Completed: systemctl --version: (1.6775676s)
	* I0310 21:13:32.761954   22316 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	* I0310 21:13:33.006142   22316 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	* I0310 21:13:33.313909   22316 cruntime.go:206] skipping containerd shutdown because we are bound to it
	* I0310 21:13:33.324754   22316 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	* I0310 21:13:33.500636   22316 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	* image-endpoint: unix:///var/run/dockershim.sock
	* " | sudo tee /etc/crictl.yaml"
	* I0310 21:13:33.933231   22316 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	* I0310 21:13:34.261595   22316 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	* I0310 21:13:36.141117   22316 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.879256s)
	* I0310 21:13:36.146867   22316 ssh_runner.go:149] Run: sudo systemctl start docker
	* I0310 21:13:36.334513   22316 ssh_runner.go:149] Run: docker version --format 
	* I0310 21:13:37.715953   22316 ssh_runner.go:189] Completed: docker version --format : (1.3814424s)
	* I0310 21:13:33.678057   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:36.811079   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:38.914658   18444 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (3.8157861s)
	* I0310 21:13:38.923160   18444 ssh_runner.go:149] Run: sudo systemctl start docker
	* I0310 21:13:37.720657   22316 out.go:150] * Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	* I0310 21:13:37.728327   22316 cli_runner.go:115] Run: docker exec -t false-20210310211211-6496 dig +short host.docker.internal
	* I0310 21:13:39.192283   22316 cli_runner.go:168] Completed: docker exec -t false-20210310211211-6496 dig +short host.docker.internal: (1.4637323s)
	* I0310 21:13:39.192283   22316 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	* I0310 21:13:39.207998   22316 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	* I0310 21:13:39.285432   22316 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
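The one-liner above is minikube's idempotent hosts-entry update: strip any stale `host.minikube.internal` line, then append the current mapping. A sketch of the same pattern against a scratch copy (`/tmp/hosts.demo` stands in for `/etc/hosts`, avoiding sudo; the stale `192.168.65.9` entry is invented for the demo):

```shell
#!/bin/sh
# Idempotent hosts-entry update, mirroring the log's grep -v + append + cp.
TAB="$(printf '\t')"
HOSTS=/tmp/hosts.demo
# Seed the file with a stale host.minikube.internal mapping.
printf '127.0.0.1\tlocalhost\n192.168.65.9\thost.minikube.internal\n' > "$HOSTS"
# Drop the old line, append the fresh one, then replace the file atomically.
{ grep -v "${TAB}host\.minikube\.internal\$" "$HOSTS"
  printf '192.168.65.2\thost.minikube.internal\n'; } > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"
cat "$HOSTS"
```

Running it twice leaves exactly one `host.minikube.internal` line, which is why minikube can re-run it on every start (the preceding `grep` in the log is just a fast path that skips the rewrite when the entry already matches).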
	* I0310 21:13:39.488453   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" false-20210310211211-6496
	* I0310 21:13:40.124884   22316 localpath.go:92] copying C:\Users\jenkins\.minikube\client.crt -> C:\Users\jenkins\.minikube\profiles\false-20210310211211-6496\client.crt
	* I0310 21:13:40.135955   22316 localpath.go:117] copying C:\Users\jenkins\.minikube\client.key -> C:\Users\jenkins\.minikube\profiles\false-20210310211211-6496\client.key
	* I0310 21:13:40.135955   22316 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 21:13:40.135955   22316 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 21:13:40.149276   22316 ssh_runner.go:149] Run: docker images --format :
	* I0310 21:13:40.871022   22316 docker.go:423] Got preloaded images: 
	* I0310 21:13:40.871022   22316 docker.go:429] k8s.gcr.io/kube-proxy:v1.20.2 wasn't preloaded
	* I0310 21:13:40.887557   22316 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	* I0310 21:13:41.059625   22316 ssh_runner.go:149] Run: which lz4
	* I0310 21:13:41.168560   22316 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	* I0310 21:13:41.235262   22316 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	* I0310 21:13:41.235667   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (515083977 bytes)
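Lines above show the preload transfer pattern: `stat` the target, and only when that existence check fails (status 1, "No such file or directory") copy the ~515 MB tarball over. A local sketch of that check-then-copy flow, with scratch paths standing in for the real tarball and `/preloaded.tar.lz4` (GNU `stat -c`, as used in the log):

```shell
#!/bin/sh
# Existence check, then copy only on miss -- as in the log's stat + scp pair.
SRC=/tmp/preload.src
DST=/tmp/preload.dst
printf 'tarball-bytes' > "$SRC"
rm -f "$DST"
# stat fails (non-zero) when the target is absent; that triggers the copy.
if ! stat -c '%s %y' "$DST" >/dev/null 2>&1; then
  cp "$SRC" "$DST"   # the log does this step over SSH via scp
fi
cat "$DST"
```

On a second run `stat` succeeds and the copy is skipped, which is the whole point of the check for a half-gigabyte transfer.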
	* I0310 21:13:42.113288   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:42.865353   19328 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1m16.9669011s)
	* I0310 21:13:42.868563   19328 logs.go:122] Gathering logs for kube-apiserver [ba5aace99e81] ...
	* I0310 21:13:42.868563   19328 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 ba5aace99e81"
	* I0310 21:13:43.191321   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:44.682198   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:45.813059   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:47.104739   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:48.481166   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:50.478535   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:52.097972   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:53.167550   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:54.660908   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:56.043628   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:57.390515   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:55.365896   12868 docker.go:388] Took 86.535985 seconds to copy over tarball
	* I0310 21:13:55.386715   12868 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	* I0310 21:13:58.421032   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:14:00.744278   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:14:01.907357   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:14:04.360578   18444 ssh_runner.go:189] Completed: sudo systemctl start docker: (25.4374535s)
	* I0310 21:14:04.377902   18444 ssh_runner.go:149] Run: docker version --format 
	* I0310 21:14:06.781861   18444 ssh_runner.go:189] Completed: docker version --format : (2.4039621s)
	* I0310 21:14:02.672034   19328 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 ba5aace99e81": (19.8034991s)
	* I0310 21:14:02.704779   19328 logs.go:122] Gathering logs for container status ...
	* I0310 21:14:02.704779   19328 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	* I0310 21:14:03.282501   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:14:04.519470   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:14:06.185863   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:14:07.609393   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:14:06.792543   18444 out.go:150] * Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	* I0310 21:14:06.800238   18444 cli_runner.go:115] Run: docker exec -t embed-certs-20210310205017-6496 dig +short host.docker.internal
	* I0310 21:14:08.636285   18444 cli_runner.go:168] Completed: docker exec -t embed-certs-20210310205017-6496 dig +short host.docker.internal: (1.8358704s)
	* I0310 21:14:08.636451   18444 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	* I0310 21:14:08.655564   18444 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	* I0310 21:14:08.698946   18444 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	* I0310 21:14:08.879534   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	* I0310 21:14:09.472845   18444 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 21:14:09.473124   18444 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 21:14:09.480641   18444 ssh_runner.go:149] Run: docker images --format :
	* I0310 21:14:11.047506   18444 ssh_runner.go:189] Completed: docker images --format :: (1.5668664s)
	* I0310 21:14:11.047979   18444 docker.go:423] Got preloaded images: -- stdout --
	* k8s.gcr.io/kube-proxy:v1.20.2
	* k8s.gcr.io/kube-controller-manager:v1.20.2
	* k8s.gcr.io/kube-apiserver:v1.20.2
	* k8s.gcr.io/kube-scheduler:v1.20.2
	* minikube-local-cache-test:functional-20210120214442-10992
	* kubernetesui/dashboard:v2.1.0
	* gcr.io/k8s-minikube/storage-provisioner:v4
	* k8s.gcr.io/etcd:3.4.13-0
	* k8s.gcr.io/coredns:1.7.0
	* kubernetesui/metrics-scraper:v1.0.4
	* k8s.gcr.io/pause:3.2
	* busybox:1.28.4-glibc
	* 
	* -- /stdout --
	* I0310 21:14:11.047979   18444 docker.go:360] Images already preloaded, skipping extraction
	* I0310 21:14:11.054361   18444 ssh_runner.go:149] Run: docker images --format :
	* I0310 21:14:08.728659   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:14:10.605359   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:14:11.775469   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:14:12.426908   18444 ssh_runner.go:189] Completed: docker images --format :: (1.3725485s)
	* I0310 21:14:12.426908   18444 docker.go:423] Got preloaded images: -- stdout --
	* k8s.gcr.io/kube-proxy:v1.20.2
	* k8s.gcr.io/kube-apiserver:v1.20.2
	* k8s.gcr.io/kube-controller-manager:v1.20.2
	* k8s.gcr.io/kube-scheduler:v1.20.2
	* minikube-local-cache-test:functional-20210120214442-10992
	* kubernetesui/dashboard:v2.1.0
	* gcr.io/k8s-minikube/storage-provisioner:v4
	* k8s.gcr.io/etcd:3.4.13-0
	* k8s.gcr.io/coredns:1.7.0
	* kubernetesui/metrics-scraper:v1.0.4
	* k8s.gcr.io/pause:3.2
	* busybox:1.28.4-glibc
	* 
	* -- /stdout --
	* I0310 21:14:12.427322   18444 cache_images.go:73] Images are preloaded, skipping loading
	* I0310 21:14:12.443022   18444 ssh_runner.go:149] Run: docker info --format 
	* I0310 21:14:15.489095   18444 ssh_runner.go:189] Completed: docker info --format : (3.0460767s)
	* I0310 21:14:15.489563   18444 cni.go:74] Creating CNI manager for ""
	* I0310 21:14:15.489563   18444 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	* I0310 21:14:15.489563   18444 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	* I0310 21:14:15.489563   18444 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.97 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20210310205017-6496 NodeName:embed-certs-20210310205017-6496 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.97 CgroupDriver:cgroupfs ClientCAF
ile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	* I0310 21:14:15.490025   18444 kubeadm.go:154] kubeadm config:
	* apiVersion: kubeadm.k8s.io/v1beta2
	* kind: InitConfiguration
	* localAPIEndpoint:
	*   advertiseAddress: 192.168.49.97
	*   bindPort: 8443
	* bootstrapTokens:
	*   - groups:
	*       - system:bootstrappers:kubeadm:default-node-token
	*     ttl: 24h0m0s
	*     usages:
	*       - signing
	*       - authentication
	* nodeRegistration:
	*   criSocket: /var/run/dockershim.sock
	*   name: "embed-certs-20210310205017-6496"
	*   kubeletExtraArgs:
	*     node-ip: 192.168.49.97
	*   taints: []
	* ---
	* apiVersion: kubeadm.k8s.io/v1beta2
	* kind: ClusterConfiguration
	* apiServer:
	*   certSANs: ["127.0.0.1", "localhost", "192.168.49.97"]
	*   extraArgs:
	*     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	* controllerManager:
	*   extraArgs:
	*     allocate-node-cidrs: "true"
	*     leader-elect: "false"
	* scheduler:
	*   extraArgs:
	*     leader-elect: "false"
	* certificatesDir: /var/lib/minikube/certs
	* clusterName: mk
	* controlPlaneEndpoint: control-plane.minikube.internal:8443
	* dns:
	*   type: CoreDNS
	* etcd:
	*   local:
	*     dataDir: /var/lib/minikube/etcd
	*     extraArgs:
	*       proxy-refresh-interval: "70000"
	* kubernetesVersion: v1.20.2
	* networking:
	*   dnsDomain: cluster.local
	*   podSubnet: "10.244.0.0/16"
	*   serviceSubnet: 10.96.0.0/12
	* ---
	* apiVersion: kubelet.config.k8s.io/v1beta1
	* kind: KubeletConfiguration
	* authentication:
	*   x509:
	*     clientCAFile: /var/lib/minikube/certs/ca.crt
	* cgroupDriver: cgroupfs
	* clusterDomain: "cluster.local"
	* # disable disk resource management by default
	* imageGCHighThresholdPercent: 100
	* evictionHard:
	*   nodefs.available: "0%"
	*   nodefs.inodesFree: "0%"
	*   imagefs.available: "0%"
	* failSwapOn: false
	* staticPodPath: /etc/kubernetes/manifests
	* ---
	* apiVersion: kubeproxy.config.k8s.io/v1alpha1
	* kind: KubeProxyConfiguration
	* clusterCIDR: "10.244.0.0/16"
	* metricsBindAddress: 0.0.0.0:10249
	* 
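The generated config above commits to three address ranges: `podSubnet`/`clusterCIDR` `10.244.0.0/16`, `serviceSubnet` `10.96.0.0/12`, and node IP `192.168.49.97`. A quick sanity check of those values with the stdlib `ipaddress` module (the check itself is illustrative, not part of minikube):

```python
import ipaddress

# Ranges taken verbatim from the kubeadm config in the log above.
pod_subnet = ipaddress.ip_network("10.244.0.0/16")
service_subnet = ipaddress.ip_network("10.96.0.0/12")
node_ip = ipaddress.ip_address("192.168.49.97")

# Pod and service CIDRs must not overlap, and the node IP must sit in
# neither range -- otherwise routing inside the cluster breaks.
print(pod_subnet.overlaps(service_subnet))
print(node_ip in pod_subnet or node_ip in service_subnet)
```

Both prints should be `False` for a well-formed layout; note also that the kubeadm `podSubnet` and the KubeProxyConfiguration `clusterCIDR` in the log are the same string, as they must be.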
	* I0310 21:14:15.490025   18444 kubeadm.go:919] kubelet [Unit]
	* Wants=docker.socket
	* 
	* [Service]
	* ExecStart=
	* ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=embed-certs-20210310205017-6496 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.97
	* 
	* [Install]
	*  config:
	* {KubernetesVersion:v1.20.2 ClusterName:embed-certs-20210310205017-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	* I0310 21:14:15.500645   18444 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
	* I0310 21:14:15.750709   18444 binaries.go:44] Found k8s binaries, skipping transfer
	* I0310 21:14:15.760620   18444 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	* I0310 21:14:15.892420   18444 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (358 bytes)
	* I0310 21:14:16.255217   18444 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	* I0310 21:14:16.515128   18444 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1866 bytes)
	* I0310 21:14:15.768923   19328 ssh_runner.go:189] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (13.0641632s)
	* I0310 21:14:15.769623   19328 logs.go:122] Gathering logs for kubelet ...
	* I0310 21:14:15.769623   19328 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	* I0310 21:14:13.031040   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:14:14.854875   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:14:16.109061   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:11:53 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:14:16.936554   18444 ssh_runner.go:149] Run: grep 192.168.49.97	control-plane.minikube.internal$ /etc/hosts
	* I0310 21:14:17.051613   18444 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "192.168.49.97	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	* I0310 21:14:17.249339   18444 certs.go:52] Setting up C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496 for IP: 192.168.49.97
	* I0310 21:14:17.250054   18444 certs.go:171] skipping minikubeCA CA generation: C:\Users\jenkins\.minikube\ca.key
	* I0310 21:14:17.250374   18444 certs.go:171] skipping proxyClientCA CA generation: C:\Users\jenkins\.minikube\proxy-client-ca.key
	* I0310 21:14:17.251142   18444 certs.go:275] skipping minikube-user signed cert generation: C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\client.key
	* I0310 21:14:17.251452   18444 certs.go:275] skipping minikube signed cert generation: C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\apiserver.key.b6188fac
	* I0310 21:14:17.251761   18444 certs.go:275] skipping aggregator signed cert generation: C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\proxy-client.key
	* I0310 21:14:17.253727   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992.pem (1338 bytes)
	* W0310 21:14:17.254281   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.254457   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140.pem (1338 bytes)
	* W0310 21:14:17.254818   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.254818   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156.pem (1338 bytes)
	* W0310 21:14:17.255513   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.255694   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056.pem (1338 bytes)
	* W0310 21:14:17.255953   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.256240   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476.pem (1338 bytes)
	* W0310 21:14:17.256649   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.256874   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728.pem (1338 bytes)
	* W0310 21:14:17.257184   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.257184   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984.pem (1338 bytes)
	* W0310 21:14:17.257607   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.257607   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232.pem (1338 bytes)
	* W0310 21:14:17.258151   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.258281   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512.pem (1338 bytes)
	* W0310 21:14:17.258570   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.258570   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056.pem (1338 bytes)
	* W0310 21:14:17.259035   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.259035   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516.pem (1338 bytes)
	* W0310 21:14:17.259503   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.259503   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352.pem (1338 bytes)
	* W0310 21:14:17.260201   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.260416   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920.pem (1338 bytes)
	* W0310 21:14:17.260745   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.260745   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052.pem (1338 bytes)
	* W0310 21:14:17.261286   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.261365   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452.pem (1338 bytes)
	* W0310 21:14:17.261697   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.261972   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588.pem (1338 bytes)
	* W0310 21:14:17.262248   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.262524   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944.pem (1338 bytes)
	* W0310 21:14:17.262881   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.262881   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040.pem (1338 bytes)
	* W0310 21:14:17.262881   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.264568   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172.pem (1338 bytes)
	* W0310 21:14:17.264932   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.264932   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372.pem (1338 bytes)
	* W0310 21:14:17.265561   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.265909   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396.pem (1338 bytes)
	* W0310 21:14:17.265909   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.265909   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700.pem (1338 bytes)
	* W0310 21:14:17.277114   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.277114   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736.pem (1338 bytes)
	* W0310 21:14:17.277440   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.277830   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368.pem (1338 bytes)
	* W0310 21:14:17.278449   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.278674   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492.pem (1338 bytes)
	* W0310 21:14:17.278997   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.278997   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496.pem (1338 bytes)
	* W0310 21:14:17.279584   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.279584   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552.pem (1338 bytes)
	* W0310 21:14:17.280544   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.280544   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692.pem (1338 bytes)
	* W0310 21:14:17.281012   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.281450   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856.pem (1338 bytes)
	* W0310 21:14:17.282006   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.282585   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024.pem (1338 bytes)
	* W0310 21:14:17.283107   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.283514   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160.pem (1338 bytes)
	* W0310 21:14:17.284122   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.284334   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432.pem (1338 bytes)
	* W0310 21:14:17.284876   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.284876   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440.pem (1338 bytes)
	* W0310 21:14:17.285871   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.286115   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452.pem (1338 bytes)
	* W0310 21:14:17.286504   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.286730   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800.pem (1338 bytes)
	* W0310 21:14:17.286919   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.287264   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464.pem (1338 bytes)
	* W0310 21:14:17.288137   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.288137   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748.pem (1338 bytes)
	* W0310 21:14:17.288701   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.289113   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088.pem (1338 bytes)
	* W0310 21:14:17.289652   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.289905   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520.pem (1338 bytes)
	* W0310 21:14:17.290450   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.291084   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca-key.pem (1679 bytes)
	* I0310 21:14:17.291910   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca.pem (1078 bytes)
	* I0310 21:14:17.292477   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\cert.pem (1123 bytes)
	* I0310 21:14:17.294042   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\key.pem (1679 bytes)
	* I0310 21:14:17.302799   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	* I0310 21:14:17.611552   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	* I0310 21:14:18.027309   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	* I0310 21:14:18.697813   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	* I0310 21:14:19.297438   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	* I0310 21:14:20.100491   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	* I0310 21:14:20.663291   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	* I0310 21:14:21.082385   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	* I0310 21:14:21.506044   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4452.pem --> /usr/share/ca-certificates/4452.pem (1338 bytes)
	* I0310 21:14:19.601863   19328 ssh_runner.go:189] Completed: /bin/bash -c "sudo journalctl -u kubelet -n 400": (3.8322451s)
	* I0310 21:14:19.666624   19328 logs.go:122] Gathering logs for etcd [81a39b1bd4f1] ...
	* I0310 21:14:19.666624   19328 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 81a39b1bd4f1"
	* I0310 21:14:18.375635   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:11:53 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:14:19.414942   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:11:53 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:14:22.540932   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:11:53 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:14:21.964278   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	* I0310 21:14:22.582588   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6552.pem --> /usr/share/ca-certificates/6552.pem (1338 bytes)
	* I0310 21:14:23.269984   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3920.pem --> /usr/share/ca-certificates/3920.pem (1338 bytes)
	* I0310 21:14:23.804639   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6492.pem --> /usr/share/ca-certificates/6492.pem (1338 bytes)
	* I0310 21:14:24.325048   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7024.pem --> /usr/share/ca-certificates/7024.pem (1338 bytes)
	* I0310 21:14:25.179814   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	* I0310 21:14:25.526443   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\232.pem --> /usr/share/ca-certificates/232.pem (1338 bytes)
	* I0310 21:14:25.846574   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4944.pem --> /usr/share/ca-certificates/4944.pem (1338 bytes)
	* I0310 21:14:25.659175   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:11:53 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:14:27.265850   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:11:53 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:14:26.544098   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7432.pem --> /usr/share/ca-certificates/7432.pem (1338 bytes)
	* I0310 21:14:27.372232   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1984.pem --> /usr/share/ca-certificates/1984.pem (1338 bytes)
	* I0310 21:14:28.020652   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3516.pem --> /usr/share/ca-certificates/3516.pem (1338 bytes)
	* I0310 21:14:28.569180   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7160.pem --> /usr/share/ca-certificates/7160.pem (1338 bytes)
	* I0310 21:14:29.187113   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\2512.pem --> /usr/share/ca-certificates/2512.pem (1338 bytes)
	* I0310 21:14:29.703522   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8464.pem --> /usr/share/ca-certificates/8464.pem (1338 bytes)
	* I0310 21:14:29.940505   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6368.pem --> /usr/share/ca-certificates/6368.pem (1338 bytes)
	* I0310 21:14:30.330901   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6856.pem --> /usr/share/ca-certificates/6856.pem (1338 bytes)
	* I0310 21:14:30.761712   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1728.pem --> /usr/share/ca-certificates/1728.pem (1338 bytes)
	* I0310 21:14:31.386556   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6496.pem --> /usr/share/ca-certificates/6496.pem (1338 bytes)
	* I0310 21:14:30.236809   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": http2: server sent GOAWAY and closed the connection; LastStreamID=245, ErrCode=NO_ERROR, debug=""
	* I0310 21:14:30.518110   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:31.008764   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:31.515416   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:32.008526   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:32.511618   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:31.743984   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5372.pem --> /usr/share/ca-certificates/5372.pem (1338 bytes)
	* I0310 21:14:32.308527   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5736.pem --> /usr/share/ca-certificates/5736.pem (1338 bytes)
	* I0310 21:14:32.876666   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8748.pem --> /usr/share/ca-certificates/8748.pem (1338 bytes)
	* I0310 21:14:33.393082   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\10992.pem --> /usr/share/ca-certificates/10992.pem (1338 bytes)
	* I0310 21:14:33.869324   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\352.pem --> /usr/share/ca-certificates/352.pem (1338 bytes)
	* I0310 21:14:34.519740   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5396.pem --> /usr/share/ca-certificates/5396.pem (1338 bytes)
	* I0310 21:14:34.897144   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3056.pem --> /usr/share/ca-certificates/3056.pem (1338 bytes)
	* I0310 21:14:35.399582   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5172.pem --> /usr/share/ca-certificates/5172.pem (1338 bytes)
	* I0310 21:14:35.995819   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6692.pem --> /usr/share/ca-certificates/6692.pem (1338 bytes)
	* I0310 21:14:36.971030   19328 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 81a39b1bd4f1": (17.3044298s)
	* I0310 21:14:37.005997   19328 logs.go:122] Gathering logs for kube-scheduler [e63ae4a86183] ...
	* I0310 21:14:37.007012   19328 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 e63ae4a86183"
	* I0310 21:14:33.006554   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:33.508835   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:34.006012   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:34.508808   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:35.008527   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:35.503061   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:36.012759   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:36.507550   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:37.022896   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:37.504077   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:36.567107   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1156.pem --> /usr/share/ca-certificates/1156.pem (1338 bytes)
	* I0310 21:14:37.251245   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1476.pem --> /usr/share/ca-certificates/1476.pem (1338 bytes)
	* I0310 21:14:37.644297   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9088.pem --> /usr/share/ca-certificates/9088.pem (1338 bytes)
	* I0310 21:14:38.203137   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4052.pem --> /usr/share/ca-certificates/4052.pem (1338 bytes)
	* I0310 21:14:38.693835   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4588.pem --> /usr/share/ca-certificates/4588.pem (1338 bytes)
	* I0310 21:14:39.210694   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\12056.pem --> /usr/share/ca-certificates/12056.pem (1338 bytes)
	* I0310 21:14:39.971743   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5040.pem --> /usr/share/ca-certificates/5040.pem (1338 bytes)
	* I0310 21:14:40.475216   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\800.pem --> /usr/share/ca-certificates/800.pem (1338 bytes)
	* I0310 21:14:41.320098   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9520.pem --> /usr/share/ca-certificates/9520.pem (1338 bytes)
	* I0310 21:14:38.005885   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:38.516331   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:39.014402   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:39.512788   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:40.006823   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:40.506866   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:41.011998   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:41.501870   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:42.009052   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:42.508741   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:42.232973   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7440.pem --> /usr/share/ca-certificates/7440.pem (1338 bytes)
	* I0310 21:14:42.908772   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5700.pem --> /usr/share/ca-certificates/5700.pem (1338 bytes)
	* I0310 21:14:43.781716   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1140.pem --> /usr/share/ca-certificates/1140.pem (1338 bytes)
	* I0310 21:14:44.696746   18444 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	* I0310 21:14:45.390443   18444 ssh_runner.go:149] Run: openssl version
	* I0310 21:14:45.482437   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7440.pem && ln -fs /usr/share/ca-certificates/7440.pem /etc/ssl/certs/7440.pem"
	* I0310 21:14:45.651988   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7440.pem
	* I0310 21:14:45.772783   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 13 14:39 /usr/share/ca-certificates/7440.pem
	* I0310 21:14:45.778553   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7440.pem
	* I0310 21:14:45.843480   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7440.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:46.174305   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5700.pem && ln -fs /usr/share/ca-certificates/5700.pem /etc/ssl/certs/5700.pem"
	* I0310 21:14:46.428570   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5700.pem
	* I0310 21:14:43.012072   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:43.511188   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:44.009340   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:44.505571   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:45.007257   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:45.507791   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:46.009858   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:46.523931   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:47.010222   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:47.507744   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:46.576065   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  1 19:58 /usr/share/ca-certificates/5700.pem
	* I0310 21:14:46.593310   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5700.pem
	* I0310 21:14:46.698018   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5700.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:46.848356   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1140.pem && ln -fs /usr/share/ca-certificates/1140.pem /etc/ssl/certs/1140.pem"
	* I0310 21:14:47.115394   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1140.pem
	* I0310 21:14:47.176091   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 02:25 /usr/share/ca-certificates/1140.pem
	* I0310 21:14:47.198899   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1140.pem
	* I0310 21:14:47.340380   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1140.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:47.513949   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4452.pem && ln -fs /usr/share/ca-certificates/4452.pem /etc/ssl/certs/4452.pem"
	* I0310 21:14:47.905928   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4452.pem
	* I0310 21:14:47.951951   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 22 23:48 /usr/share/ca-certificates/4452.pem
	* I0310 21:14:47.961932   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4452.pem
	* I0310 21:14:48.057359   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4452.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:48.269099   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	* I0310 21:14:48.532324   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	* I0310 21:14:48.654474   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Jan  5 23:33 /usr/share/ca-certificates/minikubeCA.pem
	* I0310 21:14:48.663602   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	* I0310 21:14:48.758655   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	* I0310 21:14:49.078686   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6552.pem && ln -fs /usr/share/ca-certificates/6552.pem /etc/ssl/certs/6552.pem"
	* I0310 21:14:49.267316   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6552.pem
	* I0310 21:14:49.349285   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 19 22:08 /usr/share/ca-certificates/6552.pem
	* I0310 21:14:49.357390   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6552.pem
	* I0310 21:14:49.519644   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6552.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:49.651422   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1984.pem && ln -fs /usr/share/ca-certificates/1984.pem /etc/ssl/certs/1984.pem"
	* I0310 21:14:49.804006   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1984.pem
	* I0310 21:14:49.939077   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 21:55 /usr/share/ca-certificates/1984.pem
	* I0310 21:14:49.948504   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1984.pem
	* I0310 21:14:50.028179   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1984.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:50.158520   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3920.pem && ln -fs /usr/share/ca-certificates/3920.pem /etc/ssl/certs/3920.pem"
	* I0310 21:14:50.412166   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3920.pem
	* I0310 21:14:50.462274   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 22:06 /usr/share/ca-certificates/3920.pem
	* I0310 21:14:50.473068   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3920.pem
	* I0310 21:14:50.538722   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3920.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:50.720604   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6492.pem && ln -fs /usr/share/ca-certificates/6492.pem /etc/ssl/certs/6492.pem"
	* I0310 21:14:50.888196   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6492.pem
	* I0310 21:14:50.946236   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 01:11 /usr/share/ca-certificates/6492.pem
	* I0310 21:14:50.957481   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6492.pem
	* I0310 21:14:51.071753   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6492.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:51.194145   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7024.pem && ln -fs /usr/share/ca-certificates/7024.pem /etc/ssl/certs/7024.pem"
	* I0310 21:14:51.332505   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7024.pem
	* I0310 21:14:51.426975   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 23:11 /usr/share/ca-certificates/7024.pem
	* I0310 21:14:51.446515   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7024.pem
	* I0310 21:14:52.195255   19328 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 e63ae4a86183": (15.1882638s)
	* I0310 21:14:52.215689   19328 logs.go:122] Gathering logs for kube-controller-manager [f4f5dad286f7] ...
	* I0310 21:14:52.215689   19328 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 f4f5dad286f7"
	* I0310 21:14:48.005253   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:48.527681   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:49.007192   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:49.508468   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:50.009531   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:50.512473   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:51.020200   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:51.503133   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:52.006227   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:52.508045   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:51.614421   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7024.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:51.713063   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7452.pem && ln -fs /usr/share/ca-certificates/7452.pem /etc/ssl/certs/7452.pem"
	* I0310 21:14:51.995049   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7452.pem
	* I0310 21:14:52.065898   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 20 00:41 /usr/share/ca-certificates/7452.pem
	* I0310 21:14:52.075253   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7452.pem
	* I0310 21:14:52.159413   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7452.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:52.288203   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/232.pem && ln -fs /usr/share/ca-certificates/232.pem /etc/ssl/certs/232.pem"
	* I0310 21:14:52.480443   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/232.pem
	* I0310 21:14:52.576842   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 28 02:13 /usr/share/ca-certificates/232.pem
	* I0310 21:14:52.582610   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/232.pem
	* I0310 21:14:52.655285   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/232.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:52.783089   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4944.pem && ln -fs /usr/share/ca-certificates/4944.pem /etc/ssl/certs/4944.pem"
	* I0310 21:14:53.084901   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4944.pem
	* I0310 21:14:53.174625   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  9 23:40 /usr/share/ca-certificates/4944.pem
	* I0310 21:14:53.191753   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4944.pem
	* I0310 21:14:53.314555   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4944.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:53.501263   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7432.pem && ln -fs /usr/share/ca-certificates/7432.pem /etc/ssl/certs/7432.pem"
	* I0310 21:14:53.739028   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7432.pem
	* I0310 21:14:53.886737   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 17:58 /usr/share/ca-certificates/7432.pem
	* I0310 21:14:53.895999   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7432.pem
	* I0310 21:14:53.999562   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7432.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:54.199190   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3516.pem && ln -fs /usr/share/ca-certificates/3516.pem /etc/ssl/certs/3516.pem"
	* I0310 21:14:54.407107   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3516.pem
	* I0310 21:14:54.569537   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 19:10 /usr/share/ca-certificates/3516.pem
	* I0310 21:14:54.578544   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3516.pem
	* I0310 21:14:54.884058   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3516.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:55.136671   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7160.pem && ln -fs /usr/share/ca-certificates/7160.pem /etc/ssl/certs/7160.pem"
	* I0310 21:14:55.368519   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7160.pem
	* I0310 21:14:55.472906   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 12 04:51 /usr/share/ca-certificates/7160.pem
	* I0310 21:14:55.484842   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7160.pem
	* I0310 21:14:55.577120   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7160.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:55.897875   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2512.pem && ln -fs /usr/share/ca-certificates/2512.pem /etc/ssl/certs/2512.pem"
	* I0310 21:14:56.171569   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2512.pem
	* I0310 21:14:56.268744   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:32 /usr/share/ca-certificates/2512.pem
	* I0310 21:14:56.287647   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2512.pem
	* I0310 21:14:56.392836   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2512.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:53.008513   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:53.511441   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:54.014370   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:54.518560   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:55.021296   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:55.518974   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:56.009392   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:56.504675   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:57.009681   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:57.509160   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:56.624005   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8464.pem && ln -fs /usr/share/ca-certificates/8464.pem /etc/ssl/certs/8464.pem"
	* I0310 21:14:56.841062   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8464.pem
	* I0310 21:14:56.942666   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:32 /usr/share/ca-certificates/8464.pem
	* I0310 21:14:56.949144   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8464.pem
	* I0310 21:14:57.118875   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8464.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:57.244198   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8748.pem && ln -fs /usr/share/ca-certificates/8748.pem /etc/ssl/certs/8748.pem"
	* I0310 21:14:57.313203   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8748.pem
	* I0310 21:14:57.366950   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 19:09 /usr/share/ca-certificates/8748.pem
	* I0310 21:14:57.372633   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8748.pem
	* I0310 21:14:57.434836   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8748.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:57.572986   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6368.pem && ln -fs /usr/share/ca-certificates/6368.pem /etc/ssl/certs/6368.pem"
	* I0310 21:14:57.655427   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6368.pem
	* I0310 21:14:57.706486   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 19:11 /usr/share/ca-certificates/6368.pem
	* I0310 21:14:57.721801   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6368.pem
	* I0310 21:14:57.790260   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6368.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:57.938162   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6856.pem && ln -fs /usr/share/ca-certificates/6856.pem /etc/ssl/certs/6856.pem"
	* I0310 21:14:58.054647   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6856.pem
	* I0310 21:14:58.104137   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 00:21 /usr/share/ca-certificates/6856.pem
	* I0310 21:14:58.111662   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6856.pem
	* I0310 21:14:58.189212   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6856.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:58.263191   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1728.pem && ln -fs /usr/share/ca-certificates/1728.pem /etc/ssl/certs/1728.pem"
	* I0310 21:14:58.376771   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1728.pem
	* I0310 21:14:58.425770   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 20:57 /usr/share/ca-certificates/1728.pem
	* I0310 21:14:58.437632   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1728.pem
	* I0310 21:14:58.616826   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:58.715770   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6496.pem && ln -fs /usr/share/ca-certificates/6496.pem /etc/ssl/certs/6496.pem"
	* I0310 21:14:58.834428   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6496.pem
	* I0310 21:14:58.886671   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 19:16 /usr/share/ca-certificates/6496.pem
	* I0310 21:14:58.899643   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6496.pem
	* I0310 21:14:58.953679   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6496.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:59.059831   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5372.pem && ln -fs /usr/share/ca-certificates/5372.pem /etc/ssl/certs/5372.pem"
	* I0310 21:14:59.137108   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5372.pem
	* I0310 21:14:59.192036   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 23 00:40 /usr/share/ca-certificates/5372.pem
	* I0310 21:14:59.209924   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5372.pem
	* I0310 21:14:59.320419   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5372.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:59.440461   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5736.pem && ln -fs /usr/share/ca-certificates/5736.pem /etc/ssl/certs/5736.pem"
	* I0310 21:14:59.513814   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5736.pem
	* I0310 21:14:59.560093   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 25 23:18 /usr/share/ca-certificates/5736.pem
	* I0310 21:14:59.570575   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5736.pem
	* I0310 21:14:59.623821   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5736.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:59.712850   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10992.pem && ln -fs /usr/share/ca-certificates/10992.pem /etc/ssl/certs/10992.pem"
	* I0310 21:14:59.781824   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/10992.pem
	* I0310 21:14:59.823502   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 21:44 /usr/share/ca-certificates/10992.pem
	* I0310 21:14:59.851978   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10992.pem
	* I0310 21:14:59.975766   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10992.pem /etc/ssl/certs/51391683.0"
	* I0310 21:15:00.185448   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/352.pem && ln -fs /usr/share/ca-certificates/352.pem /etc/ssl/certs/352.pem"
	* I0310 21:15:00.328974   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/352.pem
	* I0310 21:15:00.393268   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 12 14:51 /usr/share/ca-certificates/352.pem
	* I0310 21:15:00.402436   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/352.pem
	* I0310 21:15:00.515467   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/352.pem /etc/ssl/certs/51391683.0"
	* I0310 21:15:00.720715   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5396.pem && ln -fs /usr/share/ca-certificates/5396.pem /etc/ssl/certs/5396.pem"
	* I0310 21:15:00.983291   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5396.pem
	* I0310 21:15:01.092070   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  8 23:38 /usr/share/ca-certificates/5396.pem
	* I0310 21:15:01.103938   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5396.pem
	* I0310 21:15:01.263896   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5396.pem /etc/ssl/certs/51391683.0"
	* I0310 21:15:01.380016   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3056.pem && ln -fs /usr/share/ca-certificates/3056.pem /etc/ssl/certs/3056.pem"
	* I0310 21:14:57.278415   22316 docker.go:388] Took 76.118939 seconds to copy over tarball
	* I0310 21:14:57.286692   22316 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4

-- /stdout --
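A note on the long run of `ssh_runner.go:149` commands above: this is minikube installing shared CA certificates inside the guest. For each `<name>.pem` it links the file into `/etc/ssl/certs`, computes its OpenSSL subject hash with `openssl x509 -hash -noout`, and symlinks `/etc/ssl/certs/<hash>.0` to the certificate so OpenSSL-based clients can look the CA up by hash. The sketch below replays that per-certificate sequence locally; the paths and the throwaway self-signed certificate are stand-ins, not the guest's real files.

```shell
#!/bin/sh
# Sketch of the per-certificate sequence repeated in the log above.
# Hypothetical paths: a temp dir stands in for /usr/share/ca-certificates
# and ./certs stands in for /etc/ssl/certs. Requires the openssl CLI.
set -e
dir=$(mktemp -d)
cd "$dir"
mkdir -p certs

# Throwaway self-signed cert standing in for e.g. 1140.pem.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=sketch" -keyout key.pem -out 1140.pem 2>/dev/null

# "test -s <pem> && ln -fs <pem> /etc/ssl/certs/<name>.pem"
test -s 1140.pem && ln -fs "$dir/1140.pem" certs/1140.pem

# "openssl x509 -hash -noout -in <pem>" -- the subject hash
# (the log's certs all hash to 51391683, hence 51391683.0).
hash=$(openssl x509 -hash -noout -in 1140.pem)

# "test -L <hash>.0 || ln -fs <pem> <hash>.0" -- the lookup symlink.
test -L "certs/$hash.0" || ln -fs "$dir/certs/1140.pem" "certs/$hash.0"

echo "linked certs/$hash.0"
```

That every `ln -fs` in the log targets the same `51391683.0` suggests the test harness's generated client certs share a subject, so they all collide on one hash filename and the link is simply re-pointed each time.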
** stderr ** 
	E0310 21:14:01.539853    5564 out.go:340] unable to execute * 2021-03-10 21:12:48.402073 W | etcdserver: request "header:<ID:4428859353621656868 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.17.0.6\" mod_revision:422 > success:<request_put:<key:\"/registry/masterleases/172.17.0.6\" value_size:65 lease:4428859353621656866 >> failure:<request_range:<key:\"/registry/masterleases/172.17.0.6\" > >>" with result "size:16" took too long (405.3185ms) to execute
	: html/template:* 2021-03-10 21:12:48.402073 W | etcdserver: request "header:<ID:4428859353621656868 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.17.0.6\" mod_revision:422 > success:<request_put:<key:\"/registry/masterleases/172.17.0.6\" value_size:65 lease:4428859353621656866 >> failure:<request_range:<key:\"/registry/masterleases/172.17.0.6\" > >>" with result "size:16" took too long (405.3185ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 21:14:01.642911    5564 out.go:340] unable to execute * 2021-03-10 21:13:41.589820 W | etcdserver: request "header:<ID:4428859353621657007 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:3d76781df4c8f5ae>" with result "size:41" took too long (151.1081ms) to execute
	: html/template:* 2021-03-10 21:13:41.589820 W | etcdserver: request "header:<ID:4428859353621657007 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:3d76781df4c8f5ae>" with result "size:41" took too long (151.1081ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 21:14:01.662599    5564 out.go:340] unable to execute * 2021-03-10 21:13:42.213630 W | etcdserver: request "header:<ID:4428859353621657010 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.17.0.6\" mod_revision:431 > success:<request_put:<key:\"/registry/masterleases/172.17.0.6\" value_size:65 lease:4428859353621657006 >> failure:<request_range:<key:\"/registry/masterleases/172.17.0.6\" > >>" with result "size:16" took too long (155.0379ms) to execute
	: html/template:* 2021-03-10 21:13:42.213630 W | etcdserver: request "header:<ID:4428859353621657010 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.17.0.6\" mod_revision:431 > success:<request_put:<key:\"/registry/masterleases/172.17.0.6\" value_size:65 lease:4428859353621657006 >> failure:<request_range:<key:\"/registry/masterleases/172.17.0.6\" > >>" with result "size:16" took too long (155.0379ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 21:14:53.998561    5564 out.go:335] unable to parse "* I0310 21:12:42.446315   18444 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 21:12:42.446315   18444 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 21:14:54.012378    5564 out.go:335] unable to parse "* I0310 21:12:43.723542   18444 cli_runner.go:168] Completed: docker system info --format \"{{json .}}\": (1.277228s)\n": template: * I0310 21:12:43.723542   18444 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.277228s)
	:1: function "json" not defined - returning raw string.
	E0310 21:14:54.066637    5564 out.go:335] unable to parse "* I0310 21:12:44.851133   18444 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 21:12:44.851133   18444 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 21:14:54.093005    5564 out.go:335] unable to parse "* I0310 21:12:45.911277   18444 cli_runner.go:168] Completed: docker system info --format \"{{json .}}\": (1.0601454s)\n": template: * I0310 21:12:45.911277   18444 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0601454s)
	:1: function "json" not defined - returning raw string.
	E0310 21:14:54.158623    5564 out.go:340] unable to execute * I0310 21:12:44.115201   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	: template: * I0310 21:12:44.115201   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	:1:96: executing "* I0310 21:12:44.115201   22316 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" false-20210310211211-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:54.180624    5564 out.go:335] unable to parse "* I0310 21:12:44.768085   22316 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55178 <nil> <nil>}\n": template: * I0310 21:12:44.768085   22316 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55178 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:14:54.329264    5564 out.go:340] unable to execute * I0310 21:12:50.336532   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	: template: * I0310 21:12:50.336532   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	:1:96: executing "* I0310 21:12:50.336532   22316 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" false-20210310211211-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:54.338240    5564 out.go:335] unable to parse "* I0310 21:12:51.030067   22316 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55178 <nil> <nil>}\n": template: * I0310 21:12:51.030067   22316 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55178 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:14:55.329338    5564 out.go:340] unable to execute * I0310 21:12:51.707710   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.707710   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.707710   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:55.350453    5564 out.go:340] unable to execute * I0310 21:12:51.708397   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.708397   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.708397   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:55.360549    5564 out.go:340] unable to execute * I0310 21:12:51.708896   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.708896   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.708896   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:55.373529    5564 out.go:340] unable to execute * I0310 21:12:51.732967   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.732967   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.732967   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:55.381359    5564 out.go:340] unable to execute * I0310 21:12:51.733363   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.733363   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.733363   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:55.393283    5564 out.go:340] unable to execute * I0310 21:12:51.733561   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.733561   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.733561   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:55.417499    5564 out.go:340] unable to execute * I0310 21:12:51.745735   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.745735   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.745735   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:55.480475    5564 out.go:340] unable to execute * I0310 21:12:51.813706   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.813706   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.813706   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:55.519582    5564 out.go:340] unable to execute * I0310 21:12:51.820771   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.820771   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.820771   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:55.540152    5564 out.go:340] unable to execute * I0310 21:12:51.827809   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.827809   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.827809   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:55.550929    5564 out.go:340] unable to execute * I0310 21:12:51.860588   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.860588   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.860588   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:55.558647    5564 out.go:340] unable to execute * I0310 21:12:51.868441   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.868441   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.868441   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:55.571470    5564 out.go:340] unable to execute * I0310 21:12:51.886285   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.886285   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.886285   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:55.579114    5564 out.go:340] unable to execute * I0310 21:12:51.908114   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.908114   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.908114   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:55.586123    5564 out.go:340] unable to execute * I0310 21:12:51.919653   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.919653   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.919653   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:55.593122    5564 out.go:340] unable to execute * I0310 21:12:51.930053   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.930053   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.930053   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:55.600114    5564 out.go:340] unable to execute * I0310 21:12:51.930053   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.930053   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.930053   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:55.607122    5564 out.go:340] unable to execute * I0310 21:12:51.945930   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.945930   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.945930   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:55.614119    5564 out.go:340] unable to execute * I0310 21:12:51.949437   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.949437   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.949437   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:55.623124    5564 out.go:340] unable to execute * I0310 21:12:51.952434   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.952434   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.952434   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:55.632388    5564 out.go:340] unable to execute * I0310 21:12:51.948834   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.948834   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.948834   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:55.642986    5564 out.go:340] unable to execute * I0310 21:12:51.953370   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.953370   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.953370   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:55.650790    5564 out.go:340] unable to execute * I0310 21:12:51.975870   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.975870   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.975870   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:55.658990    5564 out.go:340] unable to execute * I0310 21:12:51.975870   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.975870   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.975870   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:55.707764    5564 out.go:340] unable to execute * I0310 21:12:56.131918   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	: template: * I0310 21:12:56.131918   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	:1:96: executing "* I0310 21:12:56.131918   18444 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" embed-certs-20210310205017-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:55.818434    5564 out.go:340] unable to execute * I0310 21:12:54.774187   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	: template: * I0310 21:12:54.774187   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	:1:96: executing "* I0310 21:12:54.774187   22316 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" false-20210310211211-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:55.837432    5564 out.go:340] unable to execute * I0310 21:12:53.395822   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.662745s)
	: template: * I0310 21:12:53.395822   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.662745s)
	:1:102: executing "* I0310 21:12:53.395822   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.662745s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:55.849144    5564 out.go:340] unable to execute * I0310 21:12:53.485017   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.6713142s)
	: template: * I0310 21:12:53.485017   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.6713142s)
	:1:102: executing "* I0310 21:12:53.485017   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.6713142s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:55.864081    5564 out.go:340] unable to execute * I0310 21:12:53.496519   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.7629602s)
	: template: * I0310 21:12:53.496519   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.7629602s)
	:1:102: executing "* I0310 21:12:53.496519   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.7629602s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:55.873551    5564 out.go:340] unable to execute * I0310 21:12:53.512111   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.6511935s)
	: template: * I0310 21:12:53.512111   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.6511935s)
	:1:102: executing "* I0310 21:12:53.512111   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.6511935s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:55.884823    5564 out.go:340] unable to execute * I0310 21:12:53.541713   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.7209444s)
	: template: * I0310 21:12:53.541713   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.7209444s)
	:1:102: executing "* I0310 21:12:53.541713   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.7209444s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:55.894843    5564 out.go:340] unable to execute * I0310 21:12:53.590524   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8569656s)
	: template: * I0310 21:12:53.590524   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8569656s)
	:1:102: executing "* I0310 21:12:53.590524   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.8569656s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:55.906834    5564 out.go:340] unable to execute * I0310 21:12:53.621272   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.7934655s)
	: template: * I0310 21:12:53.621272   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.7934655s)
	:1:102: executing "* I0310 21:12:53.621272   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.7934655s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:55.916837    5564 out.go:340] unable to execute * I0310 21:12:53.642607   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.664234s)
	: template: * I0310 21:12:53.642607   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.664234s)
	:1:102: executing "* I0310 21:12:53.642607   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.664234s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:55.943391    5564 out.go:340] unable to execute * I0310 21:12:53.668750   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8003122s)
	: template: * I0310 21:12:53.668750   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8003122s)
	:1:102: executing "* I0310 21:12:53.668750   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.8003122s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:55.954353    5564 out.go:340] unable to execute * I0310 21:12:53.688746   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.9810395s)
	: template: * I0310 21:12:53.688746   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.9810395s)
	:1:102: executing "* I0310 21:12:53.688746   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.9810395s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:55.967989    5564 out.go:340] unable to execute * I0310 21:12:53.758147   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8047789s)
	: template: * I0310 21:12:53.758147   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8047789s)
	:1:102: executing "* I0310 21:12:53.758147   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.8047789s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:55.980780    5564 out.go:340] unable to execute * I0310 21:12:53.759520   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8398693s)
	: template: * I0310 21:12:53.759520   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8398693s)
	:1:102: executing "* I0310 21:12:53.759520   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.8398693s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:55.989802    5564 out.go:340] unable to execute * I0310 21:12:53.770222   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8401715s)
	: template: * I0310 21:12:53.770222   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8401715s)
	:1:102: executing "* I0310 21:12:53.770222   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.8401715s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:56.006384    5564 out.go:340] unable to execute * I0310 21:12:53.785284   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8990014s)
	: template: * I0310 21:12:53.785284   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8990014s)
	:1:102: executing "* I0310 21:12:53.785284   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.8990014s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:56.019494    5564 out.go:340] unable to execute * I0310 21:12:53.817210   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8868726s)
	: template: * I0310 21:12:53.817210   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8868726s)
	:1:102: executing "* I0310 21:12:53.817210   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.8868726s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:56.036517    5564 out.go:340] unable to execute * I0310 21:12:53.818574   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0728419s)
	: template: * I0310 21:12:53.818574   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0728419s)
	:1:102: executing "* I0310 21:12:53.818574   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (2.0728419s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:56.047807    5564 out.go:340] unable to execute * I0310 21:12:53.843754   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.13536s)
	: template: * I0310 21:12:53.843754   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.13536s)
	:1:102: executing "* I0310 21:12:53.843754   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (2.13536s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:56.058428    5564 out.go:340] unable to execute * I0310 21:12:53.883965   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.1750717s)
	: template: * I0310 21:12:53.883965   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.1750717s)
	:1:102: executing "* I0310 21:12:53.883965   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (2.1750717s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:56.069032    5564 out.go:340] unable to execute * I0310 21:12:53.958209   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0084306s)
	: template: * I0310 21:12:53.958209   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0084306s)
	:1:102: executing "* I0310 21:12:53.958209   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (2.0084306s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:56.085027    5564 out.go:340] unable to execute * I0310 21:12:53.959831   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0089094s)
	: template: * I0310 21:12:53.959831   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0089094s)
	:1:102: executing "* I0310 21:12:53.959831   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (2.0089094s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:56.106306    5564 out.go:340] unable to execute * I0310 21:12:53.966133   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0137022s)
	: template: * I0310 21:12:53.966133   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0137022s)
	:1:102: executing "* I0310 21:12:53.966133   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (2.0137022s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:56.122584    5564 out.go:340] unable to execute * I0310 21:12:53.994078   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0407109s)
	: template: * I0310 21:12:53.994078   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0407109s)
	:1:102: executing "* I0310 21:12:53.994078   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (2.0407109s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:56.140842    5564 out.go:340] unable to execute * I0310 21:12:54.009966   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0316693s)
	: template: * I0310 21:12:54.009966   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0316693s)
	:1:102: executing "* I0310 21:12:54.009966   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (2.0316693s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:56.166441    5564 out.go:340] unable to execute * I0310 21:12:54.009966   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.1018557s)
	: template: * I0310 21:12:54.009966   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.1018557s)
	:1:102: executing "* I0310 21:12:54.009966   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (2.1018557s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:56.199760    5564 out.go:335] unable to parse "* I0310 21:12:56.743328   18444 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55183 <nil> <nil>}\n": template: * I0310 21:12:56.743328   18444 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55183 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:14:56.243680    5564 out.go:340] unable to execute * I0310 21:12:58.267262   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	: template: * I0310 21:12:58.267262   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	:1:96: executing "* I0310 21:12:58.267262   22316 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" false-20210310211211-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:56.255528    5564 out.go:335] unable to parse "* I0310 21:12:59.007814   22316 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55178 <nil> <nil>}\n": template: * I0310 21:12:59.007814   22316 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55178 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:14:56.311597    5564 out.go:340] unable to execute * I0310 21:13:00.416655   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	: template: * I0310 21:13:00.416655   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	:1:96: executing "* I0310 21:13:00.416655   22316 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" false-20210310211211-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:56.324890    5564 out.go:335] unable to parse "* I0310 21:13:01.050764   22316 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55178 <nil> <nil>}\n": template: * I0310 21:13:01.050764   22316 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55178 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:14:56.703662    5564 out.go:340] unable to execute * I0310 21:12:57.476748   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:57.476748   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:57.476748   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:56.712318    5564 out.go:340] unable to execute * I0310 21:12:57.540705   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:57.540705   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:57.540705   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:56.724718    5564 out.go:340] unable to execute * I0310 21:12:57.562999   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:57.562999   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:57.562999   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:56.745821    5564 out.go:340] unable to execute * I0310 21:12:57.579023   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:57.579023   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:57.579023   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:56.756395    5564 out.go:340] unable to execute * I0310 21:12:57.590099   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:57.590099   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:57.590099   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:56.793007    5564 out.go:340] unable to execute * I0310 21:12:57.597202   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:57.597202   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:57.597202   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:56.807948    5564 out.go:340] unable to execute * I0310 21:12:57.601772   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:57.601772   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:57.601772   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:56.822354    5564 out.go:340] unable to execute * I0310 21:12:57.608041   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:57.608041   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:57.608041   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:56.838205    5564 out.go:340] unable to execute * I0310 21:12:57.617147   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:57.617147   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:57.617147   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:56.844055    5564 out.go:340] unable to execute * I0310 21:12:57.625602   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:57.625602   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:57.625602   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:56.876865    5564 out.go:340] unable to execute * I0310 21:12:58.501263   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0245158s)
	: template: * I0310 21:12:58.501263   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0245158s)
	:1:102: executing "* I0310 21:12:58.501263   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.0245158s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:56.892428    5564 out.go:340] unable to execute * I0310 21:12:58.563355   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0226521s)
	: template: * I0310 21:12:58.563355   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0226521s)
	:1:102: executing "* I0310 21:12:58.563355   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.0226521s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:56.901415    5564 out.go:340] unable to execute * I0310 21:12:58.637870   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0294542s)
	: template: * I0310 21:12:58.637870   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0294542s)
	:1:102: executing "* I0310 21:12:58.637870   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.0294542s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:56.913851    5564 out.go:340] unable to execute * I0310 21:12:58.653923   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0281021s)
	: template: * I0310 21:12:58.653923   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0281021s)
	:1:102: executing "* I0310 21:12:58.653923   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.0281021s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:56.936074    5564 out.go:340] unable to execute * I0310 21:12:58.679155   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0890575s)
	: template: * I0310 21:12:58.679155   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0890575s)
	:1:102: executing "* I0310 21:12:58.679155   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.0890575s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:56.960393    5564 out.go:340] unable to execute * I0310 21:12:58.686201   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0690557s)
	: template: * I0310 21:12:58.686201   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0690557s)
	:1:102: executing "* I0310 21:12:58.686201   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.0690557s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:56.986421    5564 out.go:340] unable to execute * I0310 21:12:58.753102   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.1513309s)
	: template: * I0310 21:12:58.753102   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.1513309s)
	:1:102: executing "* I0310 21:12:58.753102   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.1513309s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:57.002773    5564 out.go:340] unable to execute * I0310 21:12:58.753102   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.1559017s)
	: template: * I0310 21:12:58.753102   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.1559017s)
	:1:102: executing "* I0310 21:12:58.753102   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.1559017s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:57.334637    5564 out.go:340] unable to execute * I0310 21:13:02.971839   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	: template: * I0310 21:13:02.971839   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	:1:96: executing "* I0310 21:13:02.971839   22316 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" false-20210310211211-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:57.362842    5564 out.go:335] unable to parse "* I0310 21:13:03.617620   22316 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55178 <nil> <nil>}\n": template: * I0310 21:13:03.617620   22316 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55178 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:14:57.403600    5564 out.go:340] unable to execute * I0310 21:13:04.758785   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:13:04.758785   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:13:04.758785   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:57.428830    5564 out.go:340] unable to execute * I0310 21:13:05.285833   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:13:05.285833   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:13:05.285833   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:57.461833    5564 out.go:340] unable to execute * I0310 21:13:07.838772   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	: template: * I0310 21:13:07.838772   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	:1:96: executing "* I0310 21:13:07.838772   18444 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" embed-certs-20210310205017-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:57.472506    5564 out.go:335] unable to parse "* I0310 21:13:08.439026   18444 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55183 <nil> <nil>}\n": template: * I0310 21:13:08.439026   18444 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55183 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:14:57.616072    5564 out.go:340] unable to execute * I0310 21:13:10.751385   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	: template: * I0310 21:13:10.751385   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	:1:96: executing "* I0310 21:13:10.751385   18444 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" embed-certs-20210310205017-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:57.667380    5564 out.go:340] unable to execute * I0310 21:13:16.120584   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	: template: * I0310 21:13:16.120584   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	:1:96: executing "* I0310 21:13:16.120584   18444 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" embed-certs-20210310205017-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:57.686408    5564 out.go:340] unable to execute * I0310 21:13:15.101929   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:13:15.101929   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:13:15.101929   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:57.704933    5564 out.go:335] unable to parse "* I0310 21:13:16.714294   18444 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55183 <nil> <nil>}\n": template: * I0310 21:13:16.714294   18444 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55183 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:14:57.737053    5564 out.go:340] unable to execute * I0310 21:13:18.742632   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	: template: * I0310 21:13:18.742632   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	:1:96: executing "* I0310 21:13:18.742632   18444 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" embed-certs-20210310205017-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:57.747482    5564 out.go:335] unable to parse "* I0310 21:13:19.364241   18444 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55183 <nil> <nil>}\n": template: * I0310 21:13:19.364241   18444 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55183 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:14:58.210203    5564 out.go:340] unable to execute * I0310 21:13:21.308894   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	: template: * I0310 21:13:21.308894   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	:1:96: executing "* I0310 21:13:21.308894   18444 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" embed-certs-20210310205017-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:58.230497    5564 out.go:340] unable to execute * I0310 21:13:18.512811   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:13:18.512811   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:13:18.512811   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:58.248211    5564 out.go:340] unable to execute * I0310 21:13:19.466948   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:13:19.466948   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:13:19.466948   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:58.272206    5564 out.go:335] unable to parse "* I0310 21:13:21.935488   18444 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55183 <nil> <nil>}\n": template: * I0310 21:13:21.935488   18444 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55183 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:14:58.312787    5564 out.go:340] unable to execute * I0310 21:13:24.039903   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	: template: * I0310 21:13:24.039903   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	:1:96: executing "* I0310 21:13:24.039903   18444 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" embed-certs-20210310205017-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:58.762782    5564 out.go:340] unable to execute * I0310 21:13:25.645133   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	: template: * I0310 21:13:25.645133   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	:1:96: executing "* I0310 21:13:25.645133   22316 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" false-20210310211211-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:58.781501    5564 out.go:340] unable to execute * I0310 21:13:22.321052   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:13:22.321052   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:13:22.321052   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:58.811431    5564 out.go:340] unable to execute * I0310 21:13:22.814336   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:13:22.814336   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:13:22.814336   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:58.856476    5564 out.go:340] unable to execute * I0310 21:13:25.102916   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:13:25.102916   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:13:25.102916   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:58.888080    5564 out.go:340] unable to execute * I0310 21:13:25.214321   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:13:25.214321   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:13:25.214321   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:58.898266    5564 out.go:340] unable to execute * I0310 21:13:25.217320   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:13:25.217320   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:13:25.217320   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:58.931360    5564 out.go:340] unable to execute * I0310 21:13:27.894704   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	: template: * I0310 21:13:27.894704   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	:1:96: executing "* I0310 21:13:27.894704   18444 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" embed-certs-20210310205017-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:58.977816    5564 out.go:340] unable to execute * I0310 21:13:30.096865   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	: template: * I0310 21:13:30.096865   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	:1:96: executing "* I0310 21:13:30.096865   18444 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" embed-certs-20210310205017-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:58.988620    5564 out.go:340] unable to execute * I0310 21:13:30.111612   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	: template: * I0310 21:13:30.111612   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	:1:96: executing "* I0310 21:13:30.111612   18444 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" embed-certs-20210310205017-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:59.067844    5564 out.go:340] unable to execute * I0310 21:13:29.248980   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	: template: * I0310 21:13:29.248980   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	:1:96: executing "* I0310 21:13:29.248980   22316 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" false-20210310211211-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:59.106267    5564 out.go:340] unable to execute * I0310 21:13:31.070658   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	: template: * I0310 21:13:31.070658   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	:1:96: executing "* I0310 21:13:31.070658   22316 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" false-20210310211211-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:59.113267    5564 out.go:340] unable to execute * I0310 21:13:31.078759   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	: template: * I0310 21:13:31.078759   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	:1:96: executing "* I0310 21:13:31.078759   22316 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" false-20210310211211-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:59.308249    5564 out.go:340] unable to execute * I0310 21:13:39.488453   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" false-20210310211211-6496
	: template: * I0310 21:13:39.488453   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" false-20210310211211-6496
	:1:96: executing "* I0310 21:13:39.488453   22316 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"8443/tcp\") 0).HostPort}}'\" false-20210310211211-6496\n" at <index .NetworkSettings.Ports "8443/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:14:59.501817    5564 out.go:340] unable to execute * I0310 21:14:08.879534   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	: template: * I0310 21:14:08.879534   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	:1:96: executing "* I0310 21:14:08.879534   18444 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"8443/tcp\") 0).HostPort}}'\" embed-certs-20210310205017-6496\n" at <index .NetworkSettings.Ports "8443/tcp">: error calling index: index of untyped nil - returning raw string.
** /stderr **
helpers_test.go:250: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p kubernetes-upgrade-20210310201637-6496 -n kubernetes-upgrade-20210310201637-6496
helpers_test.go:250: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p kubernetes-upgrade-20210310201637-6496 -n kubernetes-upgrade-20210310201637-6496: (9.835977s)
helpers_test.go:257: (dbg) Run:  kubectl --context kubernetes-upgrade-20210310201637-6496 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:263: non-running pods: 
helpers_test.go:265: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:268: (dbg) Run:  kubectl --context kubernetes-upgrade-20210310201637-6496 describe pod 
helpers_test.go:268: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-20210310201637-6496 describe pod : exit status 1 (228.9764ms)
** stderr ** 
	error: resource name may not be empty
** /stderr **
helpers_test.go:270: kubectl --context kubernetes-upgrade-20210310201637-6496 describe pod : exit status 1
helpers_test.go:171: Cleaning up "kubernetes-upgrade-20210310201637-6496" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-20210310201637-6496
=== CONT  TestKubernetesUpgrade
helpers_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-20210310201637-6496: (31.5702542s)
--- FAIL: TestKubernetesUpgrade (3548.65s)
TestMissingContainerUpgrade (3566.06s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:305: (dbg) Run:  C:\Users\jenkins\AppData\Local\Temp\minikube-v1.9.1.748468123.exe start -p missing-upgrade-20210310201637-6496 --memory=2200 --driver=docker
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:305: (dbg) Non-zero exit: C:\Users\jenkins\AppData\Local\Temp\minikube-v1.9.1.748468123.exe start -p missing-upgrade-20210310201637-6496 --memory=2200 --driver=docker: exit status 1 (54m59.3367953s)
-- stdout --
	* [missing-upgrade-20210310201637-6496] minikube v1.9.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - MINIKUBE_LOCATION=10722
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-20210310201637-6496
	* Pulling base image ...
	* Creating Kubernetes in docker container with (CPUs=2) (4 available), Memory=2200MB (20001MB available) ...
	* Preparing Kubernetes v1.18.0 on Docker 19.03.2 ...
	  - kubeadm.pod-network-cidr=10.244.0.0/16
-- /stdout --
** stderr ** 
	! Executing "docker inspect -f {{.State.Status}} kubernetes-upgrade-20210310201637-6496" took an unusually long time: 15.0171338s
	* Restarting the docker service may improve performance.
	E0310 20:50:43.424553    3032 cache_images.go:196] error getting status for kubernetes-upgrade-20210310201637-6496: state: "docker inspect -f {{.State.Status}} kubernetes-upgrade-20210310201637-6496" timed out after 15s
	E0310 21:06:22.621982    3032 cache_images.go:186] Failed to load profile "nospam-20210310201637-6496": cluster "nospam-20210310201637-6496" does not exist
	E0310 21:06:22.621982    3032 cache_images.go:186] Failed to load profile "offline-docker-20210310201637-6496": cluster "offline-docker-20210310201637-6496" does not exist
	E0310 21:06:22.621982    3032 cache_images.go:186] Failed to load profile "pause-20210310201637-6496": cluster "pause-20210310201637-6496" does not exist
** /stderr **
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:311: release start failed: exit status 1
=== CONT  TestMissingContainerUpgrade
panic.go:617: *** TestMissingContainerUpgrade FAILED at 2021-03-10 21:11:38.3276164 +0000 GMT m=+7638.034830601
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:226: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
=== CONT  TestMissingContainerUpgrade
helpers_test.go:227: (dbg) Run:  docker inspect missing-upgrade-20210310201637-6496
helpers_test.go:231: (dbg) docker inspect missing-upgrade-20210310201637-6496:
-- stdout --
	[
	    {
	        "Id": "48ce57e7600b7b61c500fc1034576e34a7846c19647a8b95e7d8de295f9e6cb0",
	        "Created": "2021-03-10T20:16:51.1499464Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 123570,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-03-10T20:16:58.0389549Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/48ce57e7600b7b61c500fc1034576e34a7846c19647a8b95e7d8de295f9e6cb0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/48ce57e7600b7b61c500fc1034576e34a7846c19647a8b95e7d8de295f9e6cb0/hostname",
	        "HostsPath": "/var/lib/docker/containers/48ce57e7600b7b61c500fc1034576e34a7846c19647a8b95e7d8de295f9e6cb0/hosts",
	        "LogPath": "/var/lib/docker/containers/48ce57e7600b7b61c500fc1034576e34a7846c19647a8b95e7d8de295f9e6cb0/48ce57e7600b7b61c500fc1034576e34a7846c19647a8b95e7d8de295f9e6cb0-json.log",
	        "Name": "/missing-upgrade-20210310201637-6496",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "missing-upgrade-20210310201637-6496:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/399c8b16ec3a744fa627f794e24d5e3be95e698bd05c7a8a706bcc764006021f-init/diff:/var/lib/docker/overlay2/2729176fde680c6cb08937895942a79a6e1c51f76a7e3d5a84446500dc7b8558/diff:/var/lib/docker/overlay2/7c9c28574388f9d00d1444994b9fac28dd5229be9db522b8fe1138f91c318581/diff:/var/lib/docker/overlay2/bfe3c6f5e76dfddfd1396f0c4d324e6dc9b79481460c422e870f3aed43ae3cf3/diff:/var/lib/docker/overlay2/30be4e7f7ffba8ec1459948336cdb1dca09bfe65ab6553e80ef07bdeb527ec2a/diff:/var/lib/docker/overlay2/36b90e9a4385265e460c6307dbdd0daa90a2fdeaacfe20bbef2bad23b55b4d2e/diff:/var/lib/docker/overlay2/00989c545c01fb65b678eeafb78d6a826c0ea44eac1434d39208fc61f165e39c/diff:/var/lib/docker/overlay2/bec75261a3574da4507178c8c4be6e1fe2fe025adb8409d6135c2a51e7b055c3/diff:/var/lib/docker/overlay2/3e7fcf6b9f34e816ba41539a5349e02d557546a72903bae043a02043363d438c/diff:/var/lib/docker/overlay2/acb3a94565839ba4000a418d126e0460fecbbee87faff0e780621158cb84d9f9/diff:/var/lib/docker/overlay2/852e9213d6c3eccb639a71fc37949c07c1a61da4dd20607d61913aff9225a6e0/diff:/var/lib/docker/overlay2/de25f76aa2370f98adb37e218d204f48446a17e2088c78db6acb5861eb48ed50/diff:/var/lib/docker/overlay2/702c953985963f07f226317b76356bf7129c55d6a6e875db66ae6a265b517fa2/diff:/var/lib/docker/overlay2/a8469dfed234d672c3c9c64eceea846bd579dd70f971f5b8ab0c054ceb0cf632/diff:/var/lib/docker/overlay2/1cefd8009a19e5ea459a4780caf76e5c64c903502e8f88ba66d3ce31345a23bc/diff:/var/lib/docker/overlay2/899a9e14d45cfb2e470d1b6815c0f2fed6fe21debecf41c8d4c8722c4b747539/diff:/var/lib/docker/overlay2/c31da9bfc2aec7af749ca6c7314ebccfba8c3ea4bb448085be52da55bf8e9d6c/diff:/var/lib/docker/overlay2/bd4bc4fdf15943a4aaa627f39b2cb34c636430c6c59b009f1d0f22c237e054d8/diff:/var/lib/docker/overlay2/8d04bed8e06519ceda02f8b604c41bd7614c4b7ee962d0d857f125dd1b87ad9b/diff:/var/lib/docker/overlay2/5d99a9bc3a0f4170e7dbfad9ac526ced98e84efe73cf5ac92c0f2813c82eae3f/diff:/var/lib/docker/overlay2/6d5b862d76f0de854698cdcfc250f5ee5afacb30fb276e4d37bf82813c1f7cf5/diff:/var/lib/docker/overlay2/2b5c29e75384b18c2a6f85cb026a640ee3ba7885ae74bdde2722709a474a19b6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/399c8b16ec3a744fa627f794e24d5e3be95e698bd05c7a8a706bcc764006021f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/399c8b16ec3a744fa627f794e24d5e3be95e698bd05c7a8a706bcc764006021f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/399c8b16ec3a744fa627f794e24d5e3be95e698bd05c7a8a706bcc764006021f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-20210310201637-6496",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-20210310201637-6496/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-20210310201637-6496",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-20210310201637-6496",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-20210310201637-6496",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "be8d678c3efd199418fd2cec73378dc12e98facf3218cd43065b8c3f554f10c8",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55082"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55081"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55080"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/be8d678c3efd",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "972ca9309e2f563dfb85133ae4ff9ae200fd9f08eee4aa9f9dd1123d888e9ad4",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "08a9fabe5ecafdd8ee96dd0a14d1906037e82623ac7365c62552213da5f59cf7",
	                    "EndpointID": "972ca9309e2f563dfb85133ae4ff9ae200fd9f08eee4aa9f9dd1123d888e9ad4",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p missing-upgrade-20210310201637-6496 -n missing-upgrade-20210310201637-6496
=== CONT  TestMissingContainerUpgrade
helpers_test.go:235: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p missing-upgrade-20210310201637-6496 -n missing-upgrade-20210310201637-6496: (10.1074964s)
helpers_test.go:240: <<< TestMissingContainerUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:241: ======>  post-mortem[TestMissingContainerUpgrade]: minikube logs <======
helpers_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe -p missing-upgrade-20210310201637-6496 logs -n 25
=== CONT  TestMissingContainerUpgrade
helpers_test.go:243: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p missing-upgrade-20210310201637-6496 logs -n 25: exit status 110 (3m46.8564072s)
-- stdout --
	* ==> Docker <==
	* -- Logs begin at Wed 2021-03-10 20:17:01 UTC, end at Wed 2021-03-10 21:13:06 UTC. --
	* Mar 10 20:44:04 missing-upgrade-20210310201637-6496 dockerd[510]: time="2021-03-10T20:44:04.420811100Z" level=info msg="shim containerd-shim started" address=/containerd-shim/a9617070dbff3f5cd429c63e372a5cab06ac5d3a4ceb30f8fe3f8279aa548a4e.sock debug=false pid=9580
	* Mar 10 20:44:56 missing-upgrade-20210310201637-6496 dockerd[510]: time="2021-03-10T20:44:56.976023600Z" level=info msg="shim reaped" id=6c4dcfc6ff2fe6fa6d57f0941ea75a990db37d672b1f2ae39e5f7753d72c13e4
	* Mar 10 20:44:56 missing-upgrade-20210310201637-6496 dockerd[510]: time="2021-03-10T20:44:56.977901000Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 20:44:56 missing-upgrade-20210310201637-6496 dockerd[510]: time="2021-03-10T20:44:56.978225900Z" level=warning msg="6c4dcfc6ff2fe6fa6d57f0941ea75a990db37d672b1f2ae39e5f7753d72c13e4 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/6c4dcfc6ff2fe6fa6d57f0941ea75a990db37d672b1f2ae39e5f7753d72c13e4/mounts/shm, flags: 0x2: no such file or directory"
	* Mar 10 20:45:17 missing-upgrade-20210310201637-6496 dockerd[510]: time="2021-03-10T20:45:17.620428400Z" level=info msg="shim containerd-shim started" address=/containerd-shim/aaa5fa5776d38194e741ef0bce15080bcd4decefcfed6f0bf6dcf81e340aaecc.sock debug=false pid=9778
	* Mar 10 20:47:25 missing-upgrade-20210310201637-6496 dockerd[510]: time="2021-03-10T20:47:25.789943200Z" level=info msg="shim reaped" id=f67ad6c90f39c66da1894c9f1d4de8c937f08df927068a6fbde6864fef599119
	* Mar 10 20:47:25 missing-upgrade-20210310201637-6496 dockerd[510]: time="2021-03-10T20:47:25.868336200Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 20:47:25 missing-upgrade-20210310201637-6496 dockerd[510]: time="2021-03-10T20:47:25.907826100Z" level=warning msg="f67ad6c90f39c66da1894c9f1d4de8c937f08df927068a6fbde6864fef599119 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/f67ad6c90f39c66da1894c9f1d4de8c937f08df927068a6fbde6864fef599119/mounts/shm, flags: 0x2: no such file or directory"
	* Mar 10 20:47:52 missing-upgrade-20210310201637-6496 dockerd[510]: time="2021-03-10T20:47:52.747625600Z" level=info msg="shim containerd-shim started" address=/containerd-shim/bf0874b69c91d36ec61bb7a424ea8d3d188d206cf414e3499a5071cda115b267.sock debug=false pid=10114
	* Mar 10 20:52:22 missing-upgrade-20210310201637-6496 dockerd[510]: time="2021-03-10T20:52:22.449351400Z" level=info msg="shim reaped" id=e51a4d035ea663c6872629fef59d697f4aab75ca356c1d90359a3836f9e0ab83
	* Mar 10 20:52:22 missing-upgrade-20210310201637-6496 dockerd[510]: time="2021-03-10T20:52:22.561112400Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 20:52:22 missing-upgrade-20210310201637-6496 dockerd[510]: time="2021-03-10T20:52:22.575210900Z" level=warning msg="e51a4d035ea663c6872629fef59d697f4aab75ca356c1d90359a3836f9e0ab83 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/e51a4d035ea663c6872629fef59d697f4aab75ca356c1d90359a3836f9e0ab83/mounts/shm, flags: 0x2: no such file or directory"
	* Mar 10 20:52:23 missing-upgrade-20210310201637-6496 dockerd[510]: time="2021-03-10T20:52:23.115109700Z" level=info msg="shim reaped" id=d2867eba344c4e2fb6f4b2976d7e4d1ee63166c83fc5e89ce2d7f050800c9d48
	* Mar 10 20:52:23 missing-upgrade-20210310201637-6496 dockerd[510]: time="2021-03-10T20:52:23.437969100Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 20:52:23 missing-upgrade-20210310201637-6496 dockerd[510]: time="2021-03-10T20:52:23.438235500Z" level=warning msg="d2867eba344c4e2fb6f4b2976d7e4d1ee63166c83fc5e89ce2d7f050800c9d48 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/d2867eba344c4e2fb6f4b2976d7e4d1ee63166c83fc5e89ce2d7f050800c9d48/mounts/shm, flags: 0x2: no such file or directory"
	* Mar 10 20:53:00 missing-upgrade-20210310201637-6496 dockerd[510]: time="2021-03-10T20:53:00.757006000Z" level=info msg="shim containerd-shim started" address=/containerd-shim/699834eacbb62f4765b983c103cd7cb2ea50dd0c702485c605559e2d3c7651bc.sock debug=false pid=11269
	* Mar 10 20:53:10 missing-upgrade-20210310201637-6496 dockerd[510]: time="2021-03-10T20:53:10.204701400Z" level=info msg="shim containerd-shim started" address=/containerd-shim/57ddb79898c5561844b7d376dc332a161e412e090f7ab81d310e714801daf91b.sock debug=false pid=11296
	* Mar 10 20:55:52 missing-upgrade-20210310201637-6496 dockerd[510]: time="2021-03-10T20:55:52.497528100Z" level=info msg="shim reaped" id=1b3822abb4bd44a4a28568f58b3d515cc36be32fa2615cdd746a2c31b5c4d3b8
	* Mar 10 20:55:52 missing-upgrade-20210310201637-6496 dockerd[510]: time="2021-03-10T20:55:52.709578400Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 20:55:52 missing-upgrade-20210310201637-6496 dockerd[510]: time="2021-03-10T20:55:52.727090900Z" level=warning msg="1b3822abb4bd44a4a28568f58b3d515cc36be32fa2615cdd746a2c31b5c4d3b8 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/1b3822abb4bd44a4a28568f58b3d515cc36be32fa2615cdd746a2c31b5c4d3b8/mounts/shm, flags: 0x2: no such file or directory"
	* Mar 10 20:56:25 missing-upgrade-20210310201637-6496 dockerd[510]: time="2021-03-10T20:56:25.179360200Z" level=info msg="shim containerd-shim started" address=/containerd-shim/4bd617251d16e18ff47034d0f2c6a3fb3f1e5b0de9e93190c4db9ac44c618eac.sock debug=false pid=11689
	* Mar 10 20:59:05 missing-upgrade-20210310201637-6496 dockerd[510]: time="2021-03-10T20:59:05.678236600Z" level=info msg="shim reaped" id=91345ca66b20337d4801d1fba1b04a7c1a2a7dbda7c7b1f9fd0806064c6bccdd
	* Mar 10 20:59:05 missing-upgrade-20210310201637-6496 dockerd[510]: time="2021-03-10T20:59:05.714539200Z" level=warning msg="91345ca66b20337d4801d1fba1b04a7c1a2a7dbda7c7b1f9fd0806064c6bccdd cleanup: failed to unmount IPC: umount /var/lib/docker/containers/91345ca66b20337d4801d1fba1b04a7c1a2a7dbda7c7b1f9fd0806064c6bccdd/mounts/shm, flags: 0x2: no such file or directory"
	* Mar 10 20:59:05 missing-upgrade-20210310201637-6496 dockerd[510]: time="2021-03-10T20:59:05.715981000Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 20:59:15 missing-upgrade-20210310201637-6496 dockerd[510]: time="2021-03-10T20:59:15.959851100Z" level=info msg="shim containerd-shim started" address=/containerd-shim/ea1687040ef70023cb5801030c0cef16d6730bc2096ee34930341b55b3fe4727.sock debug=false pid=12868
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	* 216ab7b1b4235       a31f78c7c8ce1       14 minutes ago      Running             kube-scheduler            4                   c0c08a447f7cf
	* 984ddfe94ab92       d3e55153f52fb       17 minutes ago      Running             kube-controller-manager   5                   8bab31cce1290
	* 91345ca66b203       a31f78c7c8ce1       20 minutes ago      Exited              kube-scheduler            3                   c0c08a447f7cf
	* 61e43fe126276       67da37a9a360e       44 minutes ago      Running             coredns                   0                   c09cd8f87c8f4
	* fde5da8b2edf1       67da37a9a360e       44 minutes ago      Running             coredns                   0                   e180152c1f669
	* 34d0135bfde5b       43940c34f24f3       44 minutes ago      Running             kube-proxy                0                   eacfaa34581cb
	* 786932ac92ee4       aa67fec7d7ef7       44 minutes ago      Running             kindnet-cni               0                   8e091b68f140d
	* 6b670bde97e2c       303ce5db0e90d       48 minutes ago      Running             etcd                      0                   b75de1b217805
	* 5451e73fc7682       a31f78c7c8ce1       48 minutes ago      Running             kube-scheduler            0                   c0c08a447f7cf
	* 7857290de2f18       74060cea7f704       48 minutes ago      Running             kube-apiserver            0                   f94fbc4a2b8ec
	* aac66ca2a5108       74060cea7f704       48 minutes ago      Created             kube-apiserver            0                   f94fbc4a2b8ec
	* 4f764004a52ab       303ce5db0e90d       48 minutes ago      Created             etcd                      0                   b75de1b217805
	* 
	* ==> coredns [61e43fe12627] <==
	* I0310 20:30:31.317072       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2021-03-10 20:30:10.2838716 +0000 UTC m=+0.653849901) (total time: 21.0118843s):
	* Trace[2019727887]: [21.0118843s] [21.0118843s] END
	* E0310 20:30:31.317135       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
	* I0310 20:30:31.318250       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2021-03-10 20:30:10.29669 +0000 UTC m=+0.666668101) (total time: 21.0215328s):
	* Trace[1427131847]: [21.0215328s] [21.0215328s] END
	* E0310 20:30:31.318267       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
	* I0310 20:30:31.323803       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2021-03-10 20:30:10.2917487 +0000 UTC m=+0.661726701) (total time: 21.0320266s):
	* Trace[939984059]: [21.0320266s] [21.0320266s] END
	* E0310 20:30:31.323819       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
	* CoreDNS-1.6.7
	* linux/amd64, go1.13.6, da7f65b
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* 
	* ==> coredns [fde5da8b2edf] <==
	* I0310 20:30:13.621965       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2021-03-10 20:29:52.6143646 +0000 UTC m=+0.636508801) (total time: 21.0078655s):
	* Trace[2019727887]: [21.0078655s] [21.0078655s] END
	* E0310 20:30:13.622033       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
	* I0310 20:30:13.622347       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2021-03-10 20:29:52.6156098 +0000 UTC m=+0.637754001) (total time: 21.0074018s):
	* Trace[1427131847]: [21.0074018s] [21.0074018s] END
	* E0310 20:30:13.622363       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
	* I0310 20:30:13.670028       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2021-03-10 20:29:52.615423 +0000 UTC m=+0.637567201) (total time: 21.0222494s):
	* Trace[939984059]: [21.0222494s] [21.0222494s] END
	* E0310 20:30:13.776250       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
	* I0310 20:30:35.632052       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2021-03-10 20:30:14.6226409 +0000 UTC m=+22.645471801) (total time: 21.009324s):
	* Trace[911902081]: [21.009324s] [21.009324s] END
	* E0310 20:30:35.632088       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
	* I0310 20:30:35.715935       1 trace.go:116] Trace[1474941318]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2021-03-10 20:30:14.6592005 +0000 UTC m=+22.682031501) (total time: 21.0566823s):
	* Trace[1474941318]: [21.0566823s] [21.0566823s] END
	* E0310 20:30:35.715968       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
	* I0310 20:30:35.921994       1 trace.go:116] Trace[140954425]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2021-03-10 20:30:14.9129678 +0000 UTC m=+22.935798801) (total time: 21.0089802s):
	* Trace[140954425]: [21.0089802s] [21.0089802s] END
	* E0310 20:30:35.922029       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
	* CoreDNS-1.6.7
	* linux/amd64, go1.13.6, da7f65b
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* 
	* ==> describe nodes <==
	* Name:               missing-upgrade-20210310201637-6496
	* Roles:              master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=missing-upgrade-20210310201637-6496
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=d8747aec7ebf8332ddae276d5f8fb42d3152b5a1
	*                     minikube.k8s.io/name=missing-upgrade-20210310201637-6496
	*                     minikube.k8s.io/updated_at=2021_03_10T20_27_13_0700
	*                     minikube.k8s.io/version=v1.9.1
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Wed, 10 Mar 2021 20:26:36 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  missing-upgrade-20210310201637-6496
	*   AcquireTime:     <unset>
	*   RenewTime:       Wed, 10 Mar 2021 21:13:55 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Wed, 10 Mar 2021 21:09:49 +0000   Wed, 10 Mar 2021 20:26:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Wed, 10 Mar 2021 21:09:49 +0000   Wed, 10 Mar 2021 20:26:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Wed, 10 Mar 2021 21:09:49 +0000   Wed, 10 Mar 2021 20:26:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Wed, 10 Mar 2021 21:09:49 +0000   Wed, 10 Mar 2021 21:04:45 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  172.17.0.2
	*   Hostname:    missing-upgrade-20210310201637-6496
	* Capacity:
	*   cpu:                4
	*   ephemeral-storage:  65792556Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             20481980Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                4
	*   ephemeral-storage:  65792556Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             20481980Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 7c0da197ea194f389a87a0dd6574ac64
	*   System UUID:                78ec4110-44df-4818-9b79-cb20e162cea7
	*   Boot ID:                    1e43cb90-c73a-415b-9855-33dabbdc5a83
	*   Kernel Version:             4.19.121-linuxkit
	*   OS Image:                   Ubuntu 19.10
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://19.3.2
	*   Kubelet Version:            v1.18.0
	*   Kube-Proxy Version:         v1.18.0
	* PodCIDR:                      10.244.0.0/24
	* PodCIDRs:                     10.244.0.0/24
	* Non-terminated Pods:          (8 in total)
	*   Namespace                   Name                                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                                                           ------------  ----------  ---------------  -------------  ---
	*   kube-system                 coredns-66bff467f8-9xcz4                                       100m (2%)     0 (0%)      70Mi (0%)        170Mi (0%)     46m
	*   kube-system                 coredns-66bff467f8-m5cwx                                       100m (2%)     0 (0%)      70Mi (0%)        170Mi (0%)     46m
	*   kube-system                 etcd-missing-upgrade-20210310201637-6496                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         46m
	*   kube-system                 kindnet-88p5j                                                  100m (2%)     100m (2%)   50Mi (0%)        50Mi (0%)      46m
	*   kube-system                 kube-apiserver-missing-upgrade-20210310201637-6496             250m (6%)     0 (0%)      0 (0%)           0 (0%)         46m
	*   kube-system                 kube-controller-manager-missing-upgrade-20210310201637-6496    200m (5%)     0 (0%)      0 (0%)           0 (0%)         46m
	*   kube-system                 kube-proxy-77xdh                                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         46m
	*   kube-system                 kube-scheduler-missing-upgrade-20210310201637-6496             100m (2%)     0 (0%)      0 (0%)           0 (0%)         46m
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests    Limits
	*   --------           --------    ------
	*   cpu                850m (21%)  100m (2%)
	*   memory             190Mi (0%)  390Mi (1%)
	*   ephemeral-storage  0 (0%)      0 (0%)
	*   hugepages-1Gi      0 (0%)      0 (0%)
	*   hugepages-2Mi      0 (0%)      0 (0%)
	* Events:
	*   Type    Reason                   Age                  From                                             Message
	*   ----    ------                   ----                 ----                                             -------
	*   Normal  NodeHasSufficientPID     47m (x8 over 48m)    kubelet, missing-upgrade-20210310201637-6496     Node missing-upgrade-20210310201637-6496 status is now: NodeHasSufficientPID
	*   Normal  Starting                 46m                  kubelet, missing-upgrade-20210310201637-6496     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  46m                  kubelet, missing-upgrade-20210310201637-6496     Node missing-upgrade-20210310201637-6496 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    46m                  kubelet, missing-upgrade-20210310201637-6496     Node missing-upgrade-20210310201637-6496 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     46m                  kubelet, missing-upgrade-20210310201637-6496     Node missing-upgrade-20210310201637-6496 status is now: NodeHasSufficientPID
	*   Normal  NodeAllocatableEnforced  46m                  kubelet, missing-upgrade-20210310201637-6496     Updated Node Allocatable limit across pods
	*   Normal  Starting                 43m                  kube-proxy, missing-upgrade-20210310201637-6496  Starting kube-proxy.
	*   Normal  NodeNotReady             9m46s (x2 over 46m)  kubelet, missing-upgrade-20210310201637-6496     Node missing-upgrade-20210310201637-6496 status is now: NodeNotReady
	*   Normal  NodeReady                9m22s (x2 over 46m)  kubelet, missing-upgrade-20210310201637-6496     Node missing-upgrade-20210310201637-6496 status is now: NodeReady
	* 
	* ==> dmesg <==
	* [  +0.000006]  __hrtimer_run_queues+0x117/0x1c4
	* [  +0.000004]  ? ktime_get_update_offsets_now+0x36/0x95
	* [  +0.000002]  hrtimer_interrupt+0x92/0x165
	* [  +0.000004]  hv_stimer0_isr+0x20/0x2d
	* [  +0.000008]  hv_stimer0_vector_handler+0x3b/0x57
	* [  +0.000010]  hv_stimer0_callback_vector+0xf/0x20
	* [  +0.000001]  </IRQ>
	* [  +0.000002] RIP: 0010:native_safe_halt+0x7/0x8
	* [  +0.000002] Code: 60 02 df f0 83 44 24 fc 00 48 8b 00 a8 08 74 0b 65 81 25 dd ce 6f 71 ff ff ff 7f c3 e8 ce e6 72 ff f4 c3 e8 c7 e6 72 ff fb f4 <c3> 0f 1f 44 00 00 53 e8 69 0e 82 ff 65 8b 35 83 64 6f 71 31 ff e8
	* [  +0.000001] RSP: 0018:ffffffff8f203eb0 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff12
	* [  +0.000002] RAX: ffffffff8e918b30 RBX: 0000000000000000 RCX: ffffffff8f253150
	* [  +0.000001] RDX: 000000000012167e RSI: 0000000000000000 RDI: 0000000000000001
	* [  +0.000001] RBP: 0000000000000000 R08: 00000066a1710248 R09: 0000006be2541d3e
	* [  +0.000001] R10: ffff9130ad802288 R11: 0000000000000000 R12: 0000000000000000
	* [  +0.000001] R13: ffffffff8f215780 R14: 00000000f6d76244 R15: 0000000000000000
	* [  +0.000002]  ? __sched_text_end+0x1/0x1
	* [  +0.000011]  default_idle+0x1b/0x2c
	* [  +0.000001]  do_idle+0xe5/0x216
	* [  +0.000003]  cpu_startup_entry+0x6f/0x71
	* [  +0.000003]  start_kernel+0x4f6/0x514
	* [  +0.000006]  secondary_startup_64+0xa4/0xb0
	* [  +0.000006] ---[ end trace 8aa9ce4b885e8e86 ]---
	* [ +25.977799] hrtimer: interrupt took 3356400 ns
	* [Mar10 19:08] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	* [Mar10 19:49] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	* 
	* ==> etcd [4f764004a52a] <==
	* 
	* ==> etcd [6b670bde97e2] <==
	* 2021-03-10 21:13:11.874450 W | wal: sync duration of 1.3542562s, expected less than 1s
	* 2021-03-10 21:13:12.614725 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts\" range_end:\"/registry/serviceaccountt\" count_only:true " with result "range_response_count:0 size:7" took too long (850.492ms) to execute
	* 2021-03-10 21:13:12.650237 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:286" took too long (2.0151488s) to execute
	* 2021-03-10 21:13:12.658249 W | etcdserver: read-only range request "key:\"/registry/csinodes\" range_end:\"/registry/csinodet\" count_only:true " with result "range_response_count:0 size:7" took too long (2.0307765s) to execute
	* 2021-03-10 21:13:12.703245 W | etcdserver: read-only range request "key:\"/registry/leases\" range_end:\"/registry/leaset\" count_only:true " with result "range_response_count:0 size:7" took too long (2.0761215s) to execute
	* 2021-03-10 21:13:12.823059 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-66bff467f8-9xcz4\" " with result "range_response_count:1 size:4392" took too long (943.7398ms) to execute
	* 2021-03-10 21:13:12.923096 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (123.1956ms) to execute
	* 2021-03-10 21:13:16.758042 W | etcdserver: read-only range request "key:\"/registry/clusterroles\" range_end:\"/registry/clusterrolet\" count_only:true " with result "range_response_count:0 size:7" took too long (123.9416ms) to execute
	* 2021-03-10 21:13:17.102112 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:5826" took too long (111.0707ms) to execute
	* 2021-03-10 21:13:17.643256 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/kube-scheduler-missing-upgrade-20210310201637-6496.166b15d39caa128c\" " with result "range_response_count:1 size:954" took too long (299.9319ms) to execute
	* 2021-03-10 21:13:20.234353 W | etcdserver: request "header:<ID:13557092847739346247 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-controller-manager-missing-upgrade-20210310201637-6496.166b159a17f37f6c\" mod_revision:2179 > success:<request_put:<key:\"/registry/events/kube-system/kube-controller-manager-missing-upgrade-20210310201637-6496.166b159a17f37f6c\" value_size:852 lease:4333720810884570366 >> failure:<request_range:<key:\"/registry/events/kube-system/kube-controller-manager-missing-upgrade-20210310201637-6496.166b159a17f37f6c\" > >>" with result "size:16" took too long (131.6295ms) to execute
	* 2021-03-10 21:13:21.416451 W | etcdserver: read-only range request "key:\"/registry/apiregistration.k8s.io/apiservices\" range_end:\"/registry/apiregistration.k8s.io/apiservicet\" count_only:true " with result "range_response_count:0 size:7" took too long (177.2451ms) to execute
	* 2021-03-10 21:13:21.417942 W | etcdserver: read-only range request "key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" " with result "range_response_count:1 size:129" took too long (307.0584ms) to execute
	* 2021-03-10 21:13:25.712593 W | etcdserver: request "header:<ID:13557092847739346270 > lease_revoke:<id:3c24781dd07e1d24>" with result "size:28" took too long (168.4974ms) to execute
	* 2021-03-10 21:13:30.788330 W | etcdserver: read-only range request "key:\"/registry/masterleases/172.17.0.2\" " with result "range_response_count:1 size:129" took too long (388.1451ms) to execute
	* 2021-03-10 21:13:31.602002 W | etcdserver: read-only range request "key:\"/registry/configmaps\" range_end:\"/registry/configmapt\" count_only:true " with result "range_response_count:0 size:7" took too long (274.8408ms) to execute
	* 2021-03-10 21:13:31.774122 W | etcdserver: request "header:<ID:13557092847739346284 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.17.0.2\" mod_revision:3722 > success:<request_put:<key:\"/registry/masterleases/172.17.0.2\" value_size:65 lease:4333720810884570474 >> failure:<request_range:<key:\"/registry/masterleases/172.17.0.2\" > >>" with result "size:16" took too long (180.2983ms) to execute
	* 2021-03-10 21:13:32.037622 W | etcdserver: read-only range request "key:\"/registry/rolebindings\" range_end:\"/registry/rolebindingt\" count_only:true " with result "range_response_count:0 size:7" took too long (108.9644ms) to execute
	* 2021-03-10 21:13:34.743959 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses\" range_end:\"/registry/runtimeclasset\" count_only:true " with result "range_response_count:0 size:5" took too long (488.1912ms) to execute
	* 2021-03-10 21:13:40.950551 W | etcdserver: request "header:<ID:13557092847739346307 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.17.0.2\" mod_revision:3728 > success:<request_put:<key:\"/registry/masterleases/172.17.0.2\" value_size:65 lease:4333720810884570497 >> failure:<request_range:<key:\"/registry/masterleases/172.17.0.2\" > >>" with result "size:16" took too long (132.6726ms) to execute
	* 2021-03-10 21:13:41.264086 W | etcdserver: read-only range request "key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" " with result "range_response_count:1 size:129" took too long (177.2717ms) to execute
	* 2021-03-10 21:13:51.698086 W | etcdserver: request "header:<ID:13557092847739346327 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:3c24781dd07e1d96>" with result "size:40" took too long (902.9452ms) to execute
	* 2021-03-10 21:13:51.744678 W | etcdserver: read-only range request "key:\"/registry/daemonsets\" range_end:\"/registry/daemonsett\" count_only:true " with result "range_response_count:0 size:7" took too long (205.5036ms) to execute
	* 2021-03-10 21:13:59.290407 W | etcdserver: read-only range request "key:\"/registry/mutatingwebhookconfigurations\" range_end:\"/registry/mutatingwebhookconfigurationt\" count_only:true " with result "range_response_count:0 size:5" took too long (318.0296ms) to execute
	* 2021-03-10 21:14:21.273553 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:286" took too long (223.0418ms) to execute
	* 
	* ==> kernel <==
	*  21:14:27 up  2:14,  0 users,  load average: 170.86, 161.19, 148.96
	* Linux missing-upgrade-20210310201637-6496 4.19.121-linuxkit #1 SMP Tue Dec 1 17:50:32 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 19.10"
	* 
	* ==> kube-apiserver [7857290de2f1] <==
	* Trace[954161059]: [1.9798146s] [160.9781ms] Object stored in database
	* I0310 21:13:23.260155       1 trace.go:116] Trace[1328738335]: "GuaranteedUpdate etcd3" type:*core.Pod (started: 2021-03-10 21:13:21.7514146 +0000 UTC m=+2896.784317601) (total time: 1.5086863s):
	* Trace[1328738335]: [1.5083495s] [1.4116794s] Transaction committed
	* I0310 21:13:23.341542       1 trace.go:116] Trace[615876581]: "Patch" url:/api/v1/namespaces/kube-system/pods/coredns-66bff467f8-9xcz4/status,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/9e99141,client:172.17.0.2 (started: 2021-03-10 21:13:21.4143054 +0000 UTC m=+2896.447208301) (total time: 1.9271424s):
	* Trace[615876581]: [317.1277ms] [317.1277ms] Recorded the audit event
	* Trace[615876581]: [432.3702ms] [94.9925ms] About to check admission control
	* Trace[615876581]: [1.8458998s] [1.4135296s] Object stored in database
	* I0310 21:13:24.470655       1 trace.go:116] Trace[755103936]: "Patch" url:/api/v1/namespaces/kube-system/events/kube-scheduler-missing-upgrade-20210310201637-6496.166b15d39caa128c,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/9e99141,client:172.17.0.2 (started: 2021-03-10 21:13:23.5239451 +0000 UTC m=+2898.556848101) (total time: 946.6251ms):
	* Trace[755103936]: [543.4731ms] [543.4731ms] Recorded the audit event
	* Trace[755103936]: [672.7992ms] [129.3261ms] About to apply patch
	* Trace[755103936]: [946.4717ms] [227.8655ms] Object stored in database
	* I0310 21:13:31.798569       1 trace.go:116] Trace[323138300]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (started: 2021-03-10 21:13:30.3950855 +0000 UTC m=+2905.427988501) (total time: 1.4033962s):
	* Trace[323138300]: [486.5556ms] [486.5556ms] initial value restored
	* Trace[323138300]: [932.1669ms] [445.6113ms] Transaction prepared
	* Trace[323138300]: [1.4033426s] [471.1757ms] Transaction committed
	* I0310 21:13:51.824784       1 trace.go:116] Trace[462256863]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (started: 2021-03-10 21:13:50.5505756 +0000 UTC m=+2925.583343501) (total time: 1.2740752s):
	* Trace[462256863]: [136.309ms] [136.309ms] initial value restored
	* Trace[462256863]: [1.2272691s] [1.0909601s] Transaction prepared
	* I0310 21:13:52.772618       1 trace.go:116] Trace[84445848]: "Get" url:/api/v1/namespaces/default/endpoints/kubernetes,user-agent:kube-apiserver/v1.18.0 (linux/amd64) kubernetes/9e99141,client:127.0.0.1 (started: 2021-03-10 21:13:51.8268353 +0000 UTC m=+2926.859602901) (total time: 945.6884ms):
	* Trace[84445848]: [944.8805ms] [944.8572ms] About to write a response
	* I0310 21:13:57.043615       1 trace.go:116] Trace[1617874957]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/missing-upgrade-20210310201637-6496,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/9e99141,client:172.17.0.2 (started: 2021-03-10 21:13:55.8253492 +0000 UTC m=+2930.858116801) (total time: 1.2181837s):
	* Trace[1617874957]: [778.5242ms] [778.5242ms] About to convert to expected version
	* Trace[1617874957]: [1.2003529s] [411.8705ms] Object stored in database
	* I0310 21:14:22.011395       1 trace.go:116] Trace[487197696]: "Get" url:/api/v1/namespaces/default/endpoints/kubernetes,user-agent:kube-apiserver/v1.18.0 (linux/amd64) kubernetes/9e99141,client:127.0.0.1 (started: 2021-03-10 21:14:20.8267023 +0000 UTC m=+2955.858624001) (total time: 1.1845481s):
	* Trace[487197696]: [1.184421s] [1.1844067s] About to write a response
	* 
	* ==> kube-apiserver [aac66ca2a510] <==
	* 
	* ==> kube-controller-manager [984ddfe94ab9] <==
	* 
	* ==> kube-proxy [34d0135bfde5] <==
	* W0310 20:30:07.572532       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	* I0310 20:30:09.733075       1 node.go:136] Successfully retrieved node IP: 172.17.0.2
	* I0310 20:30:09.733148       1 server_others.go:186] Using iptables Proxier.
	* I0310 20:30:09.756241       1 server.go:583] Version: v1.18.0
	* I0310 20:30:10.034234       1 conntrack.go:52] Setting nf_conntrack_max to 131072
	* I0310 20:30:10.034581       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I0310 20:30:10.034644       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I0310 20:30:10.196010       1 config.go:315] Starting service config controller
	* I0310 20:30:10.196065       1 shared_informer.go:223] Waiting for caches to sync for service config
	* I0310 20:30:10.400192       1 config.go:133] Starting endpoints config controller
	* I0310 20:30:10.400362       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	* I0310 20:30:10.744322       1 shared_informer.go:230] Caches are synced for endpoints config 
	* I0310 20:30:11.023011       1 shared_informer.go:230] Caches are synced for service config 
	* I0310 20:30:52.180431       1 trace.go:116] Trace[1525376134]: "iptables restore" (started: 2021-03-10 20:30:50.1346472 +0000 UTC m=+50.324907801) (total time: 2.045639s):
	* Trace[1525376134]: [2.045639s] [2.045639s] END
	* 
	* ==> kube-scheduler [216ab7b1b423] <==
	* I0310 20:59:24.651857       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	* I0310 20:59:24.652004       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	* I0310 20:59:32.351531       1 serving.go:313] Generated self-signed cert in-memory
	* I0310 20:59:45.440615       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	* I0310 20:59:45.460621       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	* W0310 20:59:45.551666       1 authorization.go:47] Authorization is disabled
	* W0310 20:59:45.551925       1 authentication.go:40] Authentication is disabled
	* I0310 20:59:45.551974       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	* I0310 20:59:45.594452       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	* I0310 20:59:45.594681       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	* I0310 20:59:45.594962       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I0310 20:59:45.594974       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I0310 20:59:45.803586       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	* I0310 20:59:45.805646       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	* I0310 20:59:45.837950       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* I0310 20:59:45.838059       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
	* I0310 20:59:46.470124       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-scheduler...
	* I0310 21:00:06.231371       1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
	* E0310 21:12:57.642366       1 leaderelection.go:356] Failed to update lock: Put https://172.17.0.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler?timeout=10s: context deadline exceeded
	* I0310 21:12:57.698902       1 leaderelection.go:277] failed to renew lease kube-system/kube-scheduler: timed out waiting for the condition
	* F0310 21:12:57.698998       1 server.go:244] leaderelection lost
	* 
	* ==> kube-scheduler [5451e73fc768] <==
	* Trace[1106410595]: [10.0567095s] [10.0567095s] END
	* E0310 20:26:26.288879       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: Get https://172.17.0.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: net/http: TLS handshake timeout
	* I0310 20:26:26.297742       1 trace.go:116] Trace[65798654]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:135 (started: 2021-03-10 20:26:16.2439764 +0000 UTC m=+78.717883901) (total time: 10.0537094s):
	* Trace[65798654]: [10.0537094s] [10.0537094s] END
	* E0310 20:26:26.297772       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: Get https://172.17.0.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: net/http: TLS handshake timeout
	* I0310 20:26:26.384305       1 trace.go:116] Trace[133539219]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:135 (started: 2021-03-10 20:26:16.3720894 +0000 UTC m=+78.845996801) (total time: 10.011827s):
	* Trace[133539219]: [10.011827s] [10.011827s] END
	* E0310 20:26:26.384332       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: Get https://172.17.0.2:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: net/http: TLS handshake timeout
	* I0310 20:26:26.391624       1 trace.go:116] Trace[1364504213]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:135 (started: 2021-03-10 20:26:16.3458281 +0000 UTC m=+78.819735501) (total time: 10.0457719s):
	* Trace[1364504213]: [10.0457719s] [10.0457719s] END
	* E0310 20:26:26.391650       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: Get https://172.17.0.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: net/http: TLS handshake timeout
	* I0310 20:26:26.407284       1 trace.go:116] Trace[420950245]: "Reflector ListAndWatch" name:k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233 (started: 2021-03-10 20:26:16.3344847 +0000 UTC m=+78.808392101) (total time: 10.0727549s):
	* Trace[420950245]: [10.0727549s] [10.0727549s] END
	* E0310 20:26:26.407302       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: Get https://172.17.0.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: net/http: TLS handshake timeout
	* E0310 20:26:28.220518       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E0310 20:26:28.232556       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E0310 20:26:28.232768       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E0310 20:26:28.248032       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E0310 20:26:28.588727       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* I0310 20:26:50.760416       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* I0310 20:27:14.654819       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-scheduler...
	* I0310 20:27:15.395821       1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
	* E0310 20:41:27.394002       1 leaderelection.go:356] Failed to update lock: Put https://172.17.0.2:8443/api/v1/namespaces/kube-system/endpoints/kube-scheduler?timeout=10s: context deadline exceeded
	* I0310 20:41:27.394213       1 leaderelection.go:277] failed to renew lease kube-system/kube-scheduler: timed out waiting for the condition
	* F0310 20:41:27.515817       1 server.go:244] leaderelection lost
	* 
	* ==> kube-scheduler [91345ca66b20] <==
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-03-10 20:17:01 UTC, end at Wed 2021-03-10 21:15:26 UTC. --
	* Mar 10 20:57:00 missing-upgrade-20210310201637-6496 kubelet[6445]: E0310 20:57:00.045360    6445 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/docker/overlay2/1e3bea546b99ba4344406186d96ca03c60d2fe38d275687fc8c0e258f27b9a39/diff" to get inode usage: stat /var/lib/docker/overlay2/1e3bea546b99ba4344406186d96ca03c60d2fe38d275687fc8c0e258f27b9a39/diff: no such file or directory, extraDiskErr: could not stat "/var/lib/docker/containers/1b3822abb4bd44a4a28568f58b3d515cc36be32fa2615cdd746a2c31b5c4d3b8" to get inode usage: stat /var/lib/docker/containers/1b3822abb4bd44a4a28568f58b3d515cc36be32fa2615cdd746a2c31b5c4d3b8: no such file or directory
	* Mar 10 20:57:09 missing-upgrade-20210310201637-6496 kubelet[6445]: E0310 20:57:09.025828    6445 remote_runtime.go:295] ContainerStatus "1b3822abb4bd44a4a28568f58b3d515cc36be32fa2615cdd746a2c31b5c4d3b8" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 1b3822abb4bd44a4a28568f58b3d515cc36be32fa2615cdd746a2c31b5c4d3b8
	* Mar 10 20:57:09 missing-upgrade-20210310201637-6496 kubelet[6445]: E0310 20:57:09.032298    6445 kuberuntime_manager.go:952] getPodContainerStatuses for pod "kube-controller-manager-missing-upgrade-20210310201637-6496_kube-system(c92479a2ea69d7c331c16a5105dd1b8c)" failed: rpc error: code = Unknown desc = Error: No such container: 1b3822abb4bd44a4a28568f58b3d515cc36be32fa2615cdd746a2c31b5c4d3b8
	* Mar 10 20:59:09 missing-upgrade-20210310201637-6496 kubelet[6445]: I0310 20:59:09.520589    6445 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 91345ca66b20337d4801d1fba1b04a7c1a2a7dbda7c7b1f9fd0806064c6bccdd
	* Mar 10 21:04:20 missing-upgrade-20210310201637-6496 kubelet[6445]: E0310 21:04:20.712012    6445 kubelet.go:1845] skipping pod synchronization - container runtime is down
	* Mar 10 21:04:20 missing-upgrade-20210310201637-6496 kubelet[6445]: E0310 21:04:20.813163    6445 kubelet.go:1845] skipping pod synchronization - container runtime is down
	* Mar 10 21:04:21 missing-upgrade-20210310201637-6496 kubelet[6445]: E0310 21:04:21.028238    6445 kubelet.go:1845] skipping pod synchronization - container runtime is down
	* Mar 10 21:04:21 missing-upgrade-20210310201637-6496 kubelet[6445]: I0310 21:04:21.401559    6445 setters.go:559] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2021-03-10 21:04:21.4009722 +0000 UTC m=+2241.980226701 LastTransitionTime:2021-03-10 21:04:21.4009722 +0000 UTC m=+2241.980226701 Reason:KubeletNotReady Message:container runtime is down}
	* Mar 10 21:04:21 missing-upgrade-20210310201637-6496 kubelet[6445]: E0310 21:04:21.433821    6445 kubelet.go:1845] skipping pod synchronization - container runtime is down
	* Mar 10 21:04:22 missing-upgrade-20210310201637-6496 kubelet[6445]: E0310 21:04:22.267644    6445 kubelet.go:1845] skipping pod synchronization - container runtime is down
	* Mar 10 21:04:23 missing-upgrade-20210310201637-6496 kubelet[6445]: E0310 21:04:23.868092    6445 kubelet.go:1845] skipping pod synchronization - container runtime is down
	* Mar 10 21:04:27 missing-upgrade-20210310201637-6496 kubelet[6445]: E0310 21:04:27.082852    6445 kubelet.go:1845] skipping pod synchronization - container runtime is down
	* Mar 10 21:04:32 missing-upgrade-20210310201637-6496 kubelet[6445]: E0310 21:04:32.114013    6445 kubelet.go:1845] skipping pod synchronization - container runtime is down
	* Mar 10 21:04:37 missing-upgrade-20210310201637-6496 kubelet[6445]: E0310 21:04:37.137375    6445 kubelet.go:1845] skipping pod synchronization - container runtime is down
	* Mar 10 21:04:42 missing-upgrade-20210310201637-6496 kubelet[6445]: E0310 21:04:42.235023    6445 kubelet.go:1845] skipping pod synchronization - container runtime is down
	* Mar 10 21:13:55 missing-upgrade-20210310201637-6496 kubelet[6445]: E0310 21:13:55.094302    6445 cadvisor_stats_provider.go:400] Partial failure issuing cadvisor.ContainerInfoV2: partial failures: ["/kubepods/burstable/podc92479a2ea69d7c331c16a5105dd1b8c/984ddfe94ab9228d43622303c2aaa3a941b32ba3b517df16b4c54bf834324d9c": RecentStats: unable to find data in memory cache]
	* Mar 10 21:14:03 missing-upgrade-20210310201637-6496 kubelet[6445]: W0310 21:14:03.069835    6445 container.go:412] Failed to create summary reader for "/kubepods/burstable/podc92479a2ea69d7c331c16a5105dd1b8c/984ddfe94ab9228d43622303c2aaa3a941b32ba3b517df16b4c54bf834324d9c": none of the resources are being tracked.
	* Mar 10 21:14:03 missing-upgrade-20210310201637-6496 kubelet[6445]: W0310 21:14:03.220608    6445 container.go:412] Failed to create summary reader for "/kubepods/burstable/pod5795d0c442cb997ff93c49feeb9f6386/216ab7b1b423567b750e0d1eff65725cd90aaf874399bb9baa0bcb37d9807f2d": none of the resources are being tracked.
	* Mar 10 21:14:24 missing-upgrade-20210310201637-6496 kubelet[6445]: I0310 21:14:24.223216    6445 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 216ab7b1b423567b750e0d1eff65725cd90aaf874399bb9baa0bcb37d9807f2d
	* Mar 10 21:14:35 missing-upgrade-20210310201637-6496 kubelet[6445]: I0310 21:14:35.606208    6445 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 984ddfe94ab9228d43622303c2aaa3a941b32ba3b517df16b4c54bf834324d9c
	* Mar 10 21:14:43 missing-upgrade-20210310201637-6496 kubelet[6445]: I0310 21:14:43.740830    6445 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 91345ca66b20337d4801d1fba1b04a7c1a2a7dbda7c7b1f9fd0806064c6bccdd
	* Mar 10 21:15:04 missing-upgrade-20210310201637-6496 kubelet[6445]: I0310 21:15:04.936775    6445 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 91345ca66b20337d4801d1fba1b04a7c1a2a7dbda7c7b1f9fd0806064c6bccdd
	* Mar 10 21:15:05 missing-upgrade-20210310201637-6496 kubelet[6445]: E0310 21:15:05.077118    6445 remote_runtime.go:295] ContainerStatus "91345ca66b20337d4801d1fba1b04a7c1a2a7dbda7c7b1f9fd0806064c6bccdd" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 91345ca66b20337d4801d1fba1b04a7c1a2a7dbda7c7b1f9fd0806064c6bccdd
	* Mar 10 21:15:13 missing-upgrade-20210310201637-6496 kubelet[6445]: E0310 21:15:13.881706    6445 remote_runtime.go:295] ContainerStatus "984ddfe94ab9228d43622303c2aaa3a941b32ba3b517df16b4c54bf834324d9c" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 984ddfe94ab9228d43622303c2aaa3a941b32ba3b517df16b4c54bf834324d9c
	* Mar 10 21:15:13 missing-upgrade-20210310201637-6496 kubelet[6445]: E0310 21:15:13.881818    6445 kuberuntime_manager.go:952] getPodContainerStatuses for pod "kube-controller-manager-missing-upgrade-20210310201637-6496_kube-system(c92479a2ea69d7c331c16a5105dd1b8c)" failed: rpc error: code = Unknown desc = Error: No such container: 984ddfe94ab9228d43622303c2aaa3a941b32ba3b517df16b4c54bf834324d9c
	* 
	* ==> Audit <==
	* |---------|-------------------------------------------|-------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| Command |                   Args                    |                  Profile                  |          User           | Version |          Start Time           |           End Time            |
	|---------|-------------------------------------------|-------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| delete  | -p pause-20210310201637-6496              | pause-20210310201637-6496                 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:32:24 GMT | Wed, 10 Mar 2021 20:32:49 GMT |
	| -p      | offline-docker-20210310201637-6496        | offline-docker-20210310201637-6496        | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:32:04 GMT | Wed, 10 Mar 2021 20:33:57 GMT |
	|         | logs -n 25                                |                                           |                         |         |                               |                               |
	| delete  | -p                                        | offline-docker-20210310201637-6496        | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:34:20 GMT | Wed, 10 Mar 2021 20:34:47 GMT |
	|         | offline-docker-20210310201637-6496        |                                           |                         |         |                               |                               |
	| stop    | -p                                        | kubernetes-upgrade-20210310201637-6496    | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:39:52 GMT | Wed, 10 Mar 2021 20:40:10 GMT |
	|         | kubernetes-upgrade-20210310201637-6496    |                                           |                         |         |                               |                               |
	| start   | -p nospam-20210310201637-6496             | nospam-20210310201637-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:16:38 GMT | Wed, 10 Mar 2021 20:40:39 GMT |
	|         | -n=1 --memory=2250                        |                                           |                         |         |                               |                               |
	|         | --wait=false --driver=docker              |                                           |                         |         |                               |                               |
	| -p      | nospam-20210310201637-6496                | nospam-20210310201637-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:41:42 GMT | Wed, 10 Mar 2021 20:44:25 GMT |
	|         | logs -n 25                                |                                           |                         |         |                               |                               |
	| delete  | -p nospam-20210310201637-6496             | nospam-20210310201637-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:44:37 GMT | Wed, 10 Mar 2021 20:44:59 GMT |
	| -p      | docker-flags-20210310201637-6496          | docker-flags-20210310201637-6496          | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:47:18 GMT | Wed, 10 Mar 2021 20:49:03 GMT |
	|         | logs -n 25                                |                                           |                         |         |                               |                               |
	| delete  | -p                                        | docker-flags-20210310201637-6496          | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:49:21 GMT | Wed, 10 Mar 2021 20:49:47 GMT |
	|         | docker-flags-20210310201637-6496          |                                           |                         |         |                               |                               |
	| delete  | -p                                        | force-systemd-env-20210310201637-6496     | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:49:41 GMT | Wed, 10 Mar 2021 20:50:17 GMT |
	|         | force-systemd-env-20210310201637-6496     |                                           |                         |         |                               |                               |
	| -p      | cert-options-20210310203249-6496          | cert-options-20210310203249-6496          | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:50:36 GMT | Wed, 10 Mar 2021 20:50:43 GMT |
	|         | ssh openssl x509 -text -noout -in         |                                           |                         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt     |                                           |                         |         |                               |                               |
	| delete  | -p                                        | cert-options-20210310203249-6496          | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:51:10 GMT | Wed, 10 Mar 2021 20:51:56 GMT |
	|         | cert-options-20210310203249-6496          |                                           |                         |         |                               |                               |
	| delete  | -p                                        | disable-driver-mounts-20210310205156-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:51:57 GMT | Wed, 10 Mar 2021 20:52:02 GMT |
	|         | disable-driver-mounts-20210310205156-6496 |                                           |                         |         |                               |                               |
	| -p      | force-systemd-flag-20210310203447-6496    | force-systemd-flag-20210310203447-6496    | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:53:03 GMT | Wed, 10 Mar 2021 20:53:44 GMT |
	|         | ssh docker info --format                  |                                           |                         |         |                               |                               |
	|         |                          |                                           |                         |         |                               |                               |
	| delete  | -p                                        | force-systemd-flag-20210310203447-6496    | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:54:07 GMT | Wed, 10 Mar 2021 20:54:36 GMT |
	|         | force-systemd-flag-20210310203447-6496    |                                           |                         |         |                               |                               |
	| stop    | -p                                        | old-k8s-version-20210310204459-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:02:19 GMT | Wed, 10 Mar 2021 21:02:40 GMT |
	|         | old-k8s-version-20210310204459-6496       |                                           |                         |         |                               |                               |
	|         | --alsologtostderr -v=3                    |                                           |                         |         |                               |                               |
	| addons  | enable dashboard -p                       | old-k8s-version-20210310204459-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:02:42 GMT | Wed, 10 Mar 2021 21:02:42 GMT |
	|         | old-k8s-version-20210310204459-6496       |                                           |                         |         |                               |                               |
	| -p      | embed-certs-20210310205017-6496           | embed-certs-20210310205017-6496           | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:07:05 GMT | Wed, 10 Mar 2021 21:08:33 GMT |
	|         | logs -n 25                                |                                           |                         |         |                               |                               |
	| start   | -p                                        | stopped-upgrade-20210310201637-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:52:21 GMT | Wed, 10 Mar 2021 21:09:23 GMT |
	|         | stopped-upgrade-20210310201637-6496       |                                           |                         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr           |                                           |                         |         |                               |                               |
	|         | -v=1 --driver=docker                      |                                           |                         |         |                               |                               |
	| logs    | -p                                        | stopped-upgrade-20210310201637-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:09:23 GMT | Wed, 10 Mar 2021 21:10:51 GMT |
	|         | stopped-upgrade-20210310201637-6496       |                                           |                         |         |                               |                               |
	| delete  | -p                                        | stopped-upgrade-20210310201637-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:10:52 GMT | Wed, 10 Mar 2021 21:11:13 GMT |
	|         | stopped-upgrade-20210310201637-6496       |                                           |                         |         |                               |                               |
	| delete  | -p                                        | running-upgrade-20210310201637-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:11:45 GMT | Wed, 10 Mar 2021 21:12:11 GMT |
	|         | running-upgrade-20210310201637-6496       |                                           |                         |         |                               |                               |
	| stop    | -p                                        | embed-certs-20210310205017-6496           | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:12:03 GMT | Wed, 10 Mar 2021 21:12:38 GMT |
	|         | embed-certs-20210310205017-6496           |                                           |                         |         |                               |                               |
	|         | --alsologtostderr -v=3                    |                                           |                         |         |                               |                               |
	| addons  | enable dashboard -p                       | embed-certs-20210310205017-6496           | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:12:40 GMT | Wed, 10 Mar 2021 21:12:41 GMT |
	|         | embed-certs-20210310205017-6496           |                                           |                         |         |                               |                               |
	| -p      | kubernetes-upgrade-20210310201637-6496    | kubernetes-upgrade-20210310201637-6496    | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:11:50 GMT | Wed, 10 Mar 2021 21:15:02 GMT |
	|         | logs -n 25                                |                                           |                         |         |                               |                               |
	|---------|-------------------------------------------|-------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/03/10 21:12:41
	* Running on machine: windows-server-1
	* Binary: Built with gc go1.16 for windows/amd64
	* Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	* I0310 21:12:41.864466   18444 out.go:239] Setting OutFile to fd 2560 ...
	* I0310 21:12:41.865478   18444 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 21:12:41.865478   18444 out.go:252] Setting ErrFile to fd 1780...
	* I0310 21:12:41.865478   18444 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 21:12:41.876384   18444 out.go:246] Setting JSON to false
	* I0310 21:12:41.878392   18444 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":36227,"bootTime":1615374534,"procs":118,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	* W0310 21:12:41.879390   18444 start.go:116] gopshost.Virtualization returned error: not implemented yet
	* I0310 21:12:41.883412   18444 out.go:129] * [embed-certs-20210310205017-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	* I0310 21:12:38.444306   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:12:39.676364   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:12:40.812667   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:12:42.057695   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:12:41.886411   18444 out.go:129]   - MINIKUBE_LOCATION=10722
	* I0310 21:12:41.897821   18444 driver.go:323] Setting default libvirt URI to qemu:///system
	* I0310 21:12:42.439380   18444 docker.go:119] docker version: linux-20.10.2
	* I0310 21:12:42.446315   18444 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 21:12:43.723542   18444 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.277228s)
	* I0310 21:12:43.724888   18444 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:9 ContainersRunning:8 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:97 OomKillDisable:true NGoroutines:71 SystemTime:2021-03-10 21:12:43.1218639 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 21:12:43.728250   18444 out.go:129] * Using the docker driver based on existing profile
	* I0310 21:12:43.729044   18444 start.go:276] selected driver: docker
	* I0310 21:12:43.729044   18444 start.go:718] validating driver "docker" against &{Name:embed-certs-20210310205017-6496 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:embed-certs-20210310205017-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.97 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[dashboard:true] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 21:12:43.729311   18444 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	* I0310 21:12:44.851133   18444 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 21:12:45.911277   18444 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0601454s)
	* I0310 21:12:45.912501   18444 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:9 ContainersRunning:8 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:71 SystemTime:2021-03-10 21:12:45.4750355 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://i
ndex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:
[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 21:12:45.913355   18444 start_flags.go:717] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	* I0310 21:12:45.913484   18444 start_flags.go:398] config:
	* {Name:embed-certs-20210310205017-6496 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:embed-certs-20210310205017-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRIS
ocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.97 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[dashboard:true] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 21:12:45.918095   18444 out.go:129] * Starting control plane node embed-certs-20210310205017-6496 in cluster embed-certs-20210310205017-6496
	* I0310 21:12:42.551198   22316 kic_runner.go:124] Done: [docker exec --privileged false-20210310211211-6496 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.9829709s)
	* I0310 21:12:42.563519   22316 kic.go:240] ensuring only current user has permissions to key file located at : C:\Users\jenkins\.minikube\machines\false-20210310211211-6496\id_rsa...
	* I0310 21:12:43.412980   22316 cli_runner.go:115] Run: docker container inspect false-20210310211211-6496 --format=
	* I0310 21:12:44.102425   22316 machine.go:88] provisioning docker machine ...
	* I0310 21:12:44.102912   22316 ubuntu.go:169] provisioning hostname "false-20210310211211-6496"
	* I0310 21:12:44.115201   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	* I0310 21:12:44.767257   22316 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:12:44.768085   22316 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55178 <nil> <nil>}
	* I0310 21:12:44.768232   22316 main.go:121] libmachine: About to run SSH command:
	* sudo hostname false-20210310211211-6496 && echo "false-20210310211211-6496" | sudo tee /etc/hostname
	* I0310 21:12:44.780818   22316 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	* I0310 21:12:46.600599   18444 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	* I0310 21:12:46.600599   18444 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	* I0310 21:12:46.601017   18444 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 21:12:46.601421   18444 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 21:12:46.601421   18444 cache.go:54] Caching tarball of preloaded images
	* I0310 21:12:46.601731   18444 preload.go:131] Found C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	* I0310 21:12:46.601731   18444 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on docker
	* I0310 21:12:46.602007   18444 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\config.json ...
	* I0310 21:12:46.616567   18444 cache.go:185] Successfully downloaded all kic artifacts
	* I0310 21:12:46.622732   18444 start.go:313] acquiring machines lock for embed-certs-20210310205017-6496: {Name:mk5deb5478a17b664131b4c3205eef748b11179e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:12:46.624001   18444 start.go:317] acquired machines lock for "embed-certs-20210310205017-6496" in 300.2µs
	* I0310 21:12:46.624373   18444 start.go:93] Skipping create...Using existing machine configuration
	* I0310 21:12:46.624586   18444 fix.go:55] fixHost starting: 
	* I0310 21:12:46.639912   18444 cli_runner.go:115] Run: docker container inspect embed-certs-20210310205017-6496 --format=
	* I0310 21:12:47.310427   18444 fix.go:108] recreateIfNeeded on embed-certs-20210310205017-6496: state=Stopped err=<nil>
	* W0310 21:12:47.310427   18444 fix.go:134] unexpected machine state, will restart: <nil>
	* I0310 21:12:43.470722   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:12:44.606164   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:12:45.734123   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:12:46.804581   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:12:47.314638   18444 out.go:129] * Restarting existing docker container for "embed-certs-20210310205017-6496" ...
	* I0310 21:12:47.319764   18444 cli_runner.go:115] Run: docker start embed-certs-20210310205017-6496
	* I0310 21:12:50.315712   22316 main.go:121] libmachine: SSH cmd err, output: <nil>: false-20210310211211-6496
	* 
	* I0310 21:12:50.336532   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	* I0310 21:12:51.020432   22316 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:12:51.030067   22316 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55178 <nil> <nil>}
	* I0310 21:12:51.030067   22316 main.go:121] libmachine: About to run SSH command:
	* 
	* 		if ! grep -xq '.*\sfalse-20210310211211-6496' /etc/hosts; then
	* 			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	* 				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 false-20210310211211-6496/g' /etc/hosts;
	* 			else 
	* 				echo '127.0.1.1 false-20210310211211-6496' | sudo tee -a /etc/hosts; 
	* 			fi
	* 		fi
	* I0310 21:12:47.356289   18752 ssh_runner.go:189] Completed: docker images --format :: (25.8237496s)
	* I0310 21:12:47.357507   18752 docker.go:423] Got preloaded images: -- stdout --
	* k8s.gcr.io/kube-apiserver:v1.20.5-rc.0
	* k8s.gcr.io/kube-controller-manager:v1.20.5-rc.0
	* k8s.gcr.io/kube-proxy:v1.20.5-rc.0
	* k8s.gcr.io/kube-scheduler:v1.20.5-rc.0
	* kubernetesui/dashboard:v2.1.0
	* gcr.io/k8s-minikube/storage-provisioner:v4
	* k8s.gcr.io/etcd:3.4.13-0
	* k8s.gcr.io/coredns:1.7.0
	* kubernetesui/metrics-scraper:v1.0.4
	* k8s.gcr.io/pause:3.2
	* 
	* -- /stdout --
	* I0310 21:12:47.357507   18752 docker.go:429] minikube-local-cache-test:functional-20210119220838-6552 wasn't preloaded
	* I0310 21:12:47.357507   18752 cache_images.go:76] LoadImages start: [minikube-local-cache-test:functional-20210119220838-6552 minikube-local-cache-test:functional-20210128021318-232 minikube-local-cache-test:functional-20210304184021-4052 minikube-local-cache-test:functional-20210310083645-5040 minikube-local-cache-test:functional-20210310191609-6496 minikube-local-cache-test:functional-20210106002159-6856 minikube-local-cache-test:functional-20210120214442-10992 minikube-local-cache-test:functional-20210219145454-9520 minikube-local-cache-test:functional-20210303214129-4588 minikube-local-cache-test:functional-20210105233232-2512 minikube-local-cache-test:functional-20210212145109-352 minikube-local-cache-test:functional-20210213143925-7440 minikube-local-cache-test:functional-20210106215525-1984 minikube-local-cache-test:functional-20210219220622-3920 minikube-local-cache-test:functional-20210301195830-5700 minikube-local-cache-test:functional-20210115191024-3516 minikube-local-cache-test:functio
nal-20210126212539-5172 minikube-local-cache-test:functional-20210225231842-5736 minikube-local-cache-test:functional-20210112045103-7160 minikube-local-cache-test:functional-20210114204234-6692 minikube-local-cache-test:functional-20210224014800-800 minikube-local-cache-test:functional-20210120231122-7024 minikube-local-cache-test:functional-20210220004129-7452 minikube-local-cache-test:functional-20210306072141-12056 minikube-local-cache-test:functional-20210309234032-4944 minikube-local-cache-test:functional-20210115023213-8464 minikube-local-cache-test:functional-20210120022529-1140 minikube-local-cache-test:functional-20210120175851-7432 minikube-local-cache-test:functional-20210123004019-5372 minikube-local-cache-test:functional-20210304002630-1156 minikube-local-cache-test:functional-20210308233820-5396 minikube-local-cache-test:functional-20210106011107-6492 minikube-local-cache-test:functional-20210107002220-9088 minikube-local-cache-test:functional-20210107190945-8748]
	* I0310 21:12:47.560647   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210304002630-1156
	* I0310 21:12:47.573296   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210309234032-4944
	* I0310 21:12:47.591639   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210120175851-7432
	* I0310 21:12:47.593746   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210304184021-4052
	* I0310 21:12:47.623345   18752 image.go:168] retrieving image: minikube-local-cache-test:functional-20210106215525-1984
	* I0310 21:12:47.639327   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210308233820-5396
	* I0310 21:12:47.639327   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210303214129-4588
	* I0310 21:12:47.657974   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210119220838-6552
	* I0310 21:12:47.674059   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210306072141-12056
	* I0310 21:12:47.697841   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210213143925-7440
	* I0310 21:12:47.728111   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210220004129-7452
	* I0310 21:12:47.747321   18752 image.go:168] retrieving image: minikube-local-cache-test:functional-20210106011107-6492
	* I0310 21:12:47.784534   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210224014800-800
	* I0310 21:12:47.805452   18752 image.go:168] retrieving image: minikube-local-cache-test:functional-20210107190945-8748
	* I0310 21:12:47.848672   18752 image.go:168] retrieving image: minikube-local-cache-test:functional-20210106002159-6856
	* I0310 21:12:47.868695   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210120214442-10992
	* I0310 21:12:47.870760   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210126212539-5172
	* I0310 21:12:47.902259   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210115191024-3516
	* I0310 21:12:47.906814   18752 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210106002159-6856: Error response from daemon: reference does not exist
	* I0310 21:12:47.923463   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210123004019-5372
	* I0310 21:12:47.928202   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210120022529-1140
	* I0310 21:12:47.956178   18752 image.go:168] retrieving image: minikube-local-cache-test:functional-20210105233232-2512
	* I0310 21:12:47.963166   18752 image.go:168] retrieving image: minikube-local-cache-test:functional-20210112045103-7160
	* I0310 21:12:47.998199   18752 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210112045103-7160: Error response from daemon: reference does not exist
	* I0310 21:12:48.064967   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210219145454-9520
	* I0310 21:12:48.074644   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210225231842-5736
	* I0310 21:12:48.088452   18752 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210106215525-1984: Error response from daemon: reference does not exist
	* W0310 21:12:48.092246   18752 image.go:185] authn lookup for minikube-local-cache-test:functional-20210106002159-6856 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	* I0310 21:12:48.108649   18752 image.go:168] retrieving image: minikube-local-cache-test:functional-20210107002220-9088
	* I0310 21:12:48.112658   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210120231122-7024
	* I0310 21:12:48.135760   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210310083645-5040
	* I0310 21:12:48.151248   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210114204234-6692
	* W0310 21:12:48.174652   18752 image.go:185] authn lookup for minikube-local-cache-test:functional-20210112045103-7160 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	* I0310 21:12:48.195793   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210128021318-232
	* I0310 21:12:48.226996   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210212145109-352
	* I0310 21:12:48.245445   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210310191609-6496
	* I0310 21:12:48.257949   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210115023213-8464
	* I0310 21:12:48.267306   18752 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210105233232-2512: Error response from daemon: reference does not exist
	* W0310 21:12:48.277733   18752 image.go:185] authn lookup for minikube-local-cache-test:functional-20210106215525-1984 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	* I0310 21:12:48.351732   18752 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210106011107-6492: Error response from daemon: reference does not exist
	* I0310 21:12:48.370641   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210219220622-3920
	* I0310 21:12:48.388773   18752 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210107190945-8748: Error response from daemon: reference does not exist
	* I0310 21:12:48.388773   18752 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210107002220-9088: Error response from daemon: reference does not exist
	* W0310 21:12:48.443070   18752 image.go:185] authn lookup for minikube-local-cache-test:functional-20210105233232-2512 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	* I0310 21:12:48.451259   18752 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210112045103-7160 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210112045103-7160: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	* I0310 21:12:48.451715   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210112045103-7160" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210112045103-7160
	* I0310 21:12:48.451913   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210112045103-7160 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160
	* I0310 21:12:48.451913   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160
	* I0310 21:12:48.465810   18752 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210106002159-6856 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210106002159-6856: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	* I0310 21:12:48.465810   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210106002159-6856" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210106002159-6856
	* I0310 21:12:48.465810   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106002159-6856 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856
	* I0310 21:12:48.465810   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856
	* I0310 21:12:48.473883   18752 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210301195830-5700
	* I0310 21:12:48.480057   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856
	* I0310 21:12:48.480057   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160
	* I0310 21:12:48.491637   18752 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210106215525-1984 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210106215525-1984: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	* I0310 21:12:48.492632   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210106215525-1984" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210106215525-1984
	* I0310 21:12:48.492632   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106215525-1984 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984
	* I0310 21:12:48.492632   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984
	* I0310 21:12:48.519108   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984
	* W0310 21:12:48.538349   18752 image.go:185] authn lookup for minikube-local-cache-test:functional-20210106011107-6492 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	* W0310 21:12:48.555402   18752 image.go:185] authn lookup for minikube-local-cache-test:functional-20210107002220-9088 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	* W0310 21:12:48.580997   18752 image.go:185] authn lookup for minikube-local-cache-test:functional-20210107190945-8748 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	* I0310 21:12:48.610702   18752 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210105233232-2512 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210105233232-2512: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	* I0310 21:12:48.610702   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210105233232-2512" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210105233232-2512
	* I0310 21:12:48.611057   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210105233232-2512 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512
	* I0310 21:12:48.611338   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512
	* I0310 21:12:48.631430   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512
	* I0310 21:12:48.660144   18752 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210106011107-6492 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210106011107-6492: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	* I0310 21:12:48.660348   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210106011107-6492" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210106011107-6492
	* I0310 21:12:48.660668   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106011107-6492 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492
	* I0310 21:12:48.660668   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492
	* I0310 21:12:48.672287   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492
	* I0310 21:12:48.681108   18752 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210107002220-9088 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210107002220-9088: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	* I0310 21:12:48.682325   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210107002220-9088" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210107002220-9088
	* I0310 21:12:48.682769   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107002220-9088 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088
	* I0310 21:12:48.682769   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088
	* I0310 21:12:48.689153   18752 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210107190945-8748 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210107190945-8748: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	* I0310 21:12:48.689153   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210107190945-8748" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210107190945-8748
	* I0310 21:12:48.689574   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107190945-8748 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748
	* I0310 21:12:48.689574   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748
	* I0310 21:12:48.701837   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748
	* I0310 21:12:48.702845   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088
	* W0310 21:12:51.262622   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:12:51.262622   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 21:12:51.262622   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210120214442-10992" needs transfer: "minikube-local-cache-test:functional-20210120214442-10992" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:12:51.262622   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120214442-10992 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992
	* I0310 21:12:51.263118   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992
	* W0310 21:12:51.263118   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:12:51.263118   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 21:12:51.263118   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210123004019-5372" needs transfer: "minikube-local-cache-test:functional-20210123004019-5372" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:12:51.263118   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210120231122-7024" needs transfer: "minikube-local-cache-test:functional-20210120231122-7024" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:12:51.263118   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210123004019-5372 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372
	* I0310 21:12:51.263118   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120231122-7024 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024
	* I0310 21:12:51.263118   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372
	* I0310 21:12:51.263118   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024
	* W0310 21:12:51.263344   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 21:12:51.263344   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210224014800-800" needs transfer: "minikube-local-cache-test:functional-20210224014800-800" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:12:51.263344   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210224014800-800 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800
	* I0310 21:12:51.263608   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800
	* W0310 21:12:51.263608   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 21:12:51.263773   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088: NewSession: ssh: rejected: connect failed (open failed)
	* W0310 21:12:51.263118   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 21:12:51.264568   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210126212539-5172" needs transfer: "minikube-local-cache-test:functional-20210126212539-5172" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* W0310 21:12:51.263118   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:12:51.263118   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:12:51.263118   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:12:51.263118   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:12:51.263118   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:12:51.262622   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:12:51.263773   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:12:51.263773   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:12:51.263773   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:12:51.263773   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:12:51.263773   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:12:51.263773   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:12:51.263773   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:12:51.264031   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:12:51.264031   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 21:12:51.263118   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210115023213-8464" needs transfer: "minikube-local-cache-test:functional-20210115023213-8464" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* W0310 21:12:51.263118   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:12:51.264031   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 21:12:51.264395   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088 (4096 bytes)
	* I0310 21:12:51.265396   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856: NewSession: ssh: rejected: connect failed (open failed)
	* I0310 21:12:51.265396   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210219220622-3920" needs transfer: "minikube-local-cache-test:functional-20210219220622-3920" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:12:51.265396   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219220622-3920 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920
	* I0310 21:12:51.265396   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920
	* I0310 21:12:51.265396   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856 (4096 bytes)
	* I0310 21:12:51.265689   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210301195830-5700" needs transfer: "minikube-local-cache-test:functional-20210301195830-5700" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:12:51.265800   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210301195830-5700 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700
	* I0310 21:12:51.265800   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700
	* I0310 21:12:51.265800   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115023213-8464 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464
	* I0310 21:12:51.266048   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464
	* I0310 21:12:51.265396   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984: NewSession: ssh: rejected: connect failed (open failed)
	* I0310 21:12:51.265103   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210126212539-5172 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172
	* I0310 21:12:51.266741   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984 (4096 bytes)
	* I0310 21:12:51.265103   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210212145109-352" needs transfer: "minikube-local-cache-test:functional-20210212145109-352" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:12:51.265103   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210128021318-232" needs transfer: "minikube-local-cache-test:functional-20210128021318-232" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:12:51.265103   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210310191609-6496" needs transfer: "minikube-local-cache-test:functional-20210310191609-6496" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:12:51.265103   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210225231842-5736" needs transfer: "minikube-local-cache-test:functional-20210225231842-5736" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:12:51.265103   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210120022529-1140" needs transfer: "minikube-local-cache-test:functional-20210120022529-1140" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:12:51.265396   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210114204234-6692" needs transfer: "minikube-local-cache-test:functional-20210114204234-6692" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:12:51.265396   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210115191024-3516" needs transfer: "minikube-local-cache-test:functional-20210115191024-3516" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:12:51.265396   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210310083645-5040" needs transfer: "minikube-local-cache-test:functional-20210310083645-5040" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:12:51.265396   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160: NewSession: ssh: rejected: connect failed (open failed)
	* I0310 21:12:51.265103   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512: NewSession: ssh: rejected: connect failed (open failed)
	* I0310 21:12:51.267060   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172
	* I0310 21:12:51.267060   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512 (4096 bytes)
	* I0310 21:12:51.267060   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492: NewSession: ssh: rejected: connect failed (open failed)
	* I0310 21:12:51.267060   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210114204234-6692 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692
	* I0310 21:12:51.278344   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692
	* I0310 21:12:51.267060   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115191024-3516 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516
	* I0310 21:12:51.279628   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210219145454-9520" needs transfer: "minikube-local-cache-test:functional-20210219145454-9520" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:12:51.279941   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219145454-9520 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520
	* I0310 21:12:51.279941   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520
	* I0310 21:12:51.267060   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310083645-5040 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040
	* I0310 21:12:51.280346   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040
	* I0310 21:12:51.280554   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160 (4096 bytes)
	* I0310 21:12:51.282162   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120022529-1140 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140
	* I0310 21:12:51.282162   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140
	* I0310 21:12:51.266741   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748: NewSession: ssh: rejected: connect failed (open failed)
	* I0310 21:12:51.282481   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210225231842-5736 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736
	* I0310 21:12:51.282481   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736
	* I0310 21:12:51.282680   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748 (4096 bytes)
	* I0310 21:12:51.282680   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492 (4096 bytes)
	* I0310 21:12:51.283252   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516
	* I0310 21:12:51.282162   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210212145109-352 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352
	* I0310 21:12:51.284541   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352
	* I0310 21:12:51.282481   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210128021318-232 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232
	* I0310 21:12:51.284736   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232
	* I0310 21:12:51.282481   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310191609-6496 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496
	* I0310 21:12:51.285901   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496
	* I0310 21:12:51.469515   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024
	* I0310 21:12:51.484589   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372
	* I0310 21:12:51.533483   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992
	* I0310 21:12:51.629820   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692
	* I0310 21:12:51.706342   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800
	* I0310 21:12:51.707710   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.708397   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.708896   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.732967   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.733363   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.733561   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.743681   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700
	* I0310 21:12:51.745735   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.753348   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172
	* I0310 21:12:51.755571   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736
	* I0310 21:12:51.756103   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040
	* I0310 21:12:51.762547   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232
	* I0310 21:12:51.775210   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920
	* I0310 21:12:51.775919   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496
	* I0310 21:12:51.800100   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464
	* I0310 21:12:51.810091   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516
	* I0310 21:12:51.813706   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352
	* I0310 21:12:51.813706   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.813706   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520
	* I0310 21:12:51.813706   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140
	* I0310 21:12:51.820771   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.827809   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.860588   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.868441   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.886285   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.908114   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.919653   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.930053   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.930053   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.945930   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.949437   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.952434   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.948834   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.953370   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.975870   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:51.975870   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:48.030376   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:12:49.199707   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:12:51.280979   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:12:54.762225   18444 cli_runner.go:168] Completed: docker start embed-certs-20210310205017-6496: (7.4424719s)
	* I0310 21:12:54.773282   18444 cli_runner.go:115] Run: docker container inspect embed-certs-20210310205017-6496 --format=
	* I0310 21:12:55.405890   18444 kic.go:410] container "embed-certs-20210310205017-6496" state is running.
	* I0310 21:12:55.438914   18444 cli_runner.go:115] Run: docker container inspect -f "" embed-certs-20210310205017-6496
	* I0310 21:12:56.104627   18444 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\config.json ...
	* I0310 21:12:56.120914   18444 machine.go:88] provisioning docker machine ...
	* I0310 21:12:56.121045   18444 ubuntu.go:169] provisioning hostname "embed-certs-20210310205017-6496"
	* I0310 21:12:56.131918   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	* I0310 21:12:53.779092   22316 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	* I0310 21:12:53.780765   22316 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	* I0310 21:12:53.780765   22316 ubuntu.go:177] setting up certificates
	* I0310 21:12:53.781192   22316 provision.go:83] configureAuth start
	* I0310 21:12:53.805343   22316 cli_runner.go:115] Run: docker container inspect -f "" false-20210310211211-6496
	* I0310 21:12:54.440272   22316 provision.go:137] copyHostCerts
	* I0310 21:12:54.440272   22316 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	* I0310 21:12:54.440798   22316 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	* I0310 21:12:54.441178   22316 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	* I0310 21:12:54.445616   22316 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	* I0310 21:12:54.445616   22316 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	* I0310 21:12:54.446282   22316 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	* I0310 21:12:54.449747   22316 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	* I0310 21:12:54.449987   22316 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	* I0310 21:12:54.450644   22316 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	* I0310 21:12:54.455779   22316 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.false-20210310211211-6496 san=[172.17.0.8 127.0.0.1 localhost 127.0.0.1 minikube false-20210310211211-6496]
	* I0310 21:12:54.748978   22316 provision.go:165] copyRemoteCerts
	* I0310 21:12:54.766034   22316 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	* I0310 21:12:54.774187   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	* I0310 21:12:55.399153   22316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55178 SSHKeyPath:C:\Users\jenkins\.minikube\machines\false-20210310211211-6496\id_rsa Username:docker}
	* I0310 21:12:56.240137   22316 ssh_runner.go:189] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.4741049s)
	* I0310 21:12:56.240137   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	* I0310 21:12:56.735057   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1249 bytes)
	* I0310 21:12:53.395822   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.662745s)
	* I0310 21:12:53.396719   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.485017   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.6713142s)
	* I0310 21:12:53.485539   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.496519   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.7629602s)
	* I0310 21:12:53.496793   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.512111   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.6511935s)
	* I0310 21:12:53.512111   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.541713   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.7209444s)
	* I0310 21:12:53.541952   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.590524   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8569656s)
	* I0310 21:12:53.590700   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.621272   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.7934655s)
	* I0310 21:12:53.621687   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.642607   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.664234s)
	* I0310 21:12:53.642607   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.668750   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8003122s)
	* I0310 21:12:53.668963   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.688746   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.9810395s)
	* I0310 21:12:53.689097   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.758147   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8047789s)
	* I0310 21:12:53.758745   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.759520   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8398693s)
	* I0310 21:12:53.760251   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.770222   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8401715s)
	* I0310 21:12:53.770667   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.785284   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8990014s)
	* I0310 21:12:53.785776   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.817210   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8868726s)
	* I0310 21:12:53.817447   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.818574   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0728419s)
	* I0310 21:12:53.818919   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.843754   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.13536s)
	* I0310 21:12:53.844110   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.883965   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.1750717s)
	* I0310 21:12:53.884398   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.958209   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0084306s)
	* I0310 21:12:53.959831   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0089094s)
	* I0310 21:12:53.959831   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.960273   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.966133   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0137022s)
	* I0310 21:12:53.968446   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.994078   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0407109s)
	* I0310 21:12:53.994078   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:54.009966   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0316693s)
	* I0310 21:12:54.009966   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.1018557s)
	* I0310 21:12:54.010638   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:54.010638   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:53.272827   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:12:56.598226   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:12:56.741795   18444 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:12:56.743328   18444 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55183 <nil> <nil>}
	* I0310 21:12:56.743564   18444 main.go:121] libmachine: About to run SSH command:
	* sudo hostname embed-certs-20210310205017-6496 && echo "embed-certs-20210310205017-6496" | sudo tee /etc/hostname
	* I0310 21:12:56.757510   18444 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	* I0310 21:12:59.783456   18444 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	* I0310 21:12:57.353881   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	* I0310 21:12:58.233074   22316 provision.go:86] duration metric: configureAuth took 4.4518884s
	* I0310 21:12:58.233074   22316 ubuntu.go:193] setting minikube options for container-runtime
	* I0310 21:12:58.267262   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	* I0310 21:12:59.006727   22316 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:12:59.007814   22316 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55178 <nil> <nil>}
	* I0310 21:12:59.007814   22316 main.go:121] libmachine: About to run SSH command:
	* df --output=fstype / | tail -n 1
	* I0310 21:13:00.406956   22316 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	* 
	* I0310 21:13:00.406956   22316 ubuntu.go:71] root file system type: overlay
	* I0310 21:13:00.407166   22316 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	* I0310 21:13:00.416655   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	* I0310 21:13:01.049891   22316 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:13:01.050764   22316 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55178 <nil> <nil>}
	* I0310 21:13:01.050764   22316 main.go:121] libmachine: About to run SSH command:
	* sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP \$MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* " | sudo tee /lib/systemd/system/docker.service.new
	* I0310 21:12:57.361413   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210120175851-7432" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 21:12:57.361612   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210303214129-4588" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 21:12:57.361612   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210303214129-4588 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588
	* I0310 21:12:57.361612   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588
	* I0310 21:12:57.361612   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210308233820-5396" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 21:12:57.361612   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210304002630-1156" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 21:12:57.361808   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210119220838-6552" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 21:12:57.361808   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210119220838-6552 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552
	* I0310 21:12:57.361612   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210308233820-5396 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396
	* I0310 21:12:57.361808   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396
	* I0310 21:12:57.361808   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210309234032-4944" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 21:12:57.361808   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210309234032-4944 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944
	* I0310 21:12:57.361808   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944
	* I0310 21:12:57.361808   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210306072141-12056" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 21:12:57.362006   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210306072141-12056 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056
	* I0310 21:12:57.362006   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056
	* I0310 21:12:57.361612   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120175851-7432 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432
	* I0310 21:12:57.362270   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432
	* I0310 21:12:57.361808   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304002630-1156 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156
	* I0310 21:12:57.362625   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156
	* I0310 21:12:57.361612   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210220004129-7452" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 21:12:57.363064   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210220004129-7452 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452
	* I0310 21:12:57.363064   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452
	* I0310 21:12:57.361413   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210213143925-7440" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 21:12:57.363337   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210213143925-7440 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440
	* I0310 21:12:57.361612   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210304184021-4052" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 21:12:57.361808   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552
	* I0310 21:12:57.364314   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440
	* I0310 21:12:57.364462   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304184021-4052 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052
	* I0310 21:12:57.367708   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052
	* I0310 21:12:57.466731   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396
	* I0310 21:12:57.476748   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:57.519791   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588
	* I0310 21:12:57.540705   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:57.552653   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056
	* I0310 21:12:57.560773   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452
	* I0310 21:12:57.562999   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:57.563922   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156
	* I0310 21:12:57.566901   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944
	* I0310 21:12:57.579023   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:57.579023   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432
	* I0310 21:12:57.590099   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:57.591785   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440
	* I0310 21:12:57.592789   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552
	* I0310 21:12:57.596786   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052
	* I0310 21:12:57.597202   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:57.601772   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:57.608041   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:57.617147   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:57.625602   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:12:58.454521   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:58.501263   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0245158s)
	* I0310 21:12:58.501555   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:58.513022   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:58.563355   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0226521s)
	* I0310 21:12:58.563355   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:58.637870   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0294542s)
	* I0310 21:12:58.638141   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:58.653923   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0281021s)
	* I0310 21:12:58.653923   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:58.679155   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0890575s)
	* I0310 21:12:58.679155   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:58.686201   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0690557s)
	* I0310 21:12:58.687282   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:58.753102   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.1513309s)
	* I0310 21:12:58.753102   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.1559017s)
	* I0310 21:12:58.753709   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:58.753709   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:12:57.839214   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:12:59.277594   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:00.804139   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:01.868486   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:02.798933   18444 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	* I0310 21:13:02.963365   22316 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP $MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* 
	* I0310 21:13:02.971839   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	* I0310 21:13:03.615956   22316 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:13:03.617620   22316 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55178 <nil> <nil>}
	* I0310 21:13:03.617911   22316 main.go:121] libmachine: About to run SSH command:
	* sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	* W0310 21:13:04.744994   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 21:13:04.744994   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736: NewSession: new client: new client: ssh: handshake failed: EOF
	* I0310 21:13:04.745224   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736 (4096 bytes)
	* I0310 21:13:04.758785   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* W0310 21:13:05.269849   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 21:13:05.269849   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496: NewSession: new client: new client: ssh: handshake failed: EOF
	* I0310 21:13:05.269849   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496 (4096 bytes)
	* I0310 21:13:05.285833   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:13:05.377486   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:13:05.926683   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:13:03.762450   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:05.609769   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:07.699476   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:07.819194   18444 main.go:121] libmachine: SSH cmd err, output: <nil>: embed-certs-20210310205017-6496
	* 
	* I0310 21:13:07.838772   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	* I0310 21:13:08.438721   18444 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:13:08.439026   18444 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55183 <nil> <nil>}
	* I0310 21:13:08.439026   18444 main.go:121] libmachine: About to run SSH command:
	* 
	* 		if ! grep -xq '.*\sembed-certs-20210310205017-6496' /etc/hosts; then
	* 			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	* 				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20210310205017-6496/g' /etc/hosts;
	* 			else 
	* 				echo '127.0.1.1 embed-certs-20210310205017-6496' | sudo tee -a /etc/hosts; 
	* 			fi
	* 		fi
	* I0310 21:13:09.846781   18444 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	* I0310 21:13:09.847029   18444 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	* I0310 21:13:09.847029   18444 ubuntu.go:177] setting up certificates
	* I0310 21:13:09.847029   18444 provision.go:83] configureAuth start
	* I0310 21:13:09.857215   18444 cli_runner.go:115] Run: docker container inspect -f "" embed-certs-20210310205017-6496
	* I0310 21:13:10.524620   18444 provision.go:137] copyHostCerts
	* I0310 21:13:10.525147   18444 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	* I0310 21:13:10.525431   18444 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	* I0310 21:13:10.525817   18444 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	* I0310 21:13:10.531398   18444 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	* I0310 21:13:10.531398   18444 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	* I0310 21:13:10.531945   18444 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	* I0310 21:13:10.538062   18444 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	* I0310 21:13:10.538062   18444 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	* I0310 21:13:10.539306   18444 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	* I0310 21:13:10.542016   18444 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.embed-certs-20210310205017-6496 san=[192.168.49.97 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20210310205017-6496]
	* I0310 21:13:10.734235   18444 provision.go:165] copyRemoteCerts
	* I0310 21:13:10.742139   18444 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	* I0310 21:13:10.751385   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	* I0310 21:13:11.337830   18444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55183 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	* W0310 21:13:10.795233   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 21:13:09.215487   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:10.222082   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:11.664728   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:12.461526   18444 ssh_runner.go:189] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.7193887s)
	* I0310 21:13:12.462290   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	* I0310 21:13:13.480520   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1265 bytes)
	* I0310 21:13:14.824810   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	* I0310 21:13:16.109429   18444 provision.go:86] duration metric: configureAuth took 6.2624095s
	* I0310 21:13:16.109429   18444 ubuntu.go:193] setting minikube options for container-runtime
	* I0310 21:13:16.120584   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	* W0310 21:13:15.085213   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 21:13:15.085485   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232: NewSession: new client: new client: ssh: handshake failed: EOF
	* I0310 21:13:15.085915   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232 (4096 bytes)
	* I0310 21:13:15.101929   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:13:15.738383   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:13:14.018661   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:15.188555   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:16.713288   18444 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:13:16.714294   18444 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55183 <nil> <nil>}
	* I0310 21:13:16.714294   18444 main.go:121] libmachine: About to run SSH command:
	* df --output=fstype / | tail -n 1
	* I0310 21:13:18.734749   18444 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	* 
	* I0310 21:13:18.734749   18444 ubuntu.go:71] root file system type: overlay
	* I0310 21:13:18.735219   18444 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	* I0310 21:13:18.742632   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	* I0310 21:13:19.364241   18444 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:13:19.364241   18444 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55183 <nil> <nil>}
	* I0310 21:13:19.365793   18444 main.go:121] libmachine: About to run SSH command:
	* sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP \$MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* " | sudo tee /lib/systemd/system/docker.service.new
	* I0310 21:13:21.300423   18444 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP $MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* 
	* I0310 21:13:21.308894   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	* W0310 21:13:18.491146   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 21:13:18.491490   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140: NewSession: new client: new client: ssh: handshake failed: EOF
	* I0310 21:13:18.493152   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140 (4096 bytes)
	* I0310 21:13:18.512811   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:13:19.150277   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* W0310 21:13:19.460446   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 21:13:19.460446   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516: NewSession: new client: new client: ssh: handshake failed: EOF
	* I0310 21:13:19.461481   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516 (4096 bytes)
	* I0310 21:13:19.466948   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:13:20.136532   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:13:19.310940   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:20.542067   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:21.744034   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:21.934752   18444 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:13:21.935488   18444 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55183 <nil> <nil>}
	* I0310 21:13:21.935808   18444 main.go:121] libmachine: About to run SSH command:
	* sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	* I0310 21:13:24.022228   18444 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	* I0310 21:13:24.022499   18444 machine.go:91] provisioned docker machine in 27.9014928s
	* I0310 21:13:24.022499   18444 start.go:267] post-start starting for "embed-certs-20210310205017-6496" (driver="docker")
	* I0310 21:13:24.022499   18444 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	* I0310 21:13:24.025597   18444 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	* I0310 21:13:24.039903   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	* I0310 21:13:24.692780   18444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55183 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	* I0310 21:13:25.877491   18444 ssh_runner.go:189] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.8518959s)
	* I0310 21:13:25.894216   18444 ssh_runner.go:149] Run: cat /etc/os-release
	* I0310 21:13:26.002177   18444 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	* I0310 21:13:26.002579   18444 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	* I0310 21:13:26.002579   18444 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	* I0310 21:13:26.002579   18444 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	* I0310 21:13:26.002861   18444 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	* I0310 21:13:26.003528   18444 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	* I0310 21:13:26.007088   18444 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	* I0310 21:13:26.007611   18444 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	* I0310 21:13:26.020549   18444 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	* I0310 21:13:26.333222   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	* I0310 21:13:25.623756   22316 main.go:121] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-01-29 14:31:32.000000000 +0000
	* +++ /lib/systemd/system/docker.service.new	2021-03-10 21:13:02.944722000 +0000
	* @@ -1,30 +1,32 @@
	*  [Unit]
	*  Description=Docker Application Container Engine
	*  Documentation=https://docs.docker.com
	* +BindsTo=containerd.service
	*  After=network-online.target firewalld.service containerd.service
	*  Wants=network-online.target
	* -Requires=docker.socket containerd.service
	* +Requires=docker.socket
	* +StartLimitBurst=3
	* +StartLimitIntervalSec=60
	*  
	*  [Service]
	*  Type=notify
	* -# the default is not to use systemd for cgroups because the delegate issues still
	* -# exists and systemd currently does not support the cgroup feature set required
	* -# for containers run by docker
	* -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	* -ExecReload=/bin/kill -s HUP $MAINPID
	* -TimeoutSec=0
	* -RestartSec=2
	* -Restart=always
	* -
	* -# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	* -# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	* -# to make them work for either version of systemd.
	* -StartLimitBurst=3
	* +Restart=on-failure
	*  
	* -# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	* -# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	* -# this option work for either version of systemd.
	* -StartLimitInterval=60s
	* +
	* +
	* +# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* +# The base configuration already specifies an 'ExecStart=...' command. The first directive
	* +# here is to clear out that command inherited from the base configuration. Without this,
	* +# the command from the base configuration and the command specified here are treated as
	* +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* +# will catch this invalid input and refuse to start the service with an error like:
	* +#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* +
	* +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* +# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* +ExecStart=
	* +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* +ExecReload=/bin/kill -s HUP $MAINPID
	*  
	*  # Having non-zero Limit*s causes performance problems due to accounting overhead
	*  # in the kernel. We recommend using cgroups to do container-local accounting.
	* @@ -32,16 +34,16 @@
	*  LimitNPROC=infinity
	*  LimitCORE=infinity
	*  
	* -# Comment TasksMax if your systemd version does not support it.
	* -# Only systemd 226 and above support this option.
	* +# Uncomment TasksMax if your systemd version supports it.
	* +# Only systemd 226 and above support this version.
	*  TasksMax=infinity
	* +TimeoutStartSec=0
	*  
	*  # set delegate yes so that systemd does not reset the cgroups of docker containers
	*  Delegate=yes
	*  
	*  # kill only the docker process, not all processes in the cgroup
	*  KillMode=process
	* -OOMScoreAdjust=-500
	*  
	*  [Install]
	*  WantedBy=multi-user.target
	* Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	* Executing: /lib/systemd/systemd-sysv-install enable docker
	* 
	* I0310 21:13:25.624097   22316 machine.go:91] provisioned docker machine in 41.5217304s
	* I0310 21:13:25.624097   22316 client.go:171] LocalClient.Create took 1m8.7849166s
	* I0310 21:13:25.624097   22316 start.go:168] duration metric: libmachine.API.Create for "false-20210310211211-6496" took 1m8.7853333s
	* I0310 21:13:25.624097   22316 start.go:267] post-start starting for "false-20210310211211-6496" (driver="docker")
	* I0310 21:13:25.624097   22316 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	* I0310 21:13:25.634099   22316 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	* I0310 21:13:25.645133   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	* I0310 21:13:26.319049   22316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55178 SSHKeyPath:C:\Users\jenkins\.minikube\machines\false-20210310211211-6496\id_rsa Username:docker}
	* W0310 21:13:22.320407   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 21:13:22.320807   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172: NewSession: new client: new client: ssh: handshake failed: EOF
	* I0310 21:13:22.321052   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172 (4096 bytes)
	* I0310 21:13:22.321052   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* W0310 21:13:22.804709   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 21:13:22.804709   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056: NewSession: new client: new client: ssh: handshake failed: EOF
	* I0310 21:13:22.804709   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056 (4096 bytes)
	* I0310 21:13:22.814336   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:13:23.012763   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:13:23.412759   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* W0310 21:13:25.084062   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 21:13:25.084219   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156: NewSession: new client: new client: ssh: handshake failed: EOF
	* I0310 21:13:25.084994   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156 (4096 bytes)
	* I0310 21:13:25.102916   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* W0310 21:13:25.205188   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 21:13:25.205188   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452: NewSession: new client: new client: ssh: handshake failed: EOF
	* I0310 21:13:25.205631   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452 (4096 bytes)
	* W0310 21:13:25.207930   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 21:13:25.207930   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432: NewSession: new client: new client: ssh: handshake failed: EOF
	* I0310 21:13:25.207930   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432 (4096 bytes)
	* I0310 21:13:25.214321   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:13:25.217320   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	* I0310 21:13:25.840746   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:13:25.953687   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* I0310 21:13:25.974626   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	* W0310 21:13:26.192805   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* W0310 21:13:26.864421   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 21:13:27.427348   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:27.063958   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	* I0310 21:13:27.872391   18444 start.go:270] post-start completed in 3.8495904s
	* I0310 21:13:27.886786   18444 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	* I0310 21:13:27.894704   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	* I0310 21:13:28.514187   18444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55183 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	* I0310 21:13:29.370055   18444 ssh_runner.go:189] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.4830372s)
	* I0310 21:13:29.370219   18444 fix.go:57] fixHost completed within 42.7459052s
	* I0310 21:13:29.370219   18444 start.go:80] releasing machines lock for "embed-certs-20210310205017-6496", held for 42.7462779s
	* I0310 21:13:29.388654   18444 cli_runner.go:115] Run: docker container inspect -f "" embed-certs-20210310205017-6496
	* I0310 21:13:30.092636   18444 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	* I0310 21:13:30.096865   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	* I0310 21:13:30.096865   18444 ssh_runner.go:149] Run: systemctl --version
	* I0310 21:13:30.111612   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	* I0310 21:13:30.809856   18444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55183 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	* I0310 21:13:30.868547   18444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55183 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	* I0310 21:13:26.965777   22316 ssh_runner.go:189] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.3316798s)
	* I0310 21:13:26.979571   22316 ssh_runner.go:149] Run: cat /etc/os-release
	* I0310 21:13:27.041034   22316 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	* I0310 21:13:27.041595   22316 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	* I0310 21:13:27.041595   22316 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	* I0310 21:13:27.041595   22316 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	* I0310 21:13:27.041595   22316 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	* I0310 21:13:27.042315   22316 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	* I0310 21:13:27.044801   22316 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	* I0310 21:13:27.045903   22316 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	* I0310 21:13:27.058807   22316 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	* I0310 21:13:27.282537   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	* I0310 21:13:27.708690   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	* I0310 21:13:28.446877   22316 start.go:270] post-start completed in 2.8227839s
	* I0310 21:13:28.512700   22316 cli_runner.go:115] Run: docker container inspect -f "" false-20210310211211-6496
	* I0310 21:13:29.190137   22316 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\false-20210310211211-6496\config.json ...
	* I0310 21:13:29.242845   22316 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	* I0310 21:13:29.248980   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	* I0310 21:13:29.940534   22316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55178 SSHKeyPath:C:\Users\jenkins\.minikube\machines\false-20210310211211-6496\id_rsa Username:docker}
	* I0310 21:13:30.418039   22316 ssh_runner.go:189] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.1751962s)
	* I0310 21:13:30.418512   22316 start.go:129] duration metric: createHost completed in 1m13.5837487s
	* I0310 21:13:30.418512   22316 start.go:80] releasing machines lock for "false-20210310211211-6496", held for 1m13.5848074s
	* I0310 21:13:30.429055   22316 cli_runner.go:115] Run: docker container inspect -f "" false-20210310211211-6496
	* I0310 21:13:31.056336   22316 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	* I0310 21:13:31.069740   22316 ssh_runner.go:149] Run: systemctl --version
	* I0310 21:13:31.070658   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	* I0310 21:13:31.078759   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	* I0310 21:13:31.725436   22316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55178 SSHKeyPath:C:\Users\jenkins\.minikube\machines\false-20210310211211-6496\id_rsa Username:docker}
	* I0310 21:13:31.782793   22316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55178 SSHKeyPath:C:\Users\jenkins\.minikube\machines\false-20210310211211-6496\id_rsa Username:docker}
	* W0310 21:13:30.436709   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* W0310 21:13:30.612292   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	* I0310 21:13:32.379163   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:33.006142   18444 ssh_runner.go:189] Completed: systemctl --version: (2.9092807s)
	* I0310 21:13:33.006142   18444 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (2.9135095s)
	* I0310 21:13:33.024806   18444 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	* I0310 21:13:33.414467   18444 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	* I0310 21:13:33.678401   18444 cruntime.go:206] skipping containerd shutdown because we are bound to it
	* I0310 21:13:33.690511   18444 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	* I0310 21:13:33.986977   18444 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	* image-endpoint: unix:///var/run/dockershim.sock
	* " | sudo tee /etc/crictl.yaml"
	* I0310 21:13:34.796782   18444 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	* I0310 21:13:35.098284   18444 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	* I0310 21:13:32.747305   22316 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.6907746s)
	* I0310 21:13:32.752679   22316 ssh_runner.go:189] Completed: systemctl --version: (1.6775676s)
	* I0310 21:13:32.761954   22316 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	* I0310 21:13:33.006142   22316 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	* I0310 21:13:33.313909   22316 cruntime.go:206] skipping containerd shutdown because we are bound to it
	* I0310 21:13:33.324754   22316 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	* I0310 21:13:33.500636   22316 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	* image-endpoint: unix:///var/run/dockershim.sock
	* " | sudo tee /etc/crictl.yaml"
	* I0310 21:13:33.933231   22316 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	* I0310 21:13:34.261595   22316 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	* I0310 21:13:36.141117   22316 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.879256s)
	* I0310 21:13:36.146867   22316 ssh_runner.go:149] Run: sudo systemctl start docker
	* I0310 21:13:36.334513   22316 ssh_runner.go:149] Run: docker version --format 
	* I0310 21:13:37.715953   22316 ssh_runner.go:189] Completed: docker version --format : (1.3814424s)
	* I0310 21:13:33.678057   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:36.811079   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:38.914658   18444 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (3.8157861s)
	* I0310 21:13:38.923160   18444 ssh_runner.go:149] Run: sudo systemctl start docker
	* I0310 21:13:37.720657   22316 out.go:150] * Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	* I0310 21:13:37.728327   22316 cli_runner.go:115] Run: docker exec -t false-20210310211211-6496 dig +short host.docker.internal
	* I0310 21:13:39.192283   22316 cli_runner.go:168] Completed: docker exec -t false-20210310211211-6496 dig +short host.docker.internal: (1.4637323s)
	* I0310 21:13:39.192283   22316 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	* I0310 21:13:39.207998   22316 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	* I0310 21:13:39.285432   22316 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	* I0310 21:13:39.488453   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" false-20210310211211-6496
	* I0310 21:13:40.124884   22316 localpath.go:92] copying C:\Users\jenkins\.minikube\client.crt -> C:\Users\jenkins\.minikube\profiles\false-20210310211211-6496\client.crt
	* I0310 21:13:40.135955   22316 localpath.go:117] copying C:\Users\jenkins\.minikube\client.key -> C:\Users\jenkins\.minikube\profiles\false-20210310211211-6496\client.key
	* I0310 21:13:40.135955   22316 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 21:13:40.135955   22316 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 21:13:40.149276   22316 ssh_runner.go:149] Run: docker images --format :
	* I0310 21:13:40.871022   22316 docker.go:423] Got preloaded images: 
	* I0310 21:13:40.871022   22316 docker.go:429] k8s.gcr.io/kube-proxy:v1.20.2 wasn't preloaded
	* I0310 21:13:40.887557   22316 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	* I0310 21:13:41.059625   22316 ssh_runner.go:149] Run: which lz4
	* I0310 21:13:41.168560   22316 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	* I0310 21:13:41.235262   22316 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	* I0310 21:13:41.235667   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (515083977 bytes)
	* I0310 21:13:42.113288   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:42.865353   19328 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1m16.9669011s)
	* I0310 21:13:42.868563   19328 logs.go:122] Gathering logs for kube-apiserver [ba5aace99e81] ...
	* I0310 21:13:42.868563   19328 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 ba5aace99e81"
	* I0310 21:13:43.191321   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:44.682198   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:45.813059   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:47.104739   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:48.481166   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:50.478535   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:52.097972   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:53.167550   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:54.660908   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:56.043628   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:57.390515   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:13:55.365896   12868 docker.go:388] Took 86.535985 seconds to copy over tarball
	* I0310 21:13:55.386715   12868 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	* I0310 21:13:58.421032   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:14:00.744278   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:14:01.907357   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:14:04.360578   18444 ssh_runner.go:189] Completed: sudo systemctl start docker: (25.4374535s)
	* I0310 21:14:04.377902   18444 ssh_runner.go:149] Run: docker version --format 
	* I0310 21:14:06.781861   18444 ssh_runner.go:189] Completed: docker version --format : (2.4039621s)
	* I0310 21:14:02.672034   19328 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 ba5aace99e81": (19.8034991s)
	* I0310 21:14:02.704779   19328 logs.go:122] Gathering logs for container status ...
	* I0310 21:14:02.704779   19328 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	* I0310 21:14:03.282501   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:14:04.519470   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:14:06.185863   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:14:07.609393   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:14:06.792543   18444 out.go:150] * Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	* I0310 21:14:06.800238   18444 cli_runner.go:115] Run: docker exec -t embed-certs-20210310205017-6496 dig +short host.docker.internal
	* I0310 21:14:08.636285   18444 cli_runner.go:168] Completed: docker exec -t embed-certs-20210310205017-6496 dig +short host.docker.internal: (1.8358704s)
	* I0310 21:14:08.636451   18444 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	* I0310 21:14:08.655564   18444 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	* I0310 21:14:08.698946   18444 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	* I0310 21:14:08.879534   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	* I0310 21:14:09.472845   18444 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 21:14:09.473124   18444 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 21:14:09.480641   18444 ssh_runner.go:149] Run: docker images --format :
	* I0310 21:14:11.047506   18444 ssh_runner.go:189] Completed: docker images --format :: (1.5668664s)
	* I0310 21:14:11.047979   18444 docker.go:423] Got preloaded images: -- stdout --
	* k8s.gcr.io/kube-proxy:v1.20.2
	* k8s.gcr.io/kube-controller-manager:v1.20.2
	* k8s.gcr.io/kube-apiserver:v1.20.2
	* k8s.gcr.io/kube-scheduler:v1.20.2
	* minikube-local-cache-test:functional-20210120214442-10992
	* kubernetesui/dashboard:v2.1.0
	* gcr.io/k8s-minikube/storage-provisioner:v4
	* k8s.gcr.io/etcd:3.4.13-0
	* k8s.gcr.io/coredns:1.7.0
	* kubernetesui/metrics-scraper:v1.0.4
	* k8s.gcr.io/pause:3.2
	* busybox:1.28.4-glibc
	* 
	* -- /stdout --
	* I0310 21:14:11.047979   18444 docker.go:360] Images already preloaded, skipping extraction
	* I0310 21:14:11.054361   18444 ssh_runner.go:149] Run: docker images --format :
	* I0310 21:14:08.728659   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:14:10.605359   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:14:11.775469   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:14:12.426908   18444 ssh_runner.go:189] Completed: docker images --format :: (1.3725485s)
	* I0310 21:14:12.426908   18444 docker.go:423] Got preloaded images: -- stdout --
	* k8s.gcr.io/kube-proxy:v1.20.2
	* k8s.gcr.io/kube-apiserver:v1.20.2
	* k8s.gcr.io/kube-controller-manager:v1.20.2
	* k8s.gcr.io/kube-scheduler:v1.20.2
	* minikube-local-cache-test:functional-20210120214442-10992
	* kubernetesui/dashboard:v2.1.0
	* gcr.io/k8s-minikube/storage-provisioner:v4
	* k8s.gcr.io/etcd:3.4.13-0
	* k8s.gcr.io/coredns:1.7.0
	* kubernetesui/metrics-scraper:v1.0.4
	* k8s.gcr.io/pause:3.2
	* busybox:1.28.4-glibc
	* 
	* -- /stdout --
	* I0310 21:14:12.427322   18444 cache_images.go:73] Images are preloaded, skipping loading
	* I0310 21:14:12.443022   18444 ssh_runner.go:149] Run: docker info --format 
	* I0310 21:14:15.489095   18444 ssh_runner.go:189] Completed: docker info --format : (3.0460767s)
	* I0310 21:14:15.489563   18444 cni.go:74] Creating CNI manager for ""
	* I0310 21:14:15.489563   18444 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	* I0310 21:14:15.489563   18444 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	* I0310 21:14:15.489563   18444 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.97 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20210310205017-6496 NodeName:embed-certs-20210310205017-6496 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.97 CgroupDriver:cgroupfs ClientCAF
ile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	* I0310 21:14:15.490025   18444 kubeadm.go:154] kubeadm config:
	* apiVersion: kubeadm.k8s.io/v1beta2
	* kind: InitConfiguration
	* localAPIEndpoint:
	*   advertiseAddress: 192.168.49.97
	*   bindPort: 8443
	* bootstrapTokens:
	*   - groups:
	*       - system:bootstrappers:kubeadm:default-node-token
	*     ttl: 24h0m0s
	*     usages:
	*       - signing
	*       - authentication
	* nodeRegistration:
	*   criSocket: /var/run/dockershim.sock
	*   name: "embed-certs-20210310205017-6496"
	*   kubeletExtraArgs:
	*     node-ip: 192.168.49.97
	*   taints: []
	* ---
	* apiVersion: kubeadm.k8s.io/v1beta2
	* kind: ClusterConfiguration
	* apiServer:
	*   certSANs: ["127.0.0.1", "localhost", "192.168.49.97"]
	*   extraArgs:
	*     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	* controllerManager:
	*   extraArgs:
	*     allocate-node-cidrs: "true"
	*     leader-elect: "false"
	* scheduler:
	*   extraArgs:
	*     leader-elect: "false"
	* certificatesDir: /var/lib/minikube/certs
	* clusterName: mk
	* controlPlaneEndpoint: control-plane.minikube.internal:8443
	* dns:
	*   type: CoreDNS
	* etcd:
	*   local:
	*     dataDir: /var/lib/minikube/etcd
	*     extraArgs:
	*       proxy-refresh-interval: "70000"
	* kubernetesVersion: v1.20.2
	* networking:
	*   dnsDomain: cluster.local
	*   podSubnet: "10.244.0.0/16"
	*   serviceSubnet: 10.96.0.0/12
	* ---
	* apiVersion: kubelet.config.k8s.io/v1beta1
	* kind: KubeletConfiguration
	* authentication:
	*   x509:
	*     clientCAFile: /var/lib/minikube/certs/ca.crt
	* cgroupDriver: cgroupfs
	* clusterDomain: "cluster.local"
	* # disable disk resource management by default
	* imageGCHighThresholdPercent: 100
	* evictionHard:
	*   nodefs.available: "0%"
	*   nodefs.inodesFree: "0%"
	*   imagefs.available: "0%"
	* failSwapOn: false
	* staticPodPath: /etc/kubernetes/manifests
	* ---
	* apiVersion: kubeproxy.config.k8s.io/v1alpha1
	* kind: KubeProxyConfiguration
	* clusterCIDR: "10.244.0.0/16"
	* metricsBindAddress: 0.0.0.0:10249
	* 
	* I0310 21:14:15.490025   18444 kubeadm.go:919] kubelet [Unit]
	* Wants=docker.socket
	* 
	* [Service]
	* ExecStart=
	* ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=embed-certs-20210310205017-6496 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.97
	* 
	* [Install]
	*  config:
	* {KubernetesVersion:v1.20.2 ClusterName:embed-certs-20210310205017-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	* I0310 21:14:15.500645   18444 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
	* I0310 21:14:15.750709   18444 binaries.go:44] Found k8s binaries, skipping transfer
	* I0310 21:14:15.760620   18444 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	* I0310 21:14:15.892420   18444 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (358 bytes)
	* I0310 21:14:16.255217   18444 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	* I0310 21:14:16.515128   18444 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1866 bytes)
	* I0310 21:14:15.768923   19328 ssh_runner.go:189] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (13.0641632s)
	* I0310 21:14:15.769623   19328 logs.go:122] Gathering logs for kubelet ...
	* I0310 21:14:15.769623   19328 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	* I0310 21:14:13.031040   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:14:14.854875   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:14:16.109061   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:11:53 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:14:16.936554   18444 ssh_runner.go:149] Run: grep 192.168.49.97	control-plane.minikube.internal$ /etc/hosts
	* I0310 21:14:17.051613   18444 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "192.168.49.97	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	* I0310 21:14:17.249339   18444 certs.go:52] Setting up C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496 for IP: 192.168.49.97
	* I0310 21:14:17.250054   18444 certs.go:171] skipping minikubeCA CA generation: C:\Users\jenkins\.minikube\ca.key
	* I0310 21:14:17.250374   18444 certs.go:171] skipping proxyClientCA CA generation: C:\Users\jenkins\.minikube\proxy-client-ca.key
	* I0310 21:14:17.251142   18444 certs.go:275] skipping minikube-user signed cert generation: C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\client.key
	* I0310 21:14:17.251452   18444 certs.go:275] skipping minikube signed cert generation: C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\apiserver.key.b6188fac
	* I0310 21:14:17.251761   18444 certs.go:275] skipping aggregator signed cert generation: C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\proxy-client.key
	* I0310 21:14:17.253727   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992.pem (1338 bytes)
	* W0310 21:14:17.254281   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.254457   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140.pem (1338 bytes)
	* W0310 21:14:17.254818   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.254818   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156.pem (1338 bytes)
	* W0310 21:14:17.255513   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.255694   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056.pem (1338 bytes)
	* W0310 21:14:17.255953   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.256240   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476.pem (1338 bytes)
	* W0310 21:14:17.256649   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.256874   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728.pem (1338 bytes)
	* W0310 21:14:17.257184   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.257184   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984.pem (1338 bytes)
	* W0310 21:14:17.257607   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.257607   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232.pem (1338 bytes)
	* W0310 21:14:17.258151   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.258281   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512.pem (1338 bytes)
	* W0310 21:14:17.258570   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.258570   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056.pem (1338 bytes)
	* W0310 21:14:17.259035   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.259035   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516.pem (1338 bytes)
	* W0310 21:14:17.259503   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.259503   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352.pem (1338 bytes)
	* W0310 21:14:17.260201   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.260416   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920.pem (1338 bytes)
	* W0310 21:14:17.260745   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.260745   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052.pem (1338 bytes)
	* W0310 21:14:17.261286   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.261365   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452.pem (1338 bytes)
	* W0310 21:14:17.261697   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.261972   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588.pem (1338 bytes)
	* W0310 21:14:17.262248   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.262524   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944.pem (1338 bytes)
	* W0310 21:14:17.262881   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.262881   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040.pem (1338 bytes)
	* W0310 21:14:17.262881   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.264568   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172.pem (1338 bytes)
	* W0310 21:14:17.264932   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.264932   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372.pem (1338 bytes)
	* W0310 21:14:17.265561   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.265909   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396.pem (1338 bytes)
	* W0310 21:14:17.265909   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.265909   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700.pem (1338 bytes)
	* W0310 21:14:17.277114   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.277114   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736.pem (1338 bytes)
	* W0310 21:14:17.277440   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.277830   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368.pem (1338 bytes)
	* W0310 21:14:17.278449   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.278674   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492.pem (1338 bytes)
	* W0310 21:14:17.278997   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.278997   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496.pem (1338 bytes)
	* W0310 21:14:17.279584   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.279584   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552.pem (1338 bytes)
	* W0310 21:14:17.280544   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.280544   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692.pem (1338 bytes)
	* W0310 21:14:17.281012   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.281450   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856.pem (1338 bytes)
	* W0310 21:14:17.282006   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.282585   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024.pem (1338 bytes)
	* W0310 21:14:17.283107   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.283514   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160.pem (1338 bytes)
	* W0310 21:14:17.284122   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.284334   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432.pem (1338 bytes)
	* W0310 21:14:17.284876   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.284876   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440.pem (1338 bytes)
	* W0310 21:14:17.285871   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.286115   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452.pem (1338 bytes)
	* W0310 21:14:17.286504   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.286730   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800.pem (1338 bytes)
	* W0310 21:14:17.286919   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.287264   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464.pem (1338 bytes)
	* W0310 21:14:17.288137   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.288137   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748.pem (1338 bytes)
	* W0310 21:14:17.288701   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.289113   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088.pem (1338 bytes)
	* W0310 21:14:17.289652   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.289905   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520.pem (1338 bytes)
	* W0310 21:14:17.290450   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520_empty.pem, impossibly tiny 0 bytes
	* I0310 21:14:17.291084   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca-key.pem (1679 bytes)
	* I0310 21:14:17.291910   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca.pem (1078 bytes)
	* I0310 21:14:17.292477   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\cert.pem (1123 bytes)
	* I0310 21:14:17.294042   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\key.pem (1679 bytes)
	* I0310 21:14:17.302799   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	* I0310 21:14:17.611552   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	* I0310 21:14:18.027309   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	* I0310 21:14:18.697813   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	* I0310 21:14:19.297438   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	* I0310 21:14:20.100491   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	* I0310 21:14:20.663291   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	* I0310 21:14:21.082385   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	* I0310 21:14:21.506044   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4452.pem --> /usr/share/ca-certificates/4452.pem (1338 bytes)
	* I0310 21:14:19.601863   19328 ssh_runner.go:189] Completed: /bin/bash -c "sudo journalctl -u kubelet -n 400": (3.8322451s)
	* I0310 21:14:19.666624   19328 logs.go:122] Gathering logs for etcd [81a39b1bd4f1] ...
	* I0310 21:14:19.666624   19328 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 81a39b1bd4f1"
	* I0310 21:14:18.375635   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:11:53 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:14:19.414942   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:11:53 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:14:22.540932   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:11:53 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:14:21.964278   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	* I0310 21:14:22.582588   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6552.pem --> /usr/share/ca-certificates/6552.pem (1338 bytes)
	* I0310 21:14:23.269984   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3920.pem --> /usr/share/ca-certificates/3920.pem (1338 bytes)
	* I0310 21:14:23.804639   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6492.pem --> /usr/share/ca-certificates/6492.pem (1338 bytes)
	* I0310 21:14:24.325048   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7024.pem --> /usr/share/ca-certificates/7024.pem (1338 bytes)
	* I0310 21:14:25.179814   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	* I0310 21:14:25.526443   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\232.pem --> /usr/share/ca-certificates/232.pem (1338 bytes)
	* I0310 21:14:25.846574   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4944.pem --> /usr/share/ca-certificates/4944.pem (1338 bytes)
	* I0310 21:14:25.659175   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:11:53 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:14:27.265850   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:11:53 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	* I0310 21:14:26.544098   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7432.pem --> /usr/share/ca-certificates/7432.pem (1338 bytes)
	* I0310 21:14:27.372232   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1984.pem --> /usr/share/ca-certificates/1984.pem (1338 bytes)
	* I0310 21:14:28.020652   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3516.pem --> /usr/share/ca-certificates/3516.pem (1338 bytes)
	* I0310 21:14:28.569180   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7160.pem --> /usr/share/ca-certificates/7160.pem (1338 bytes)
	* I0310 21:14:29.187113   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\2512.pem --> /usr/share/ca-certificates/2512.pem (1338 bytes)
	* I0310 21:14:29.703522   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8464.pem --> /usr/share/ca-certificates/8464.pem (1338 bytes)
	* I0310 21:14:29.940505   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6368.pem --> /usr/share/ca-certificates/6368.pem (1338 bytes)
	* I0310 21:14:30.330901   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6856.pem --> /usr/share/ca-certificates/6856.pem (1338 bytes)
	* I0310 21:14:30.761712   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1728.pem --> /usr/share/ca-certificates/1728.pem (1338 bytes)
	* I0310 21:14:31.386556   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6496.pem --> /usr/share/ca-certificates/6496.pem (1338 bytes)
	* I0310 21:14:30.236809   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": http2: server sent GOAWAY and closed the connection; LastStreamID=245, ErrCode=NO_ERROR, debug=""
	* I0310 21:14:30.518110   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:31.008764   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:31.515416   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:32.008526   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:32.511618   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:31.743984   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5372.pem --> /usr/share/ca-certificates/5372.pem (1338 bytes)
	* I0310 21:14:32.308527   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5736.pem --> /usr/share/ca-certificates/5736.pem (1338 bytes)
	* I0310 21:14:32.876666   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8748.pem --> /usr/share/ca-certificates/8748.pem (1338 bytes)
	* I0310 21:14:33.393082   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\10992.pem --> /usr/share/ca-certificates/10992.pem (1338 bytes)
	* I0310 21:14:33.869324   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\352.pem --> /usr/share/ca-certificates/352.pem (1338 bytes)
	* I0310 21:14:34.519740   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5396.pem --> /usr/share/ca-certificates/5396.pem (1338 bytes)
	* I0310 21:14:34.897144   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3056.pem --> /usr/share/ca-certificates/3056.pem (1338 bytes)
	* I0310 21:14:35.399582   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5172.pem --> /usr/share/ca-certificates/5172.pem (1338 bytes)
	* I0310 21:14:35.995819   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6692.pem --> /usr/share/ca-certificates/6692.pem (1338 bytes)
	* I0310 21:14:36.971030   19328 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 81a39b1bd4f1": (17.3044298s)
	* I0310 21:14:37.005997   19328 logs.go:122] Gathering logs for kube-scheduler [e63ae4a86183] ...
	* I0310 21:14:37.007012   19328 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 e63ae4a86183"
	* I0310 21:14:33.006554   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:33.508835   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:34.006012   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:34.508808   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:35.008527   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:35.503061   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:36.012759   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:36.507550   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:37.022896   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:37.504077   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:36.567107   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1156.pem --> /usr/share/ca-certificates/1156.pem (1338 bytes)
	* I0310 21:14:37.251245   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1476.pem --> /usr/share/ca-certificates/1476.pem (1338 bytes)
	* I0310 21:14:37.644297   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9088.pem --> /usr/share/ca-certificates/9088.pem (1338 bytes)
	* I0310 21:14:38.203137   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4052.pem --> /usr/share/ca-certificates/4052.pem (1338 bytes)
	* I0310 21:14:38.693835   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4588.pem --> /usr/share/ca-certificates/4588.pem (1338 bytes)
	* I0310 21:14:39.210694   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\12056.pem --> /usr/share/ca-certificates/12056.pem (1338 bytes)
	* I0310 21:14:39.971743   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5040.pem --> /usr/share/ca-certificates/5040.pem (1338 bytes)
	* I0310 21:14:40.475216   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\800.pem --> /usr/share/ca-certificates/800.pem (1338 bytes)
	* I0310 21:14:41.320098   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9520.pem --> /usr/share/ca-certificates/9520.pem (1338 bytes)
	* I0310 21:14:38.005885   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:38.516331   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:39.014402   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:39.512788   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:40.006823   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:40.506866   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:41.011998   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:41.501870   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:42.009052   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:42.508741   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:42.232973   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7440.pem --> /usr/share/ca-certificates/7440.pem (1338 bytes)
	* I0310 21:14:42.908772   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5700.pem --> /usr/share/ca-certificates/5700.pem (1338 bytes)
	* I0310 21:14:43.781716   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1140.pem --> /usr/share/ca-certificates/1140.pem (1338 bytes)
	* I0310 21:14:44.696746   18444 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	* I0310 21:14:45.390443   18444 ssh_runner.go:149] Run: openssl version
	* I0310 21:14:45.482437   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7440.pem && ln -fs /usr/share/ca-certificates/7440.pem /etc/ssl/certs/7440.pem"
	* I0310 21:14:45.651988   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7440.pem
	* I0310 21:14:45.772783   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 13 14:39 /usr/share/ca-certificates/7440.pem
	* I0310 21:14:45.778553   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7440.pem
	* I0310 21:14:45.843480   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7440.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:46.174305   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5700.pem && ln -fs /usr/share/ca-certificates/5700.pem /etc/ssl/certs/5700.pem"
	* I0310 21:14:46.428570   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5700.pem
	* I0310 21:14:43.012072   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:43.511188   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:44.009340   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:44.505571   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:45.007257   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:45.507791   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:46.009858   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:46.523931   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:47.010222   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:47.507744   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:46.576065   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  1 19:58 /usr/share/ca-certificates/5700.pem
	* I0310 21:14:46.593310   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5700.pem
	* I0310 21:14:46.698018   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5700.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:46.848356   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1140.pem && ln -fs /usr/share/ca-certificates/1140.pem /etc/ssl/certs/1140.pem"
	* I0310 21:14:47.115394   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1140.pem
	* I0310 21:14:47.176091   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 02:25 /usr/share/ca-certificates/1140.pem
	* I0310 21:14:47.198899   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1140.pem
	* I0310 21:14:47.340380   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1140.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:47.513949   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4452.pem && ln -fs /usr/share/ca-certificates/4452.pem /etc/ssl/certs/4452.pem"
	* I0310 21:14:47.905928   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4452.pem
	* I0310 21:14:47.951951   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 22 23:48 /usr/share/ca-certificates/4452.pem
	* I0310 21:14:47.961932   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4452.pem
	* I0310 21:14:48.057359   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4452.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:48.269099   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	* I0310 21:14:48.532324   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	* I0310 21:14:48.654474   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Jan  5 23:33 /usr/share/ca-certificates/minikubeCA.pem
	* I0310 21:14:48.663602   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	* I0310 21:14:48.758655   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	* I0310 21:14:49.078686   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6552.pem && ln -fs /usr/share/ca-certificates/6552.pem /etc/ssl/certs/6552.pem"
	* I0310 21:14:49.267316   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6552.pem
	* I0310 21:14:49.349285   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 19 22:08 /usr/share/ca-certificates/6552.pem
	* I0310 21:14:49.357390   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6552.pem
	* I0310 21:14:49.519644   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6552.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:49.651422   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1984.pem && ln -fs /usr/share/ca-certificates/1984.pem /etc/ssl/certs/1984.pem"
	* I0310 21:14:49.804006   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1984.pem
	* I0310 21:14:49.939077   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 21:55 /usr/share/ca-certificates/1984.pem
	* I0310 21:14:49.948504   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1984.pem
	* I0310 21:14:50.028179   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1984.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:50.158520   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3920.pem && ln -fs /usr/share/ca-certificates/3920.pem /etc/ssl/certs/3920.pem"
	* I0310 21:14:50.412166   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3920.pem
	* I0310 21:14:50.462274   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 22:06 /usr/share/ca-certificates/3920.pem
	* I0310 21:14:50.473068   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3920.pem
	* I0310 21:14:50.538722   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3920.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:50.720604   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6492.pem && ln -fs /usr/share/ca-certificates/6492.pem /etc/ssl/certs/6492.pem"
	* I0310 21:14:50.888196   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6492.pem
	* I0310 21:14:50.946236   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 01:11 /usr/share/ca-certificates/6492.pem
	* I0310 21:14:50.957481   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6492.pem
	* I0310 21:14:51.071753   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6492.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:51.194145   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7024.pem && ln -fs /usr/share/ca-certificates/7024.pem /etc/ssl/certs/7024.pem"
	* I0310 21:14:51.332505   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7024.pem
	* I0310 21:14:51.426975   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 23:11 /usr/share/ca-certificates/7024.pem
	* I0310 21:14:51.446515   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7024.pem
	* I0310 21:14:52.195255   19328 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 e63ae4a86183": (15.1882638s)
	* I0310 21:14:52.215689   19328 logs.go:122] Gathering logs for kube-controller-manager [f4f5dad286f7] ...
	* I0310 21:14:52.215689   19328 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 f4f5dad286f7"
	* I0310 21:14:48.005253   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:48.527681   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:49.007192   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:49.508468   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:50.009531   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:50.512473   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:51.020200   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:51.503133   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:52.006227   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:52.508045   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:51.614421   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7024.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:51.713063   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7452.pem && ln -fs /usr/share/ca-certificates/7452.pem /etc/ssl/certs/7452.pem"
	* I0310 21:14:51.995049   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7452.pem
	* I0310 21:14:52.065898   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 20 00:41 /usr/share/ca-certificates/7452.pem
	* I0310 21:14:52.075253   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7452.pem
	* I0310 21:14:52.159413   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7452.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:52.288203   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/232.pem && ln -fs /usr/share/ca-certificates/232.pem /etc/ssl/certs/232.pem"
	* I0310 21:14:52.480443   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/232.pem
	* I0310 21:14:52.576842   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 28 02:13 /usr/share/ca-certificates/232.pem
	* I0310 21:14:52.582610   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/232.pem
	* I0310 21:14:52.655285   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/232.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:52.783089   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4944.pem && ln -fs /usr/share/ca-certificates/4944.pem /etc/ssl/certs/4944.pem"
	* I0310 21:14:53.084901   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4944.pem
	* I0310 21:14:53.174625   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  9 23:40 /usr/share/ca-certificates/4944.pem
	* I0310 21:14:53.191753   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4944.pem
	* I0310 21:14:53.314555   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4944.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:53.501263   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7432.pem && ln -fs /usr/share/ca-certificates/7432.pem /etc/ssl/certs/7432.pem"
	* I0310 21:14:53.739028   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7432.pem
	* I0310 21:14:53.886737   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 17:58 /usr/share/ca-certificates/7432.pem
	* I0310 21:14:53.895999   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7432.pem
	* I0310 21:14:53.999562   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7432.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:54.199190   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3516.pem && ln -fs /usr/share/ca-certificates/3516.pem /etc/ssl/certs/3516.pem"
	* I0310 21:14:54.407107   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3516.pem
	* I0310 21:14:54.569537   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 19:10 /usr/share/ca-certificates/3516.pem
	* I0310 21:14:54.578544   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3516.pem
	* I0310 21:14:54.884058   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3516.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:55.136671   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7160.pem && ln -fs /usr/share/ca-certificates/7160.pem /etc/ssl/certs/7160.pem"
	* I0310 21:14:55.368519   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7160.pem
	* I0310 21:14:55.472906   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 12 04:51 /usr/share/ca-certificates/7160.pem
	* I0310 21:14:55.484842   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7160.pem
	* I0310 21:14:55.577120   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7160.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:55.897875   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2512.pem && ln -fs /usr/share/ca-certificates/2512.pem /etc/ssl/certs/2512.pem"
	* I0310 21:14:56.171569   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2512.pem
	* I0310 21:14:56.268744   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:32 /usr/share/ca-certificates/2512.pem
	* I0310 21:14:56.287647   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2512.pem
	* I0310 21:14:56.392836   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2512.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:53.008513   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:53.511441   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:54.014370   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:54.518560   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:55.021296   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:55.518974   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:56.009392   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:56.504675   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:57.009681   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:57.509160   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:14:56.624005   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8464.pem && ln -fs /usr/share/ca-certificates/8464.pem /etc/ssl/certs/8464.pem"
	* I0310 21:14:56.841062   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8464.pem
	* I0310 21:14:56.942666   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:32 /usr/share/ca-certificates/8464.pem
	* I0310 21:14:56.949144   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8464.pem
	* I0310 21:14:57.118875   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8464.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:57.244198   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8748.pem && ln -fs /usr/share/ca-certificates/8748.pem /etc/ssl/certs/8748.pem"
	* I0310 21:14:57.313203   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8748.pem
	* I0310 21:14:57.366950   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 19:09 /usr/share/ca-certificates/8748.pem
	* I0310 21:14:57.372633   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8748.pem
	* I0310 21:14:57.434836   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8748.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:57.572986   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6368.pem && ln -fs /usr/share/ca-certificates/6368.pem /etc/ssl/certs/6368.pem"
	* I0310 21:14:57.655427   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6368.pem
	* I0310 21:14:57.706486   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 19:11 /usr/share/ca-certificates/6368.pem
	* I0310 21:14:57.721801   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6368.pem
	* I0310 21:14:57.790260   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6368.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:57.938162   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6856.pem && ln -fs /usr/share/ca-certificates/6856.pem /etc/ssl/certs/6856.pem"
	* I0310 21:14:58.054647   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6856.pem
	* I0310 21:14:58.104137   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 00:21 /usr/share/ca-certificates/6856.pem
	* I0310 21:14:58.111662   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6856.pem
	* I0310 21:14:58.189212   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6856.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:58.263191   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1728.pem && ln -fs /usr/share/ca-certificates/1728.pem /etc/ssl/certs/1728.pem"
	* I0310 21:14:58.376771   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1728.pem
	* I0310 21:14:58.425770   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 20:57 /usr/share/ca-certificates/1728.pem
	* I0310 21:14:58.437632   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1728.pem
	* I0310 21:14:58.616826   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:58.715770   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6496.pem && ln -fs /usr/share/ca-certificates/6496.pem /etc/ssl/certs/6496.pem"
	* I0310 21:14:58.834428   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6496.pem
	* I0310 21:14:58.886671   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 19:16 /usr/share/ca-certificates/6496.pem
	* I0310 21:14:58.899643   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6496.pem
	* I0310 21:14:58.953679   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6496.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:59.059831   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5372.pem && ln -fs /usr/share/ca-certificates/5372.pem /etc/ssl/certs/5372.pem"
	* I0310 21:14:59.137108   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5372.pem
	* I0310 21:14:59.192036   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 23 00:40 /usr/share/ca-certificates/5372.pem
	* I0310 21:14:59.209924   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5372.pem
	* I0310 21:14:59.320419   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5372.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:59.440461   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5736.pem && ln -fs /usr/share/ca-certificates/5736.pem /etc/ssl/certs/5736.pem"
	* I0310 21:14:59.513814   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5736.pem
	* I0310 21:14:59.560093   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 25 23:18 /usr/share/ca-certificates/5736.pem
	* I0310 21:14:59.570575   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5736.pem
	* I0310 21:14:59.623821   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5736.pem /etc/ssl/certs/51391683.0"
	* I0310 21:14:59.712850   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10992.pem && ln -fs /usr/share/ca-certificates/10992.pem /etc/ssl/certs/10992.pem"
	* I0310 21:14:59.781824   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/10992.pem
	* I0310 21:14:59.823502   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 21:44 /usr/share/ca-certificates/10992.pem
	* I0310 21:14:59.851978   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10992.pem
	* I0310 21:14:59.975766   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10992.pem /etc/ssl/certs/51391683.0"
	* I0310 21:15:00.185448   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/352.pem && ln -fs /usr/share/ca-certificates/352.pem /etc/ssl/certs/352.pem"
	* I0310 21:15:00.328974   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/352.pem
	* I0310 21:15:00.393268   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 12 14:51 /usr/share/ca-certificates/352.pem
	* I0310 21:15:00.402436   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/352.pem
	* I0310 21:15:00.515467   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/352.pem /etc/ssl/certs/51391683.0"
	* I0310 21:15:00.720715   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5396.pem && ln -fs /usr/share/ca-certificates/5396.pem /etc/ssl/certs/5396.pem"
	* I0310 21:15:00.983291   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5396.pem
	* I0310 21:15:01.092070   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  8 23:38 /usr/share/ca-certificates/5396.pem
	* I0310 21:15:01.103938   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5396.pem
	* I0310 21:15:01.263896   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5396.pem /etc/ssl/certs/51391683.0"
	* I0310 21:15:01.380016   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3056.pem && ln -fs /usr/share/ca-certificates/3056.pem /etc/ssl/certs/3056.pem"
	* I0310 21:14:57.278415   22316 docker.go:388] Took 76.118939 seconds to copy over tarball
	* I0310 21:14:57.286692   22316 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	* I0310 21:15:01.662667   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3056.pem
	* I0310 21:15:01.703100   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:11 /usr/share/ca-certificates/3056.pem
	* I0310 21:15:01.720489   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3056.pem
	* I0310 21:15:01.836920   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3056.pem /etc/ssl/certs/51391683.0"
	* I0310 21:15:01.978976   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5172.pem && ln -fs /usr/share/ca-certificates/5172.pem /etc/ssl/certs/5172.pem"
	* I0310 21:15:02.169086   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5172.pem
	* I0310 21:15:02.244306   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 26 21:25 /usr/share/ca-certificates/5172.pem
	* I0310 21:15:02.254536   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5172.pem
	* I0310 21:15:02.335163   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5172.pem /etc/ssl/certs/51391683.0"
	* I0310 21:15:02.516818   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4588.pem && ln -fs /usr/share/ca-certificates/4588.pem /etc/ssl/certs/4588.pem"
	* I0310 21:15:02.740532   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4588.pem
	* I0310 21:15:02.806576   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  3 21:41 /usr/share/ca-certificates/4588.pem
	* I0310 21:15:02.832314   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4588.pem
	* I0310 21:15:02.962644   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4588.pem /etc/ssl/certs/51391683.0"
	* I0310 21:15:03.213543   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6692.pem && ln -fs /usr/share/ca-certificates/6692.pem /etc/ssl/certs/6692.pem"
	* I0310 21:15:03.343122   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6692.pem
	* I0310 21:15:03.405176   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 20:42 /usr/share/ca-certificates/6692.pem
	* I0310 21:15:03.418818   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6692.pem
	* I0310 21:15:03.542282   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6692.pem /etc/ssl/certs/51391683.0"
	* I0310 21:15:03.718148   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1156.pem && ln -fs /usr/share/ca-certificates/1156.pem /etc/ssl/certs/1156.pem"
	* I0310 21:15:03.897455   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1156.pem
	* I0310 21:15:03.970787   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 00:26 /usr/share/ca-certificates/1156.pem
	* I0310 21:15:03.983703   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1156.pem
	* I0310 21:15:04.066741   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1156.pem /etc/ssl/certs/51391683.0"
	* I0310 21:15:04.252461   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1476.pem && ln -fs /usr/share/ca-certificates/1476.pem /etc/ssl/certs/1476.pem"
	* I0310 21:15:04.370896   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1476.pem
	* I0310 21:15:04.419182   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:50 /usr/share/ca-certificates/1476.pem
	* I0310 21:15:04.428315   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1476.pem
	* I0310 21:15:04.476459   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1476.pem /etc/ssl/certs/51391683.0"
	* I0310 21:15:04.590443   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9088.pem && ln -fs /usr/share/ca-certificates/9088.pem /etc/ssl/certs/9088.pem"
	* I0310 21:15:04.711283   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9088.pem
	* I0310 21:15:04.768589   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 00:22 /usr/share/ca-certificates/9088.pem
	* I0310 21:15:04.778887   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9088.pem
	* I0310 21:15:04.843812   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9088.pem /etc/ssl/certs/51391683.0"
	* I0310 21:15:04.962085   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4052.pem && ln -fs /usr/share/ca-certificates/4052.pem /etc/ssl/certs/4052.pem"
	* I0310 21:15:05.108172   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4052.pem
	* I0310 21:15:05.151516   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 18:40 /usr/share/ca-certificates/4052.pem
	* I0310 21:15:05.165642   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4052.pem
	* I0310 21:15:05.259601   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4052.pem /etc/ssl/certs/51391683.0"
	* I0310 21:15:05.357646   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12056.pem && ln -fs /usr/share/ca-certificates/12056.pem /etc/ssl/certs/12056.pem"
	* I0310 21:15:05.532844   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/12056.pem
	* I0310 21:15:05.586807   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  6 07:21 /usr/share/ca-certificates/12056.pem
	* I0310 21:15:05.599176   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12056.pem
	* I0310 21:15:05.672972   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12056.pem /etc/ssl/certs/51391683.0"
	* I0310 21:15:05.757884   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5040.pem && ln -fs /usr/share/ca-certificates/5040.pem /etc/ssl/certs/5040.pem"
	* I0310 21:15:05.853140   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5040.pem
	* I0310 21:15:05.894233   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 08:36 /usr/share/ca-certificates/5040.pem
	* I0310 21:15:05.904366   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5040.pem
	* I0310 21:15:06.072308   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5040.pem /etc/ssl/certs/51391683.0"
	* I0310 21:15:06.139834   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/800.pem && ln -fs /usr/share/ca-certificates/800.pem /etc/ssl/certs/800.pem"
	* I0310 21:15:06.212457   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/800.pem
	* I0310 21:15:06.281733   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 24 01:48 /usr/share/ca-certificates/800.pem
	* I0310 21:15:06.300681   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/800.pem
	* I0310 21:15:06.368052   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/800.pem /etc/ssl/certs/51391683.0"
	* I0310 21:15:06.439828   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9520.pem && ln -fs /usr/share/ca-certificates/9520.pem /etc/ssl/certs/9520.pem"
	* I0310 21:15:03.509080   19328 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 f4f5dad286f7": (11.293407s)
	* I0310 21:15:03.514067   19328 logs.go:122] Gathering logs for kube-controller-manager [5e2289334650] ...
	* I0310 21:15:03.514067   19328 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 5e2289334650"
	* I0310 21:15:06.538605   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9520.pem
	* I0310 21:15:06.564576   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 14:54 /usr/share/ca-certificates/9520.pem
	* I0310 21:15:06.571485   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9520.pem
	* I0310 21:15:06.618461   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9520.pem /etc/ssl/certs/51391683.0"
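	* (Editor's note: the repeated `openssl x509 -hash` / `ln -fs … /etc/ssl/certs/<hash>.0` pairs above follow OpenSSL's subject-hash lookup convention, where a CA is found by a symlink named after its subject hash. A minimal sketch of that pattern, using a throwaway self-signed cert and `/tmp` paths purely as illustrative assumptions, not the host paths from this log:

```shell
#!/bin/sh
set -e

# Generate a throwaway self-signed CA to stand in for a
# /usr/share/ca-certificates/NNNN.pem file (illustrative only).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demo-ca" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.pem 2>/dev/null

# OpenSSL locates trusted CAs by subject hash, so compute it...
hash=$(openssl x509 -hash -noout -in /tmp/demo-ca.pem)

# ...and link the cert under "<hash>.0" in the trust directory,
# mirroring the log's: test -L <hash>.0 || ln -fs NNNN.pem <hash>.0
mkdir -p /tmp/demo-certs
test -L "/tmp/demo-certs/${hash}.0" || \
  ln -fs /tmp/demo-ca.pem "/tmp/demo-certs/${hash}.0"

ls -la "/tmp/demo-certs/${hash}.0"
```

The `.0` suffix disambiguates distinct CAs that happen to share a subject hash; a second colliding cert would be linked as `<hash>.1`, and so on.)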
	* I0310 21:15:06.711357   18444 kubeadm.go:385] StartCluster: {Name:embed-certs-20210310205017-6496 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:embed-certs-20210310205017-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.97 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[dashboard:true] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 21:15:06.719861   18444 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format=
	* I0310 21:15:07.467654   18444 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	* I0310 21:15:07.540086   18444 kubeadm.go:396] found existing configuration files, will attempt cluster restart
	* I0310 21:15:07.540086   18444 kubeadm.go:594] restartCluster start
	* I0310 21:15:07.549651   18444 ssh_runner.go:149] Run: sudo test -d /data/minikube
	* I0310 21:15:07.678179   18444 kubeadm.go:125] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* I0310 21:15:07.687092   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	* I0310 21:15:08.302335   18444 kubeconfig.go:117] verify returned: extract IP: "embed-certs-20210310205017-6496" does not appear in C:\Users\jenkins/.kube/config
	* I0310 21:15:08.303480   18444 kubeconfig.go:128] "embed-certs-20210310205017-6496" context is missing from C:\Users\jenkins/.kube/config - will repair!
	* I0310 21:15:08.305419   18444 lock.go:36] WriteFile acquiring C:\Users\jenkins/.kube/config: {Name:mk815881434fcb75cb1a8bc7f2c24cf067660bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 21:15:08.349094   18444 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	* I0310 21:15:08.403289   18444 api_server.go:146] Checking apiserver status ...
	* I0310 21:15:08.413704   18444 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* W0310 21:15:08.635712   18444 api_server.go:150] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* I0310 21:15:08.635712   18444 kubeadm.go:573] needs reconfigure: apiserver in state Stopped
	* I0310 21:15:08.635712   18444 kubeadm.go:1042] stopping kube-system containers ...
	* I0310 21:15:08.642852   18444 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format=
	* I0310 21:15:09.443248   18444 docker.go:261] Stopping containers: [765eeaf3ce81 13e03f4b1775 6402c6e4e6d4 4913edb02239 233e14c5554f 996876ed91c1 aae206460c76 78c1a80b774c 55e5c1ff0487 4aeafe69b026 efd3086c1be7 6579ac6125a2 2f3e9943b267 208e864728a3 62844ce92fdb]
	* I0310 21:15:09.451211   18444 ssh_runner.go:149] Run: docker stop 765eeaf3ce81 13e03f4b1775 6402c6e4e6d4 4913edb02239 233e14c5554f 996876ed91c1 aae206460c76 78c1a80b774c 55e5c1ff0487 4aeafe69b026 efd3086c1be7 6579ac6125a2 2f3e9943b267 208e864728a3 62844ce92fdb
	* I0310 21:15:10.205925   18444 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	* I0310 21:15:10.363742   18444 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	* I0310 21:15:10.474757   18444 kubeadm.go:153] found existing configuration files:
	* -rw------- 1 root root 5611 Mar 10 20:56 /etc/kubernetes/admin.conf
	* -rw------- 1 root root 5629 Mar 10 20:57 /etc/kubernetes/controller-manager.conf
	* -rw------- 1 root root 2063 Mar 10 21:00 /etc/kubernetes/kubelet.conf
	* -rw------- 1 root root 5581 Mar 10 20:57 /etc/kubernetes/scheduler.conf
	* 
	* I0310 21:15:10.484743   18444 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	* I0310 21:15:10.580321   18444 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	* I0310 21:15:10.651343   18444 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	* I0310 21:15:10.743280   18444 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* I0310 21:15:10.755331   18444 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	* I0310 21:15:10.831756   18444 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	* I0310 21:15:10.899262   18444 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* I0310 21:15:10.909871   18444 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	* I0310 21:15:10.979102   18444 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	* I0310 21:15:11.167879   18444 kubeadm.go:670] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	* I0310 21:15:11.167879   18444 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	* I0310 21:15:16.107272   18444 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml": (4.9393999s)
	* I0310 21:15:16.107479   18444 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	* I0310 21:15:16.262099   12868 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1m20.875134s)
	* I0310 21:15:16.262099   12868 ssh_runner.go:100] rm: /preloaded.tar.lz4
	* I0310 21:15:18.035742   12868 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	* I0310 21:15:18.100028   12868 ssh_runner.go:316] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3125 bytes)
	* I0310 21:15:18.332605   12868 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	* I0310 21:15:19.403060   19328 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 5e2289334650": (15.889015s)
	* I0310 21:15:19.424701   19328 logs.go:122] Gathering logs for Docker ...
	* I0310 21:15:19.424701   19328 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	* I0310 21:15:21.427318   19328 ssh_runner.go:189] Completed: /bin/bash -c "sudo journalctl -u docker -n 400": (2.0026197s)
	* W0310 21:15:21.432957   19328 out.go:312] Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	* stdout:
	* [init] Using Kubernetes version: v1.20.5-rc.0
	* [preflight] Running pre-flight checks
	* [preflight] Pulling images required for setting up a Kubernetes cluster
	* [preflight] This might take a minute or two, depending on the speed of your internet connection
	* [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	* [certs] Using certificateDir folder "/var/lib/minikube/certs"
	* [certs] Using existing ca certificate authority
	* [certs] Using existing apiserver certificate and key on disk
	* [certs] Using existing apiserver-kubelet-client certificate and key on disk
	* [certs] Using existing front-proxy-ca certificate authority
	* [certs] Using existing front-proxy-client certificate and key on disk
	* [certs] Using existing etcd/ca certificate authority
	* [certs] Using existing etcd/server certificate and key on disk
	* [certs] Using existing etcd/peer certificate and key on disk
	* [certs] Using existing etcd/healthcheck-client certificate and key on disk
	* [certs] Using existing apiserver-etcd-client certificate and key on disk
	* [certs] Using the existing "sa" key
	* [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	* [kubeconfig] Writing "admin.conf" kubeconfig file
	* [kubeconfig] Writing "kubelet.conf" kubeconfig file
	* [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	* [kubeconfig] Writing "scheduler.conf" kubeconfig file
	* [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	* [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	* [kubelet-start] Starting the kubelet
	* [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	* [control-plane] Creating static Pod manifest for "kube-apiserver"
	* [control-plane] Creating static Pod manifest for "kube-controller-manager"
	* [control-plane] Creating static Pod manifest for "kube-scheduler"
	* [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	* [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	* [kubelet-check] Initial timeout of 40s passed.
	* 
	* 	Unfortunately, an error has occurred:
	* 		timed out waiting for the condition
	* 
	* 	This error is likely caused by:
	* 		- The kubelet is not running
	* 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	* 
	* 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	* 		- 'systemctl status kubelet'
	* 		- 'journalctl -xeu kubelet'
	* 
	* 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	* 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	* 
	* 	Here is one example how you may list all Kubernetes containers running in docker:
	* 		- 'docker ps -a | grep kube | grep -v pause'
	* 		Once you have found the failing container, you can inspect its logs with:
	* 		- 'docker logs CONTAINERID'
	* 
	* 
	* stderr:
	* 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	* 	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	* 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	* 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
	* 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	* error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	* To see the stack trace of this error execute with --v=5 or higher
	* W0310 21:15:21.432957   19328 out.go:191] * 
	* W0310 21:15:21.433321   19328 out.go:191] X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	* 
	* W0310 21:15:21.442634   19328 out.go:191] * 
	* W0310 21:15:21.442634   19328 out.go:191] * minikube is exiting due to an error. If the above message is not useful, open an issue:
	* W0310 21:15:21.442634   19328 out.go:191]   - https://github.com/kubernetes/minikube/issues/new/choose
	* I0310 21:15:21.458169   19328 out.go:129] 
	* W0310 21:15:21.458169   19328 out.go:191] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	* 
	* W0310 21:15:21.459378   19328 out.go:191] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* W0310 21:15:21.459378   19328 out.go:191] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* I0310 21:15:19.532325   12868 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.1989992s)
	* I0310 21:15:19.543213   12868 ssh_runner.go:149] Run: sudo systemctl restart docker
	* I0310 21:15:25.388458   18444 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (9.2809918s)
	* I0310 21:15:25.389697   18444 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	* I0310 21:15:30.220980   18444 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml": (4.8312897s)
	* I0310 21:15:30.220980   18444 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	* I0310 21:15:32.179001   12868 ssh_runner.go:189] Completed: sudo systemctl restart docker: (12.6358057s)
	* I0310 21:15:32.190196   12868 ssh_runner.go:149] Run: docker images --format :
	* I0310 21:15:33.535616   12868 ssh_runner.go:189] Completed: docker images --format :: (1.3454221s)
	* I0310 21:15:33.535616   12868 docker.go:423] Got preloaded images: -- stdout --
	* k8s.gcr.io/kube-proxy:v1.20.2
	* k8s.gcr.io/kube-apiserver:v1.20.2
	* k8s.gcr.io/kube-controller-manager:v1.20.2
	* k8s.gcr.io/kube-scheduler:v1.20.2
	* kubernetesui/dashboard:v2.1.0
	* gcr.io/k8s-minikube/storage-provisioner:v4
	* k8s.gcr.io/etcd:3.4.13-0
	* k8s.gcr.io/coredns:1.7.0
	* kubernetesui/metrics-scraper:v1.0.4
	* k8s.gcr.io/pause:3.2
	* 
	* -- /stdout --
	* I0310 21:15:33.535616   12868 cache_images.go:73] Images are preloaded, skipping loading
	* I0310 21:15:33.554891   12868 ssh_runner.go:149] Run: docker info --format 
	
-- /stdout --
** stderr ** 
	E0310 21:14:26.951952   16860 out.go:340] unable to execute * 2021-03-10 21:13:20.234353 W | etcdserver: request "header:<ID:13557092847739346247 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-controller-manager-missing-upgrade-20210310201637-6496.166b159a17f37f6c\" mod_revision:2179 > success:<request_put:<key:\"/registry/events/kube-system/kube-controller-manager-missing-upgrade-20210310201637-6496.166b159a17f37f6c\" value_size:852 lease:4333720810884570366 >> failure:<request_range:<key:\"/registry/events/kube-system/kube-controller-manager-missing-upgrade-20210310201637-6496.166b159a17f37f6c\" > >>" with result "size:16" took too long (131.6295ms) to execute
	: html/template:* 2021-03-10 21:13:20.234353 W | etcdserver: request "header:<ID:13557092847739346247 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-controller-manager-missing-upgrade-20210310201637-6496.166b159a17f37f6c\" mod_revision:2179 > success:<request_put:<key:\"/registry/events/kube-system/kube-controller-manager-missing-upgrade-20210310201637-6496.166b159a17f37f6c\" value_size:852 lease:4333720810884570366 >> failure:<request_range:<key:\"/registry/events/kube-system/kube-controller-manager-missing-upgrade-20210310201637-6496.166b159a17f37f6c\" > >>" with result "size:16" took too long (131.6295ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 21:14:26.982224   16860 out.go:340] unable to execute * 2021-03-10 21:13:31.774122 W | etcdserver: request "header:<ID:13557092847739346284 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.17.0.2\" mod_revision:3722 > success:<request_put:<key:\"/registry/masterleases/172.17.0.2\" value_size:65 lease:4333720810884570474 >> failure:<request_range:<key:\"/registry/masterleases/172.17.0.2\" > >>" with result "size:16" took too long (180.2983ms) to execute
	: html/template:* 2021-03-10 21:13:31.774122 W | etcdserver: request "header:<ID:13557092847739346284 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.17.0.2\" mod_revision:3722 > success:<request_put:<key:\"/registry/masterleases/172.17.0.2\" value_size:65 lease:4333720810884570474 >> failure:<request_range:<key:\"/registry/masterleases/172.17.0.2\" > >>" with result "size:16" took too long (180.2983ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 21:14:27.003092   16860 out.go:340] unable to execute * 2021-03-10 21:13:40.950551 W | etcdserver: request "header:<ID:13557092847739346307 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.17.0.2\" mod_revision:3728 > success:<request_put:<key:\"/registry/masterleases/172.17.0.2\" value_size:65 lease:4333720810884570497 >> failure:<request_range:<key:\"/registry/masterleases/172.17.0.2\" > >>" with result "size:16" took too long (132.6726ms) to execute
	: html/template:* 2021-03-10 21:13:40.950551 W | etcdserver: request "header:<ID:13557092847739346307 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.17.0.2\" mod_revision:3728 > success:<request_put:<key:\"/registry/masterleases/172.17.0.2\" value_size:65 lease:4333720810884570497 >> failure:<request_range:<key:\"/registry/masterleases/172.17.0.2\" > >>" with result "size:16" took too long (132.6726ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 21:14:27.014331   16860 out.go:340] unable to execute * 2021-03-10 21:13:51.698086 W | etcdserver: request "header:<ID:13557092847739346327 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:3c24781dd07e1d96>" with result "size:40" took too long (902.9452ms) to execute
	: html/template:* 2021-03-10 21:13:51.698086 W | etcdserver: request "header:<ID:13557092847739346327 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:3c24781dd07e1d96>" with result "size:40" took too long (902.9452ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 21:14:50.774440   16860 logs.go:183] command /bin/bash -c "docker logs --tail 25 984ddfe94ab9" failed with error: /bin/bash -c "docker logs --tail 25 984ddfe94ab9": Process exited with status 1
	stdout:
	
	stderr:
	Error: No such container: 984ddfe94ab9
	 output: "\n** stderr ** \nError: No such container: 984ddfe94ab9\n\n** /stderr **"
	E0310 21:15:25.212758   16860 logs.go:183] command /bin/bash -c "docker logs --tail 25 91345ca66b20" failed with error: /bin/bash -c "docker logs --tail 25 91345ca66b20": Process exited with status 1
	stdout:
	
	stderr:
	Error: No such container: 91345ca66b20
	 output: "\n** stderr ** \nError: No such container: 91345ca66b20\n\n** /stderr **"
	E0310 21:15:27.769462   16860 out.go:335] unable to parse "* I0310 21:12:42.446315   18444 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 21:12:42.446315   18444 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 21:15:27.777596   16860 out.go:335] unable to parse "* I0310 21:12:43.723542   18444 cli_runner.go:168] Completed: docker system info --format \"{{json .}}\": (1.277228s)\n": template: * I0310 21:12:43.723542   18444 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.277228s)
	:1: function "json" not defined - returning raw string.
	E0310 21:15:27.807536   16860 out.go:335] unable to parse "* I0310 21:12:44.851133   18444 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 21:12:44.851133   18444 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 21:15:27.815505   16860 out.go:335] unable to parse "* I0310 21:12:45.911277   18444 cli_runner.go:168] Completed: docker system info --format \"{{json .}}\": (1.0601454s)\n": template: * I0310 21:12:45.911277   18444 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0601454s)
	:1: function "json" not defined - returning raw string.
	E0310 21:15:27.854584   16860 out.go:340] unable to execute * I0310 21:12:44.115201   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	: template: * I0310 21:12:44.115201   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	:1:96: executing "* I0310 21:12:44.115201   22316 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" false-20210310211211-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:27.875615   16860 out.go:335] unable to parse "* I0310 21:12:44.768085   22316 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55178 <nil> <nil>}\n": template: * I0310 21:12:44.768085   22316 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55178 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:15:27.966235   16860 out.go:340] unable to execute * I0310 21:12:50.336532   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	: template: * I0310 21:12:50.336532   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	:1:96: executing "* I0310 21:12:50.336532   22316 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" false-20210310211211-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:27.977253   16860 out.go:335] unable to parse "* I0310 21:12:51.030067   22316 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55178 <nil> <nil>}\n": template: * I0310 21:12:51.030067   22316 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55178 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:15:28.731581   16860 out.go:340] unable to execute * I0310 21:12:51.707710   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.707710   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.707710   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:28.737668   16860 out.go:340] unable to execute * I0310 21:12:51.708397   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.708397   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.708397   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:28.745481   16860 out.go:340] unable to execute * I0310 21:12:51.708896   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.708896   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.708896   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:28.752322   16860 out.go:340] unable to execute * I0310 21:12:51.732967   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.732967   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.732967   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:28.759249   16860 out.go:340] unable to execute * I0310 21:12:51.733363   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.733363   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.733363   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:28.769811   16860 out.go:340] unable to execute * I0310 21:12:51.733561   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.733561   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.733561   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:28.782621   16860 out.go:340] unable to execute * I0310 21:12:51.745735   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.745735   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.745735   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:28.815645   16860 out.go:340] unable to execute * I0310 21:12:51.813706   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.813706   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.813706   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:28.831526   16860 out.go:340] unable to execute * I0310 21:12:51.820771   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.820771   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.820771   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:28.864331   16860 out.go:340] unable to execute * I0310 21:12:51.827809   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.827809   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.827809   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:28.873182   16860 out.go:340] unable to execute * I0310 21:12:51.860588   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.860588   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.860588   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:28.878386   16860 out.go:340] unable to execute * I0310 21:12:51.868441   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.868441   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.868441   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:28.885174   16860 out.go:340] unable to execute * I0310 21:12:51.886285   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.886285   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.886285   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:28.890188   16860 out.go:340] unable to execute * I0310 21:12:51.908114   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.908114   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.908114   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:28.896193   16860 out.go:340] unable to execute * I0310 21:12:51.919653   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.919653   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.919653   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:28.903201   16860 out.go:340] unable to execute * I0310 21:12:51.930053   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.930053   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.930053   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:28.909186   16860 out.go:340] unable to execute * I0310 21:12:51.930053   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.930053   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.930053   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:28.915891   16860 out.go:340] unable to execute * I0310 21:12:51.945930   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.945930   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.945930   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:28.923008   16860 out.go:340] unable to execute * I0310 21:12:51.949437   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.949437   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.949437   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:28.929177   16860 out.go:340] unable to execute * I0310 21:12:51.952434   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.952434   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.952434   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:28.936007   16860 out.go:340] unable to execute * I0310 21:12:51.948834   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.948834   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.948834   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:28.942007   16860 out.go:340] unable to execute * I0310 21:12:51.953370   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.953370   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.953370   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:28.948997   16860 out.go:340] unable to execute * I0310 21:12:51.975870   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.975870   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.975870   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:28.957004   16860 out.go:340] unable to execute * I0310 21:12:51.975870   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:51.975870   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:51.975870   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:28.992488   16860 out.go:340] unable to execute * I0310 21:12:56.131918   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	: template: * I0310 21:12:56.131918   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	:1:96: executing "* I0310 21:12:56.131918   18444 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" embed-certs-20210310205017-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.068371   16860 out.go:340] unable to execute * I0310 21:12:54.774187   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	: template: * I0310 21:12:54.774187   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	:1:96: executing "* I0310 21:12:54.774187   22316 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" false-20210310211211-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.087957   16860 out.go:340] unable to execute * I0310 21:12:53.395822   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.662745s)
	: template: * I0310 21:12:53.395822   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.662745s)
	:1:102: executing "* I0310 21:12:53.395822   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.662745s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.106706   16860 out.go:340] unable to execute * I0310 21:12:53.485017   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.6713142s)
	: template: * I0310 21:12:53.485017   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.6713142s)
	:1:102: executing "* I0310 21:12:53.485017   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.6713142s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.118850   16860 out.go:340] unable to execute * I0310 21:12:53.496519   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.7629602s)
	: template: * I0310 21:12:53.496519   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.7629602s)
	:1:102: executing "* I0310 21:12:53.496519   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.7629602s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.136419   16860 out.go:340] unable to execute * I0310 21:12:53.512111   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.6511935s)
	: template: * I0310 21:12:53.512111   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.6511935s)
	:1:102: executing "* I0310 21:12:53.512111   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.6511935s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.150390   16860 out.go:340] unable to execute * I0310 21:12:53.541713   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.7209444s)
	: template: * I0310 21:12:53.541713   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.7209444s)
	:1:102: executing "* I0310 21:12:53.541713   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.7209444s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.159884   16860 out.go:340] unable to execute * I0310 21:12:53.590524   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8569656s)
	: template: * I0310 21:12:53.590524   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8569656s)
	:1:102: executing "* I0310 21:12:53.590524   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.8569656s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.171018   16860 out.go:340] unable to execute * I0310 21:12:53.621272   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.7934655s)
	: template: * I0310 21:12:53.621272   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.7934655s)
	:1:102: executing "* I0310 21:12:53.621272   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.7934655s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.183445   16860 out.go:340] unable to execute * I0310 21:12:53.642607   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.664234s)
	: template: * I0310 21:12:53.642607   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.664234s)
	:1:102: executing "* I0310 21:12:53.642607   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.664234s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.192593   16860 out.go:340] unable to execute * I0310 21:12:53.668750   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8003122s)
	: template: * I0310 21:12:53.668750   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8003122s)
	:1:102: executing "* I0310 21:12:53.668750   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.8003122s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.201574   16860 out.go:340] unable to execute * I0310 21:12:53.688746   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.9810395s)
	: template: * I0310 21:12:53.688746   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.9810395s)
	:1:102: executing "* I0310 21:12:53.688746   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.9810395s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.211354   16860 out.go:340] unable to execute * I0310 21:12:53.758147   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8047789s)
	: template: * I0310 21:12:53.758147   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8047789s)
	:1:102: executing "* I0310 21:12:53.758147   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.8047789s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.223352   16860 out.go:340] unable to execute * I0310 21:12:53.759520   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8398693s)
	: template: * I0310 21:12:53.759520   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8398693s)
	:1:102: executing "* I0310 21:12:53.759520   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.8398693s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.235910   16860 out.go:340] unable to execute * I0310 21:12:53.770222   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8401715s)
	: template: * I0310 21:12:53.770222   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8401715s)
	:1:102: executing "* I0310 21:12:53.770222   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.8401715s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.243912   16860 out.go:340] unable to execute * I0310 21:12:53.785284   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8990014s)
	: template: * I0310 21:12:53.785284   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8990014s)
	:1:102: executing "* I0310 21:12:53.785284   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.8990014s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.255919   16860 out.go:340] unable to execute * I0310 21:12:53.817210   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8868726s)
	: template: * I0310 21:12:53.817210   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8868726s)
	:1:102: executing "* I0310 21:12:53.817210   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.8868726s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.264917   16860 out.go:340] unable to execute * I0310 21:12:53.818574   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0728419s)
	: template: * I0310 21:12:53.818574   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0728419s)
	:1:102: executing "* I0310 21:12:53.818574   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (2.0728419s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.286253   16860 out.go:340] unable to execute * I0310 21:12:53.843754   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.13536s)
	: template: * I0310 21:12:53.843754   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.13536s)
	:1:102: executing "* I0310 21:12:53.843754   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (2.13536s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.300457   16860 out.go:340] unable to execute * I0310 21:12:53.883965   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.1750717s)
	: template: * I0310 21:12:53.883965   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.1750717s)
	:1:102: executing "* I0310 21:12:53.883965   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (2.1750717s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.319576   16860 out.go:340] unable to execute * I0310 21:12:53.958209   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0084306s)
	: template: * I0310 21:12:53.958209   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0084306s)
	:1:102: executing "* I0310 21:12:53.958209   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (2.0084306s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.326587   16860 out.go:340] unable to execute * I0310 21:12:53.959831   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0089094s)
	: template: * I0310 21:12:53.959831   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0089094s)
	:1:102: executing "* I0310 21:12:53.959831   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (2.0089094s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.338593   16860 out.go:340] unable to execute * I0310 21:12:53.966133   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0137022s)
	: template: * I0310 21:12:53.966133   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0137022s)
	:1:102: executing "* I0310 21:12:53.966133   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (2.0137022s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.348591   16860 out.go:340] unable to execute * I0310 21:12:53.994078   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0407109s)
	: template: * I0310 21:12:53.994078   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0407109s)
	:1:102: executing "* I0310 21:12:53.994078   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (2.0407109s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.356597   16860 out.go:340] unable to execute * I0310 21:12:54.009966   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0316693s)
	: template: * I0310 21:12:54.009966   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0316693s)
	:1:102: executing "* I0310 21:12:54.009966   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (2.0316693s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.362610   16860 out.go:340] unable to execute * I0310 21:12:54.009966   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.1018557s)
	: template: * I0310 21:12:54.009966   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.1018557s)
	:1:102: executing "* I0310 21:12:54.009966   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (2.1018557s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.380611   16860 out.go:335] unable to parse "* I0310 21:12:56.743328   18444 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55183 <nil> <nil>}\n": template: * I0310 21:12:56.743328   18444 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55183 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:15:29.405713   16860 out.go:340] unable to execute * I0310 21:12:58.267262   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	: template: * I0310 21:12:58.267262   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	:1:96: executing "* I0310 21:12:58.267262   22316 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" false-20210310211211-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.413595   16860 out.go:335] unable to parse "* I0310 21:12:59.007814   22316 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55178 <nil> <nil>}\n": template: * I0310 21:12:59.007814   22316 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55178 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:15:29.462503   16860 out.go:340] unable to execute * I0310 21:13:00.416655   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	: template: * I0310 21:13:00.416655   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	:1:96: executing "* I0310 21:13:00.416655   22316 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" false-20210310211211-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.471351   16860 out.go:335] unable to parse "* I0310 21:13:01.050764   22316 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55178 <nil> <nil>}\n": template: * I0310 21:13:01.050764   22316 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55178 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:15:29.799348   16860 out.go:340] unable to execute * I0310 21:12:57.476748   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:57.476748   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:57.476748   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.813437   16860 out.go:340] unable to execute * I0310 21:12:57.540705   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:57.540705   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:57.540705   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.825297   16860 out.go:340] unable to execute * I0310 21:12:57.562999   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:57.562999   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:57.562999   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.839203   16860 out.go:340] unable to execute * I0310 21:12:57.579023   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:57.579023   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:57.579023   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.855173   16860 out.go:340] unable to execute * I0310 21:12:57.590099   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:57.590099   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:57.590099   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.872156   16860 out.go:340] unable to execute * I0310 21:12:57.597202   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:57.597202   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:57.597202   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.879115   16860 out.go:340] unable to execute * I0310 21:12:57.601772   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:57.601772   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:57.601772   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.886119   16860 out.go:340] unable to execute * I0310 21:12:57.608041   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:57.608041   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:57.608041   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.892166   16860 out.go:340] unable to execute * I0310 21:12:57.617147   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:57.617147   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:57.617147   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.899126   16860 out.go:340] unable to execute * I0310 21:12:57.625602   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:12:57.625602   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:12:57.625602   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.915363   16860 out.go:340] unable to execute * I0310 21:12:58.501263   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0245158s)
	: template: * I0310 21:12:58.501263   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0245158s)
	:1:102: executing "* I0310 21:12:58.501263   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.0245158s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.930489   16860 out.go:340] unable to execute * I0310 21:12:58.563355   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0226521s)
	: template: * I0310 21:12:58.563355   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0226521s)
	:1:102: executing "* I0310 21:12:58.563355   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.0226521s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.945008   16860 out.go:340] unable to execute * I0310 21:12:58.637870   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0294542s)
	: template: * I0310 21:12:58.637870   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0294542s)
	:1:102: executing "* I0310 21:12:58.637870   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.0294542s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.961873   16860 out.go:340] unable to execute * I0310 21:12:58.653923   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0281021s)
	: template: * I0310 21:12:58.653923   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0281021s)
	:1:102: executing "* I0310 21:12:58.653923   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.0281021s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.978522   16860 out.go:340] unable to execute * I0310 21:12:58.679155   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0890575s)
	: template: * I0310 21:12:58.679155   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0890575s)
	:1:102: executing "* I0310 21:12:58.679155   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.0890575s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:29.998382   16860 out.go:340] unable to execute * I0310 21:12:58.686201   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0690557s)
	: template: * I0310 21:12:58.686201   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0690557s)
	:1:102: executing "* I0310 21:12:58.686201   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.0690557s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:30.016006   16860 out.go:340] unable to execute * I0310 21:12:58.753102   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.1513309s)
	: template: * I0310 21:12:58.753102   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.1513309s)
	:1:102: executing "* I0310 21:12:58.753102   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.1513309s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:30.024084   16860 out.go:340] unable to execute * I0310 21:12:58.753102   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.1559017s)
	: template: * I0310 21:12:58.753102   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.1559017s)
	:1:102: executing "* I0310 21:12:58.753102   18752 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496: (1.1559017s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:30.276799   16860 out.go:340] unable to execute * I0310 21:13:02.971839   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	: template: * I0310 21:13:02.971839   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	:1:96: executing "* I0310 21:13:02.971839   22316 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" false-20210310211211-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:30.285801   16860 out.go:335] unable to parse "* I0310 21:13:03.617620   22316 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55178 <nil> <nil>}\n": template: * I0310 21:13:03.617620   22316 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55178 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:15:30.304794   16860 out.go:340] unable to execute * I0310 21:13:04.758785   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:13:04.758785   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:13:04.758785   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:30.335058   16860 out.go:340] unable to execute * I0310 21:13:05.285833   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:13:05.285833   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:13:05.285833   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:30.364021   16860 out.go:340] unable to execute * I0310 21:13:07.838772   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	: template: * I0310 21:13:07.838772   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	:1:96: executing "* I0310 21:13:07.838772   18444 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" embed-certs-20210310205017-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:30.375619   16860 out.go:335] unable to parse "* I0310 21:13:08.439026   18444 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55183 <nil> <nil>}\n": template: * I0310 21:13:08.439026   18444 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55183 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:15:30.471351   16860 out.go:340] unable to execute * I0310 21:13:10.751385   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	: template: * I0310 21:13:10.751385   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	:1:96: executing "* I0310 21:13:10.751385   18444 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" embed-certs-20210310205017-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:30.523415   16860 out.go:340] unable to execute * I0310 21:13:16.120584   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	: template: * I0310 21:13:16.120584   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	:1:96: executing "* I0310 21:13:16.120584   18444 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" embed-certs-20210310205017-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:30.538366   16860 out.go:340] unable to execute * I0310 21:13:15.101929   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:13:15.101929   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:13:15.101929   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:30.560935   16860 out.go:335] unable to parse "* I0310 21:13:16.714294   18444 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55183 <nil> <nil>}\n": template: * I0310 21:13:16.714294   18444 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55183 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:15:30.609463   16860 out.go:340] unable to execute * I0310 21:13:18.742632   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	: template: * I0310 21:13:18.742632   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	:1:96: executing "* I0310 21:13:18.742632   18444 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" embed-certs-20210310205017-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:30.632295   16860 out.go:335] unable to parse "* I0310 21:13:19.364241   18444 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55183 <nil> <nil>}\n": template: * I0310 21:13:19.364241   18444 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55183 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:15:31.005305   16860 out.go:340] unable to execute * I0310 21:13:21.308894   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	: template: * I0310 21:13:21.308894   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	:1:96: executing "* I0310 21:13:21.308894   18444 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" embed-certs-20210310205017-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:31.021636   16860 out.go:340] unable to execute * I0310 21:13:18.512811   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:13:18.512811   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:13:18.512811   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:31.039636   16860 out.go:340] unable to execute * I0310 21:13:19.466948   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:13:19.466948   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:13:19.466948   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:31.060728   16860 out.go:335] unable to parse "* I0310 21:13:21.935488   18444 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55183 <nil> <nil>}\n": template: * I0310 21:13:21.935488   18444 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55183 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:15:31.099261   16860 out.go:340] unable to execute * I0310 21:13:24.039903   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	: template: * I0310 21:13:24.039903   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	:1:96: executing "* I0310 21:13:24.039903   18444 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" embed-certs-20210310205017-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:31.527443   16860 out.go:340] unable to execute * I0310 21:13:25.645133   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	: template: * I0310 21:13:25.645133   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	:1:96: executing "* I0310 21:13:25.645133   22316 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" false-20210310211211-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:31.564405   16860 out.go:340] unable to execute * I0310 21:13:22.321052   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:13:22.321052   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:13:22.321052   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:31.586324   16860 out.go:340] unable to execute * I0310 21:13:22.814336   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:13:22.814336   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:13:22.814336   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:31.650044   16860 out.go:340] unable to execute * I0310 21:13:25.102916   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:13:25.102916   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:13:25.102916   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:31.683127   16860 out.go:340] unable to execute * I0310 21:13:25.214321   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:13:25.214321   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:13:25.214321   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:31.691487   16860 out.go:340] unable to execute * I0310 21:13:25.217320   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	: template: * I0310 21:13:25.217320   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	:1:96: executing "* I0310 21:13:25.217320   18752 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" newest-cni-20210310205436-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:31.725725   16860 out.go:340] unable to execute * I0310 21:13:27.894704   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	: template: * I0310 21:13:27.894704   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	:1:96: executing "* I0310 21:13:27.894704   18444 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" embed-certs-20210310205017-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:31.771301   16860 out.go:340] unable to execute * I0310 21:13:30.096865   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	: template: * I0310 21:13:30.096865   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	:1:96: executing "* I0310 21:13:30.096865   18444 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" embed-certs-20210310205017-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:31.784001   16860 out.go:340] unable to execute * I0310 21:13:30.111612   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	: template: * I0310 21:13:30.111612   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	:1:96: executing "* I0310 21:13:30.111612   18444 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" embed-certs-20210310205017-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:31.873944   16860 out.go:340] unable to execute * I0310 21:13:29.248980   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	: template: * I0310 21:13:29.248980   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	:1:96: executing "* I0310 21:13:29.248980   22316 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" false-20210310211211-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:31.897469   16860 out.go:340] unable to execute * I0310 21:13:31.070658   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	: template: * I0310 21:13:31.070658   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	:1:96: executing "* I0310 21:13:31.070658   22316 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" false-20210310211211-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:31.904268   16860 out.go:340] unable to execute * I0310 21:13:31.078759   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	: template: * I0310 21:13:31.078759   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	:1:96: executing "* I0310 21:13:31.078759   22316 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" false-20210310211211-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:32.095349   16860 out.go:340] unable to execute * I0310 21:13:39.488453   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" false-20210310211211-6496
	: template: * I0310 21:13:39.488453   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" false-20210310211211-6496
	:1:96: executing "* I0310 21:13:39.488453   22316 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"8443/tcp\") 0).HostPort}}'\" false-20210310211211-6496\n" at <index .NetworkSettings.Ports "8443/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:32.295257   16860 out.go:340] unable to execute * I0310 21:14:08.879534   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	: template: * I0310 21:14:08.879534   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	:1:96: executing "* I0310 21:14:08.879534   18444 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"8443/tcp\") 0).HostPort}}'\" embed-certs-20210310205017-6496\n" at <index .NetworkSettings.Ports "8443/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:15:34.514334   16860 out.go:340] unable to execute * I0310 21:15:07.687092   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	: template: * I0310 21:15:07.687092   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	:1:96: executing "* I0310 21:15:07.687092   18444 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"8443/tcp\") 0).HostPort}}'\" embed-certs-20210310205017-6496\n" at <index .NetworkSettings.Ports "8443/tcp">: error calling index: index of untyped nil - returning raw string.
	! unable to fetch logs for: kube-controller-manager [984ddfe94ab9], kube-scheduler [91345ca66b20]

** /stderr **
helpers_test.go:245: failed logs error: exit status 110
helpers_test.go:171: Cleaning up "missing-upgrade-20210310201637-6496" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-20210310201637-6496

=== CONT  TestMissingContainerUpgrade
helpers_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-20210310201637-6496: (25.8137319s)
--- FAIL: TestMissingContainerUpgrade (3566.06s)

TestPause/serial/Start (946.41s)

=== RUN   TestPause/serial/Start

=== CONT  TestPause/serial/Start
pause_test.go:75: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-20210310201637-6496 --memory=1800 --install-addons=false --wait=all --driver=docker

=== CONT  TestPause/serial/Start
pause_test.go:75: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p pause-20210310201637-6496 --memory=1800 --install-addons=false --wait=all --driver=docker: exit status 1 (15m0.0499936s)

-- stdout --
	* [pause-20210310201637-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - MINIKUBE_LOCATION=10722
	* Using the docker driver based on user configuration
	
	
	* Starting control plane node pause-20210310201637-6496 in cluster pause-20210310201637-6496
	* Creating docker container (CPUs=2, Memory=1800MB) ...
	* Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Verifying Kubernetes components...

-- /stdout --
** stderr ** 
	X Docker Desktop only has 20001MiB available, you may encounter application deployment failures.
	* Suggestion: 
	
	    1. Open the "Docker Desktop" menu by clicking the Docker icon in the system tray
	    2. Click "Settings"
	    3. Click "Resources"
	    4. Increase "Memory" slider bar to 2.25 GB or higher
	    5. Click "Apply & Restart"
	* Documentation: https://docs.docker.com/docker-for-windows/#resources
	! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts

** /stderr **
pause_test.go:77: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p pause-20210310201637-6496 --memory=1800 --install-addons=false --wait=all --driver=docker" : exit status 1
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:226: ======>  post-mortem[TestPause/serial/Start]: docker inspect <======
helpers_test.go:227: (dbg) Run:  docker inspect pause-20210310201637-6496

=== CONT  TestPause/serial/Start
helpers_test.go:231: (dbg) docker inspect pause-20210310201637-6496:

-- stdout --
	[
	    {
	        "Id": "b3d2173df4161f72d071bcb212b249b3c1bed8390f1d5fca127659942324a92c",
	        "Created": "2021-03-10T20:16:56.485191Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 124290,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-03-10T20:17:02.3667519Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a776c544501ab7f8d55c0f9d8df39bc284df5e744ef1ab4fa59bbd753c98d5f6",
	        "ResolvConfPath": "/var/lib/docker/containers/b3d2173df4161f72d071bcb212b249b3c1bed8390f1d5fca127659942324a92c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b3d2173df4161f72d071bcb212b249b3c1bed8390f1d5fca127659942324a92c/hostname",
	        "HostsPath": "/var/lib/docker/containers/b3d2173df4161f72d071bcb212b249b3c1bed8390f1d5fca127659942324a92c/hosts",
	        "LogPath": "/var/lib/docker/containers/b3d2173df4161f72d071bcb212b249b3c1bed8390f1d5fca127659942324a92c/b3d2173df4161f72d071bcb212b249b3c1bed8390f1d5fca127659942324a92c-json.log",
	        "Name": "/pause-20210310201637-6496",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20210310201637-6496:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 1887436800,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 1887436800,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c15e405d3cb0232138b12d5ed24b0b0db759d7b81dd91f482060150ed9ae185a-init/diff:/var/lib/docker/overlay2/e5312d15a8a915e93a04215cc746ab0f3ee777399eec34d39c62e1159ded6475/diff:/var/lib/docker/overlay2/7a0a882052451678339ecb9d7440ec6d5af2189761df37cbde268f01c77df460/diff:/var/lib/docker/overlay2/746bde091f748e661ba6c95c88447941058b3ed556ae8093cbbb4e946b8cc8fc/diff:/var/lib/docker/overlay2/fc816087ea38e2594891ac87d2dada91401ed5005ffde006568049c9be7a9cb3/diff:/var/lib/docker/overlay2/c938fe5d14accfe7fb1cfd038f5f0c36e7c4e36df3f270be6928a759dfc0318c/diff:/var/lib/docker/overlay2/79f1d61f3af3e525b3f9a25c7bdaddec8275b7d0abab23c045e084119ae755dc/diff:/var/lib/docker/overlay2/90cf1530750ebebdcb04a136dc1eff0ae9c5f7d0f826d2942e3bc67cc0d42206/diff:/var/lib/docker/overlay2/c4edb0a62bb52174a4a17530aa58139aaca1b2f67bef36b9ac59af93c93e2c94/diff:/var/lib/docker/overlay2/684c18bb05910b09352e8708238b2fa6512de91e323e59bd78eacb12954baeb1/diff:/var/lib/docker/overlay2/2e3b944d36fea0e106f5f8885b981d040914c224a656b6480e4ccf00c624527c/diff:/var/lib/docker/overlay2/87489103e2d5dffa2fc32cae802418fe57adcbe87d86e2828e51be8620ef72d6/diff:/var/lib/docker/overlay2/1ceb557df677600f55a42e739f85c2aaa6ce2a1311fb93d5cc655d3e5c25ad62/diff:/var/lib/docker/overlay2/a49e38adf04466e822fa1b2fef7a5de41bb43b706f64d6d440f7e9f95f86634d/diff:/var/lib/docker/overlay2/9dc51530b3628075c33d53a96d8db831d206f65190ca62a4730d525c9d2433bc/diff:/var/lib/docker/overlay2/53f43d443cc953bcbb3b9fe305be45d40de2233dd6a1e94a6e0120c143df01b9/diff:/var/lib/docker/overlay2/5449e801ebacbe613538db4de658f2419f742a0327f7c0f5079ee525f64c2f45/diff:/var/lib/docker/overlay2/ea06a62429770eade2e9df8b04495201586fe6a867b2d3f827db06031f237171/diff:/var/lib/docker/overlay2/d6c5e6e0bb04112b08a45e5a42f364eaa6a1d4d0384b8833c6f268a32b51f9fd/diff:/var/lib/docker/overlay2/6ccb699e5aa7954f17536e35273361a64a06f22e3951b5adad1acd3645302d27/diff:/var/lib/docker/overlay2/004fc7885cd3ddf4f532a6047a393b87947996f738f36ee3e048130b82676264/diff:/var/lib/docker/overlay2/28ff0102771d8fb3c2957c482cc5dde1a6d041144f1be78d385fd4a305b89048/diff:/var/lib/docker/overlay2/7cc8b00a8717390d3cf217bc23ebf0a54386c4dda751f8efe67a113c5f724000/diff:/var/lib/docker/overlay2/4f20f5888dc26fc4b177fad4ffbb2c9e5094bbe36dffc15530b13ee98b5ee0de/diff:/var/lib/docker/overlay2/5070ba0e8a3df0c0cb4c34bb528c4e1aa4cf13d689f2bbbb0a4033d021c3be55/diff:/var/lib/docker/overlay2/85fb29f9d3603bf7f2eb144a096a9fa432e57ecf575b0c26cc8e06a79e791202/diff:/var/lib/docker/overlay2/27786674974d44dc21a18719c8c334da44a78f825923f5e86233f5acb0fb9a6b/diff:/var/lib/docker/overlay2/6161fd89f27732c46f547c0e38ff9f16ab0dded81cef66938df3935f21afad40/diff:/var/lib/docker/overlay2/222f85b8439a41d62b9939a4060303543930a89e81bc2fbb3f5211b3bcc73501/diff:/var/lib/docker/overlay2/0ab769dc143c949b9367c48721351c8571d87199847b23acedce0134a8cf4d0d/diff:/var/lib/docker/overlay2/a3d1b5f3c1ccf55415e08e306d69739f2d6a4346a4b20ff9273ac1417c146a72/diff:/var/lib/docker/overlay2/a6111408a4deb25c6259bdabf49b0b77a4ae38a1c9c76c9dd238ab00e4377edd/diff:/var/lib/docker/overlay2/beb22abaee6a1cc2ce6935aa31a8ea6aa14ba428d35b5c2423d1d9e4b5b1e88e/diff:/var/lib/docker/overlay2/0c228b1d0cd4cc1d3df10a7397af2a91846bfbad745d623822dc200ff7ddd4f7/diff:/var/lib/docker/overlay2/3c7f265116e8b14ac4c171583da8e68ad42c63e9faa735c10d5b01c2889c03d8/diff:/var/lib/docker/overlay2/85c9b77072d5575bcf4bc334d0e85cccec247e2ff21bc8a181ee7c1411481a92/diff:/var/lib/docker/overlay2/1b4797f63f1bf0acad8cd18715c737239cbce844810baf1378b65224c01957ff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c15e405d3cb0232138b12d5ed24b0b0db759d7b81dd91f482060150ed9ae185a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c15e405d3cb0232138b12d5ed24b0b0db759d7b81dd91f482060150ed9ae185a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c15e405d3cb0232138b12d5ed24b0b0db759d7b81dd91f482060150ed9ae185a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20210310201637-6496",
	                "Source": "/var/lib/docker/volumes/pause-20210310201637-6496/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210310201637-6496",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210310201637-6496",
	                "name.minikube.sigs.k8s.io": "pause-20210310201637-6496",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d993731d1184a792f0c89f9bb48b42224fc9307aa27cf8a5d409d4c4b9f73eaa",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55093"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d993731d1184",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "36dff79ac73a2737920f6c2d137a65a3ba9c80695dcd6f14aa08475de6fb5ac6",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.5",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:05",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "08a9fabe5ecafdd8ee96dd0a14d1906037e82623ac7365c62552213da5f59cf7",
	                    "EndpointID": "36dff79ac73a2737920f6c2d137a65a3ba9c80695dcd6f14aa08475de6fb5ac6",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.5",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:05",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
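The forwarded ports in the `docker inspect` output above (SSH on 55108, the Docker daemon on 55104, the apiserver on 55093, and so on) can be read programmatically from the `NetworkSettings.Ports` map. A minimal sketch in Python, using a fragment abridged from the inspect output above (field names follow the Docker Engine inspect format):

```python
import json

# Abridged NetworkSettings.Ports from the inspect output above.
inspect_json = """
{
  "22/tcp":    [{"HostIp": "127.0.0.1", "HostPort": "55108"}],
  "2376/tcp":  [{"HostIp": "127.0.0.1", "HostPort": "55104"}],
  "8443/tcp":  [{"HostIp": "127.0.0.1", "HostPort": "55093"}]
}
"""

def host_port(ports: dict, container_port: str) -> str:
    """Return the first published host port for a container port key like '22/tcp'."""
    bindings = ports.get(container_port) or []
    if not bindings:
        raise KeyError(f"no binding for {container_port}")
    return bindings[0]["HostPort"]

ports = json.loads(inspect_json)
print(host_port(ports, "22/tcp"))  # prints the SSH host port, 55108
```

The same map is what `docker inspect --format` templates walk when tooling resolves which localhost port tunnels to a container service.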

=== CONT  TestPause/serial/Start
helpers_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20210310201637-6496 -n pause-20210310201637-6496

=== CONT  TestPause/serial/Start
helpers_test.go:235: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20210310201637-6496 -n pause-20210310201637-6496: exit status 3 (45.491861s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0310 20:32:23.952511    7600 status.go:363] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: EOF
	E0310 20:32:23.952511    7600 status.go:235] status error: NewSession: new client: new client: ssh: handshake failed: EOF

** /stderr **
helpers_test.go:235: status error: exit status 3 (may be ok)
helpers_test.go:237: "pause-20210310201637-6496" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestPause/serial/Start (946.41s)
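The status probe above failed with a transient `ssh: handshake failed: EOF` and exit status 3, which the harness notes "may be ok". As a hedged illustration only (this is not minikube's actual implementation), a polling wrapper that tolerates such transient failures could look like this, where the hypothetical `probe` callable stands in for the real `minikube status` invocation:

```python
import time

class TransientError(Exception):
    """Stands in for transient failures such as 'ssh: handshake failed: EOF'."""

def poll_until_ready(probe, attempts=5, delay=0.01):
    """Call probe() until it succeeds, retrying on TransientError after a fixed delay."""
    last = None
    for _ in range(attempts):
        try:
            return probe()
        except TransientError as err:
            last = err
            time.sleep(delay)
    raise last

# Simulated probe: fails twice with a handshake EOF, then reports Running.
state = {"calls": 0}
def probe():
    state["calls"] += 1
    if state["calls"] < 3:
        raise TransientError("ssh: handshake failed: EOF")
    return "Running"

print(poll_until_ready(probe))  # prints "Running" after two retried failures
```

A bounded retry like this distinguishes a host that is briefly unreachable during boot from one that is genuinely down, which is the judgment the helper makes when it skips log retrieval.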

TestStartStop/group/old-k8s-version/serial/FirstStart (1025.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-20210310204459-6496 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.14.0

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:155: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p old-k8s-version-20210310204459-6496 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.14.0: exit status 80 (16m57.6939208s)

-- stdout --
	* [old-k8s-version-20210310204459-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - MINIKUBE_LOCATION=10722
	* Using the docker driver based on user configuration
	* Starting control plane node old-k8s-version-20210310204459-6496 in cluster old-k8s-version-20210310204459-6496
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.14.0 on Docker 20.10.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  - Mar 10 21:00:00 old-k8s-version-20210310204459-6496 kubelet[3941]: E0310 21:00:00.391641    3941 pod_workers.go:190] Error syncing pod 3a9cb0607c644e32b5d6d0cd9bcdb263 ("kube-controller-manager-old-k8s-version-20210310204459-6496_kube-system(3a9cb0607c644e32b5d6d0cd9bcdb263)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-old-k8s-version-20210310204459-6496_kube-system(3a9cb0607c644e32b5d6d0cd9bcdb263)"
	
	

-- /stdout --
** stderr ** 
	I0310 20:45:00.255205   12928 out.go:239] Setting OutFile to fd 1756 ...
	I0310 20:45:00.257201   12928 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 20:45:00.257201   12928 out.go:252] Setting ErrFile to fd 1704...
	I0310 20:45:00.257201   12928 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 20:45:00.274317   12928 out.go:246] Setting JSON to false
	I0310 20:45:00.277206   12928 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":34566,"bootTime":1615374534,"procs":122,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0310 20:45:00.277206   12928 start.go:116] gopshost.Virtualization returned error: not implemented yet
	I0310 20:45:00.282208   12928 out.go:129] * [old-k8s-version-20210310204459-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0310 20:45:00.284207   12928 out.go:129]   - MINIKUBE_LOCATION=10722
	I0310 20:45:00.289221   12928 driver.go:323] Setting default libvirt URI to qemu:///system
	I0310 20:45:00.824356   12928 docker.go:119] docker version: linux-20.10.2
	I0310 20:45:00.836917   12928 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 20:45:01.866558   12928 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0296417s)
	I0310 20:45:01.869067   12928 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:105 OomKillDisable:true NGoroutines:91 SystemTime:2021-03-10 20:45:01.3861684 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 20:45:01.873618   12928 out.go:129] * Using the docker driver based on user configuration
	I0310 20:45:01.873897   12928 start.go:276] selected driver: docker
	I0310 20:45:01.873897   12928 start.go:718] validating driver "docker" against <nil>
	I0310 20:45:01.873897   12928 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0310 20:45:03.020868   12928 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 20:45:04.040267   12928 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0194001s)
	I0310 20:45:04.040939   12928 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:107 OomKillDisable:true NGoroutines:91 SystemTime:2021-03-10 20:45:03.5743498 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 20:45:04.041630   12928 start_flags.go:253] no existing cluster config was found, will generate one from the flags 
	I0310 20:45:04.041988   12928 start_flags.go:717] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0310 20:45:04.042380   12928 cni.go:74] Creating CNI manager for ""
	I0310 20:45:04.042380   12928 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	I0310 20:45:04.042380   12928 start_flags.go:398] config:
	{Name:old-k8s-version-20210310204459-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210310204459-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 20:45:04.046348   12928 out.go:129] * Starting control plane node old-k8s-version-20210310204459-6496 in cluster old-k8s-version-20210310204459-6496
	I0310 20:45:04.711858   12928 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	I0310 20:45:04.711858   12928 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	I0310 20:45:04.712630   12928 preload.go:97] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I0310 20:45:04.712910   12928 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4
	I0310 20:45:04.713093   12928 cache.go:54] Caching tarball of preloaded images
	I0310 20:45:04.713093   12928 preload.go:131] Found C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0310 20:45:04.713312   12928 cache.go:57] Finished verifying existence of preloaded tar for  v1.14.0 on docker
	I0310 20:45:04.713312   12928 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\config.json ...
	I0310 20:45:04.713879   12928 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\config.json: {Name:mkb0c21784bf43313016b1fffce280513139bf15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:45:04.728555   12928 cache.go:185] Successfully downloaded all kic artifacts
	I0310 20:45:04.729352   12928 start.go:313] acquiring machines lock for old-k8s-version-20210310204459-6496: {Name:mk75b6b2b8c7e9551ee9b4fdfdcee0e639bfef0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:45:04.729624   12928 start.go:317] acquired machines lock for "old-k8s-version-20210310204459-6496" in 271.7µs
	I0310 20:45:04.730175   12928 start.go:89] Provisioning new machine with config: &{Name:old-k8s-version-20210310204459-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210310204459-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I0310 20:45:04.730381   12928 start.go:126] createHost starting for "" (driver="docker")
	I0310 20:45:04.732579   12928 out.go:150] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0310 20:45:04.733599   12928 start.go:160] libmachine.API.Create for "old-k8s-version-20210310204459-6496" (driver="docker")
	I0310 20:45:04.733599   12928 client.go:168] LocalClient.Create starting
	I0310 20:45:04.734591   12928 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\ca.pem
	I0310 20:45:04.734591   12928 main.go:121] libmachine: Decoding PEM data...
	I0310 20:45:04.734591   12928 main.go:121] libmachine: Parsing certificate...
	I0310 20:45:04.734591   12928 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\cert.pem
	I0310 20:45:04.735594   12928 main.go:121] libmachine: Decoding PEM data...
	I0310 20:45:04.735594   12928 main.go:121] libmachine: Parsing certificate...
	I0310 20:45:04.764196   12928 cli_runner.go:115] Run: docker network inspect old-k8s-version-20210310204459-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0310 20:45:05.343119   12928 cli_runner.go:162] docker network inspect old-k8s-version-20210310204459-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0310 20:45:05.347432   12928 network_create.go:240] running [docker network inspect old-k8s-version-20210310204459-6496] to gather additional debugging logs...
	I0310 20:45:05.347432   12928 cli_runner.go:115] Run: docker network inspect old-k8s-version-20210310204459-6496
	W0310 20:45:05.940304   12928 cli_runner.go:162] docker network inspect old-k8s-version-20210310204459-6496 returned with exit code 1
	I0310 20:45:05.940304   12928 network_create.go:243] error running [docker network inspect old-k8s-version-20210310204459-6496]: docker network inspect old-k8s-version-20210310204459-6496: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20210310204459-6496
	I0310 20:45:05.940304   12928 network_create.go:245] output of [docker network inspect old-k8s-version-20210310204459-6496]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20210310204459-6496
	
	** /stderr **
	I0310 20:45:05.948873   12928 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0310 20:45:06.646529   12928 network.go:193] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0310 20:45:06.647278   12928 network_create.go:91] attempt to create network 192.168.49.0/24 with subnet: old-k8s-version-20210310204459-6496 and gateway 192.168.49.1 and MTU of 1500 ...
	I0310 20:45:06.654963   12928 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20210310204459-6496
	W0310 20:45:07.277860   12928 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20210310204459-6496 returned with exit code 1
	W0310 20:45:07.278667   12928 out.go:191] ! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts
	! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts
	I0310 20:45:07.298079   12928 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0310 20:45:07.911197   12928 cli_runner.go:115] Run: docker volume create old-k8s-version-20210310204459-6496 --label name.minikube.sigs.k8s.io=old-k8s-version-20210310204459-6496 --label created_by.minikube.sigs.k8s.io=true
	I0310 20:45:08.528283   12928 oci.go:102] Successfully created a docker volume old-k8s-version-20210310204459-6496
	I0310 20:45:08.536913   12928 cli_runner.go:115] Run: docker run --rm --name old-k8s-version-20210310204459-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20210310204459-6496 --entrypoint /usr/bin/test -v old-k8s-version-20210310204459-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib
	I0310 20:45:13.412101   12928 cli_runner.go:168] Completed: docker run --rm --name old-k8s-version-20210310204459-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20210310204459-6496 --entrypoint /usr/bin/test -v old-k8s-version-20210310204459-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib: (4.8748804s)
	I0310 20:45:13.412101   12928 oci.go:106] Successfully prepared a docker volume old-k8s-version-20210310204459-6496
	I0310 20:45:13.412101   12928 preload.go:97] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I0310 20:45:13.412101   12928 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4
	I0310 20:45:13.412101   12928 kic.go:175] Starting extracting preloaded images to volume ...
	I0310 20:45:13.420956   12928 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 20:45:13.420956   12928 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20210310204459-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir
	W0310 20:45:14.051843   12928 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20210310204459-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I0310 20:45:14.052427   12928 kic.go:182] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20210310204459-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: System.Exception: The notification platform is unavailable.
	
	The notification platform is unavailable.
	
	���?   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)
	   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.WPF\PromptShareDirectory.cs:line 53
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__8.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 95
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 55
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.HttpApi\Controllers\FilesharingController.cs:line 21
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()
	[garbled serialized .NET exception payload omitted; recoverable fields: CreateToastNotifier, Windows.UI.Notifications.ToastNotificationManager, RestrictedDescription: "The notification platform is unavailable."]
	See 'docker run --help'.
	I0310 20:45:14.432368   12928 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0103571s)
	I0310 20:45:14.432732   12928 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:109 OomKillDisable:true NGoroutines:94 SystemTime:2021-03-10 20:45:13.9415889 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 20:45:14.442693   12928 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0310 20:45:15.468738   12928 cli_runner.go:168] Completed: docker info --format "'{{json .SecurityOptions}}'": (1.0260466s)
	I0310 20:45:15.479860   12928 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20210310204459-6496 --name old-k8s-version-20210310204459-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20210310204459-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20210310204459-6496 --volume old-k8s-version-20210310204459-6496:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
	I0310 20:45:18.941811   12928 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20210310204459-6496 --name old-k8s-version-20210310204459-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20210310204459-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20210310204459-6496 --volume old-k8s-version-20210310204459-6496:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e: (3.4619553s)
	I0310 20:45:18.951318   12928 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210310204459-6496 --format={{.State.Running}}
	I0310 20:45:19.524049   12928 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210310204459-6496 --format={{.State.Status}}
	I0310 20:45:20.157679   12928 cli_runner.go:115] Run: docker exec old-k8s-version-20210310204459-6496 stat /var/lib/dpkg/alternatives/iptables
	I0310 20:45:21.963622   12928 cli_runner.go:168] Completed: docker exec old-k8s-version-20210310204459-6496 stat /var/lib/dpkg/alternatives/iptables: (1.8059453s)
	I0310 20:45:21.964004   12928 oci.go:278] the created container "old-k8s-version-20210310204459-6496" has a running status.
	I0310 20:45:21.964004   12928 kic.go:206] Creating ssh key for kic: C:\Users\jenkins\.minikube\machines\old-k8s-version-20210310204459-6496\id_rsa...
	I0310 20:45:22.361837   12928 kic_runner.go:188] docker (temp): C:\Users\jenkins\.minikube\machines\old-k8s-version-20210310204459-6496\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0310 20:45:23.348600   12928 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210310204459-6496 --format={{.State.Status}}
	I0310 20:45:23.958196   12928 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0310 20:45:23.958196   12928 kic_runner.go:115] Args: [docker exec --privileged old-k8s-version-20210310204459-6496 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0310 20:45:24.917870   12928 kic.go:240] ensuring only current user has permissions to key file located at : C:\Users\jenkins\.minikube\machines\old-k8s-version-20210310204459-6496\id_rsa...
	I0310 20:45:25.696877   12928 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210310204459-6496 --format={{.State.Status}}
	I0310 20:45:26.334097   12928 machine.go:88] provisioning docker machine ...
	I0310 20:45:26.334097   12928 ubuntu.go:169] provisioning hostname "old-k8s-version-20210310204459-6496"
	I0310 20:45:26.345166   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	I0310 20:45:26.954556   12928 main.go:121] libmachine: Using SSH client type: native
	I0310 20:45:26.971298   12928 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55138 <nil> <nil>}
	I0310 20:45:26.971298   12928 main.go:121] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20210310204459-6496 && echo "old-k8s-version-20210310204459-6496" | sudo tee /etc/hostname
	I0310 20:45:26.980262   12928 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0310 20:45:30.942198   12928 main.go:121] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20210310204459-6496
	
	I0310 20:45:30.949618   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	I0310 20:45:31.551362   12928 main.go:121] libmachine: Using SSH client type: native
	I0310 20:45:31.551704   12928 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55138 <nil> <nil>}
	I0310 20:45:31.551979   12928 main.go:121] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20210310204459-6496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20210310204459-6496/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20210310204459-6496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0310 20:45:32.358914   12928 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	I0310 20:45:32.358914   12928 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	I0310 20:45:32.358914   12928 ubuntu.go:177] setting up certificates
	I0310 20:45:32.358914   12928 provision.go:83] configureAuth start
	I0310 20:45:32.370381   12928 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210310204459-6496
	I0310 20:45:32.987615   12928 provision.go:137] copyHostCerts
	I0310 20:45:32.988467   12928 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	I0310 20:45:32.988617   12928 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	I0310 20:45:32.988818   12928 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	I0310 20:45:32.994199   12928 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	I0310 20:45:32.994320   12928 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	I0310 20:45:32.994911   12928 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	I0310 20:45:33.002984   12928 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	I0310 20:45:33.003152   12928 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	I0310 20:45:33.003729   12928 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	I0310 20:45:33.006728   12928 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.old-k8s-version-20210310204459-6496 san=[172.17.0.3 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20210310204459-6496]
	I0310 20:45:33.248434   12928 provision.go:165] copyRemoteCerts
	I0310 20:45:33.258428   12928 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0310 20:45:33.266434   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	I0310 20:45:33.881631   12928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55138 SSHKeyPath:C:\Users\jenkins\.minikube\machines\old-k8s-version-20210310204459-6496\id_rsa Username:docker}
	I0310 20:45:34.527804   12928 ssh_runner.go:189] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.2693782s)
	I0310 20:45:34.528542   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0310 20:45:34.862013   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0310 20:45:35.103719   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1277 bytes)
	I0310 20:45:35.279505   12928 provision.go:86] duration metric: configureAuth took 2.9205948s
	I0310 20:45:35.279505   12928 ubuntu.go:193] setting minikube options for container-runtime
	I0310 20:45:35.296522   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	I0310 20:45:35.956818   12928 main.go:121] libmachine: Using SSH client type: native
	I0310 20:45:35.957808   12928 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55138 <nil> <nil>}
	I0310 20:45:35.958075   12928 main.go:121] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0310 20:45:36.590941   12928 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0310 20:45:36.590941   12928 ubuntu.go:71] root file system type: overlay
	I0310 20:45:36.600870   12928 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	I0310 20:45:36.620128   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	I0310 20:45:37.213213   12928 main.go:121] libmachine: Using SSH client type: native
	I0310 20:45:37.214230   12928 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55138 <nil> <nil>}
	I0310 20:45:37.214230   12928 main.go:121] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0310 20:45:38.141733   12928 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0310 20:45:38.155124   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	I0310 20:45:38.755494   12928 main.go:121] libmachine: Using SSH client type: native
	I0310 20:45:38.756832   12928 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55138 <nil> <nil>}
	I0310 20:45:38.756832   12928 main.go:121] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0310 20:45:49.748208   12928 main.go:121] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-01-29 14:31:32.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-03-10 20:45:38.114142000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0310 20:45:49.748208   12928 machine.go:91] provisioned docker machine in 23.4141415s
	I0310 20:45:49.748208   12928 client.go:171] LocalClient.Create took 45.0146683s
	I0310 20:45:49.748208   12928 start.go:168] duration metric: libmachine.API.Create for "old-k8s-version-20210310204459-6496" took 45.0146683s
	I0310 20:45:49.748208   12928 start.go:267] post-start starting for "old-k8s-version-20210310204459-6496" (driver="docker")
	I0310 20:45:49.748208   12928 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0310 20:45:49.758614   12928 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0310 20:45:49.766365   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	I0310 20:45:50.422276   12928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55138 SSHKeyPath:C:\Users\jenkins\.minikube\machines\old-k8s-version-20210310204459-6496\id_rsa Username:docker}
	I0310 20:45:50.905494   12928 ssh_runner.go:189] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.1466426s)
	I0310 20:45:50.914491   12928 ssh_runner.go:149] Run: cat /etc/os-release
	I0310 20:45:50.966715   12928 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0310 20:45:50.966715   12928 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0310 20:45:50.966715   12928 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0310 20:45:50.966715   12928 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0310 20:45:50.967000   12928 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	I0310 20:45:50.967712   12928 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	I0310 20:45:50.969334   12928 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	I0310 20:45:50.969334   12928 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	I0310 20:45:50.990144   12928 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	I0310 20:45:51.083698   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	I0310 20:45:51.268611   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	I0310 20:45:51.511504   12928 start.go:270] post-start completed in 1.7632983s
	I0310 20:45:51.550746   12928 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210310204459-6496
	I0310 20:45:52.155265   12928 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\config.json ...
	I0310 20:45:52.188530   12928 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0310 20:45:52.196673   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	I0310 20:45:52.867889   12928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55138 SSHKeyPath:C:\Users\jenkins\.minikube\machines\old-k8s-version-20210310204459-6496\id_rsa Username:docker}
	I0310 20:45:53.353463   12928 ssh_runner.go:189] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.1647767s)
	I0310 20:45:53.353463   12928 start.go:129] duration metric: createHost completed in 48.6231448s
	I0310 20:45:53.354459   12928 start.go:80] releasing machines lock for "old-k8s-version-20210310204459-6496", held for 48.6235465s
	I0310 20:45:53.362679   12928 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210310204459-6496
	I0310 20:45:53.939528   12928 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0310 20:45:53.949438   12928 ssh_runner.go:149] Run: systemctl --version
	I0310 20:45:53.953944   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	I0310 20:45:53.956958   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	I0310 20:45:54.580367   12928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55138 SSHKeyPath:C:\Users\jenkins\.minikube\machines\old-k8s-version-20210310204459-6496\id_rsa Username:docker}
	I0310 20:45:54.615840   12928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55138 SSHKeyPath:C:\Users\jenkins\.minikube\machines\old-k8s-version-20210310204459-6496\id_rsa Username:docker}
	I0310 20:45:54.865625   12928 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0310 20:45:55.158095   12928 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.2181659s)
	I0310 20:45:55.168864   12928 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 20:45:55.262746   12928 cruntime.go:206] skipping containerd shutdown because we are bound to it
	I0310 20:45:55.271819   12928 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0310 20:45:55.396624   12928 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0310 20:45:55.604086   12928 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 20:45:55.693004   12928 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0310 20:45:56.722346   12928 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.0283425s)
	I0310 20:45:56.732654   12928 ssh_runner.go:149] Run: sudo systemctl start docker
	I0310 20:45:56.839967   12928 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0310 20:45:57.520716   12928 out.go:150] * Preparing Kubernetes v1.14.0 on Docker 20.10.3 ...
	I0310 20:45:57.530110   12928 cli_runner.go:115] Run: docker exec -t old-k8s-version-20210310204459-6496 dig +short host.docker.internal
	I0310 20:45:58.589310   12928 cli_runner.go:168] Completed: docker exec -t old-k8s-version-20210310204459-6496 dig +short host.docker.internal: (1.0590077s)
	I0310 20:45:58.589761   12928 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	I0310 20:45:58.599975   12928 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0310 20:45:58.629255   12928 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0310 20:45:58.790284   12928 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	I0310 20:45:59.375638   12928 localpath.go:92] copying C:\Users\jenkins\.minikube\client.crt -> C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\client.crt
	I0310 20:45:59.381369   12928 localpath.go:117] copying C:\Users\jenkins\.minikube\client.key -> C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\client.key
	I0310 20:45:59.385304   12928 preload.go:97] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I0310 20:45:59.385766   12928 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4
	I0310 20:45:59.394606   12928 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 20:45:59.844362   12928 docker.go:423] Got preloaded images: 
	I0310 20:45:59.844362   12928 docker.go:429] k8s.gcr.io/kube-proxy:v1.14.0 wasn't preloaded
	I0310 20:45:59.855645   12928 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0310 20:45:59.949427   12928 ssh_runner.go:149] Run: which lz4
	I0310 20:46:00.005687   12928 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0310 20:46:00.075060   12928 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0310 20:46:00.075364   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (488333642 bytes)
	I0310 20:47:10.617697   12928 docker.go:388] Took 70.619571 seconds to copy over tarball
	I0310 20:47:10.639143   12928 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0310 20:47:55.383138   12928 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (44.7436307s)
	I0310 20:47:55.383138   12928 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0310 20:47:57.352441   12928 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0310 20:47:57.410825   12928 ssh_runner.go:316] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3123 bytes)
	I0310 20:47:57.642121   12928 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0310 20:47:59.081806   12928 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.4396865s)
	I0310 20:47:59.084048   12928 ssh_runner.go:149] Run: sudo systemctl restart docker
	I0310 20:48:16.306081   12928 ssh_runner.go:189] Completed: sudo systemctl restart docker: (17.2218131s)
	I0310 20:48:16.318733   12928 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 20:48:17.207848   12928 docker.go:423] Got preloaded images: -- stdout --
	kubernetesui/dashboard:v2.1.0
	gcr.io/k8s-minikube/storage-provisioner:v4
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/kube-proxy:v1.14.0
	k8s.gcr.io/kube-controller-manager:v1.14.0
	k8s.gcr.io/kube-apiserver:v1.14.0
	k8s.gcr.io/kube-scheduler:v1.14.0
	k8s.gcr.io/coredns:1.3.1
	k8s.gcr.io/etcd:3.3.10
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0310 20:48:17.208570   12928 cache_images.go:73] Images are preloaded, skipping loading
	I0310 20:48:17.241235   12928 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0310 20:48:19.442860   12928 ssh_runner.go:189] Completed: docker info --format {{.CgroupDriver}}: (2.2016277s)
	I0310 20:48:19.443320   12928 cni.go:74] Creating CNI manager for ""
	I0310 20:48:19.443320   12928 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	I0310 20:48:19.443320   12928 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0310 20:48:19.443567   12928 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.3 APIServerPort:8443 KubernetesVersion:v1.14.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20210310204459-6496 NodeName:old-k8s-version-20210310204459-6496 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0310 20:48:19.443998   12928 kubeadm.go:154] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.0.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20210310204459-6496"
	  kubeletExtraArgs:
	    node-ip: 172.17.0.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.0.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20210310204459-6496
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://172.17.0.3:2381
	kubernetesVersion: v1.14.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	
	I0310 20:48:19.444438   12928 kubeadm.go:919] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.14.0/kubelet --allow-privileged=true --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --client-ca-file=/var/lib/minikube/certs/ca.crt --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20210310204459-6496 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210310204459-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0310 20:48:19.454962   12928 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.14.0
	I0310 20:48:19.534133   12928 binaries.go:44] Found k8s binaries, skipping transfer
	I0310 20:48:19.543848   12928 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0310 20:48:19.611003   12928 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (431 bytes)
	I0310 20:48:19.972107   12928 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0310 20:48:20.275076   12928 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1928 bytes)
	I0310 20:48:20.500483   12928 ssh_runner.go:149] Run: grep 172.17.0.3	control-plane.minikube.internal$ /etc/hosts
	I0310 20:48:20.541378   12928 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "172.17.0.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0310 20:48:20.668643   12928 certs.go:52] Setting up C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496 for IP: 172.17.0.3
	I0310 20:48:20.668643   12928 certs.go:171] skipping minikubeCA CA generation: C:\Users\jenkins\.minikube\ca.key
	I0310 20:48:20.668643   12928 certs.go:171] skipping proxyClientCA CA generation: C:\Users\jenkins\.minikube\proxy-client-ca.key
	I0310 20:48:20.668643   12928 certs.go:275] skipping minikube-user signed cert generation: C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\client.key
	I0310 20:48:20.668643   12928 certs.go:279] generating minikube signed cert: C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.key.0f3e66d0
	I0310 20:48:20.668643   12928 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.crt.0f3e66d0 with IP's: [172.17.0.3 10.96.0.1 127.0.0.1 10.0.0.1]
	I0310 20:48:20.862651   12928 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.crt.0f3e66d0 ...
	I0310 20:48:20.862651   12928 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.crt.0f3e66d0: {Name:mk4d990127210c9e93f70bb2fa83fed3ed7d8272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:48:20.886181   12928 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.key.0f3e66d0 ...
	I0310 20:48:20.886181   12928 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.key.0f3e66d0: {Name:mk3b112be41963d8a84df37233731d1e05b06ba0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:48:20.895703   12928 certs.go:290] copying C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.crt.0f3e66d0 -> C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.crt
	I0310 20:48:20.899122   12928 certs.go:294] copying C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.key.0f3e66d0 -> C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.key
	I0310 20:48:20.906198   12928 certs.go:279] generating aggregator signed cert: C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\proxy-client.key
	I0310 20:48:20.906198   12928 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\proxy-client.crt with IP's: []
	I0310 20:48:21.063572   12928 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\proxy-client.crt ...
	I0310 20:48:21.064582   12928 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\proxy-client.crt: {Name:mkc85b22c9bece2080565bade554ebf8aae7c395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:48:21.073606   12928 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\proxy-client.key ...
	I0310 20:48:21.073606   12928 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\proxy-client.key: {Name:mkc400cbeb274a69f5d3aa3f494371d783186217 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:48:21.085586   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992.pem (1338 bytes)
	W0310 20:48:21.085586   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.085586   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140.pem (1338 bytes)
	W0310 20:48:21.086578   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.086578   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156.pem (1338 bytes)
	W0310 20:48:21.086578   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.086578   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056.pem (1338 bytes)
	W0310 20:48:21.087590   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.087590   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476.pem (1338 bytes)
	W0310 20:48:21.087590   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.087590   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728.pem (1338 bytes)
	W0310 20:48:21.088583   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.088583   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984.pem (1338 bytes)
	W0310 20:48:21.088583   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.088583   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232.pem (1338 bytes)
	W0310 20:48:21.089583   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.089583   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512.pem (1338 bytes)
	W0310 20:48:21.089583   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.089583   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056.pem (1338 bytes)
	W0310 20:48:21.089583   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.090584   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516.pem (1338 bytes)
	W0310 20:48:21.090584   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.090584   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352.pem (1338 bytes)
	W0310 20:48:21.090584   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.090584   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920.pem (1338 bytes)
	W0310 20:48:21.091582   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.091582   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052.pem (1338 bytes)
	W0310 20:48:21.091582   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.091582   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452.pem (1338 bytes)
	W0310 20:48:21.092581   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.092581   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588.pem (1338 bytes)
	W0310 20:48:21.092581   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.092581   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944.pem (1338 bytes)
	W0310 20:48:21.093639   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.093639   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040.pem (1338 bytes)
	W0310 20:48:21.093639   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.093639   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172.pem (1338 bytes)
	W0310 20:48:21.093639   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.094657   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372.pem (1338 bytes)
	W0310 20:48:21.094657   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.094657   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396.pem (1338 bytes)
	W0310 20:48:21.094657   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.094657   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700.pem (1338 bytes)
	W0310 20:48:21.095655   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.095655   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736.pem (1338 bytes)
	W0310 20:48:21.095655   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.095655   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368.pem (1338 bytes)
	W0310 20:48:21.096667   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.096667   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492.pem (1338 bytes)
	W0310 20:48:21.096667   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.096667   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496.pem (1338 bytes)
	W0310 20:48:21.097630   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.097630   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552.pem (1338 bytes)
	W0310 20:48:21.097630   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.097630   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692.pem (1338 bytes)
	W0310 20:48:21.098655   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.098655   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856.pem (1338 bytes)
	W0310 20:48:21.098655   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.099460   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024.pem (1338 bytes)
	W0310 20:48:21.099877   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.100608   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160.pem (1338 bytes)
	W0310 20:48:21.100608   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.100608   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432.pem (1338 bytes)
	W0310 20:48:21.100608   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.100608   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440.pem (1338 bytes)
	W0310 20:48:21.100608   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.100608   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452.pem (1338 bytes)
	W0310 20:48:21.100608   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.100608   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800.pem (1338 bytes)
	W0310 20:48:21.100608   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.100608   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464.pem (1338 bytes)
	W0310 20:48:21.100608   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.100608   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748.pem (1338 bytes)
	W0310 20:48:21.100608   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.100608   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088.pem (1338 bytes)
	W0310 20:48:21.100608   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.100608   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520.pem (1338 bytes)
	W0310 20:48:21.104844   12928 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520_empty.pem, impossibly tiny 0 bytes
	I0310 20:48:21.105242   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca-key.pem (1679 bytes)
	I0310 20:48:21.105585   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca.pem (1078 bytes)
	I0310 20:48:21.105585   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\cert.pem (1123 bytes)
	I0310 20:48:21.106404   12928 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\key.pem (1679 bytes)
	I0310 20:48:21.117199   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0310 20:48:21.513740   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0310 20:48:21.760642   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0310 20:48:22.037866   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0310 20:48:22.744649   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0310 20:48:23.068283   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0310 20:48:23.437363   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0310 20:48:23.716689   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0310 20:48:24.144080   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5172.pem --> /usr/share/ca-certificates/5172.pem (1338 bytes)
	I0310 20:48:24.496544   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5040.pem --> /usr/share/ca-certificates/5040.pem (1338 bytes)
	I0310 20:48:24.706713   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1984.pem --> /usr/share/ca-certificates/1984.pem (1338 bytes)
	I0310 20:48:24.888872   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	I0310 20:48:25.072441   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3516.pem --> /usr/share/ca-certificates/3516.pem (1338 bytes)
	I0310 20:48:25.295478   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6692.pem --> /usr/share/ca-certificates/6692.pem (1338 bytes)
	I0310 20:48:25.539068   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6368.pem --> /usr/share/ca-certificates/6368.pem (1338 bytes)
	I0310 20:48:25.786296   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5700.pem --> /usr/share/ca-certificates/5700.pem (1338 bytes)
	I0310 20:48:25.971391   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1140.pem --> /usr/share/ca-certificates/1140.pem (1338 bytes)
	I0310 20:48:26.186445   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9088.pem --> /usr/share/ca-certificates/9088.pem (1338 bytes)
	I0310 20:48:26.360798   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6856.pem --> /usr/share/ca-certificates/6856.pem (1338 bytes)
	I0310 20:48:26.691872   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4052.pem --> /usr/share/ca-certificates/4052.pem (1338 bytes)
	I0310 20:48:26.874779   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3056.pem --> /usr/share/ca-certificates/3056.pem (1338 bytes)
	I0310 20:48:27.094519   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\352.pem --> /usr/share/ca-certificates/352.pem (1338 bytes)
	I0310 20:48:27.926836   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5396.pem --> /usr/share/ca-certificates/5396.pem (1338 bytes)
	I0310 20:48:28.188557   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4944.pem --> /usr/share/ca-certificates/4944.pem (1338 bytes)
	I0310 20:48:28.394647   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4452.pem --> /usr/share/ca-certificates/4452.pem (1338 bytes)
	I0310 20:48:28.790329   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5372.pem --> /usr/share/ca-certificates/5372.pem (1338 bytes)
	I0310 20:48:29.094073   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\2512.pem --> /usr/share/ca-certificates/2512.pem (1338 bytes)
	I0310 20:48:29.321739   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3920.pem --> /usr/share/ca-certificates/3920.pem (1338 bytes)
	I0310 20:48:29.607788   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6552.pem --> /usr/share/ca-certificates/6552.pem (1338 bytes)
	I0310 20:48:29.785678   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6496.pem --> /usr/share/ca-certificates/6496.pem (1338 bytes)
	I0310 20:48:30.032036   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7432.pem --> /usr/share/ca-certificates/7432.pem (1338 bytes)
	I0310 20:48:30.338918   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\12056.pem --> /usr/share/ca-certificates/12056.pem (1338 bytes)
	I0310 20:48:30.567680   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\232.pem --> /usr/share/ca-certificates/232.pem (1338 bytes)
	I0310 20:48:30.799494   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6492.pem --> /usr/share/ca-certificates/6492.pem (1338 bytes)
	I0310 20:48:30.986912   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8464.pem --> /usr/share/ca-certificates/8464.pem (1338 bytes)
	I0310 20:48:31.213564   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\10992.pem --> /usr/share/ca-certificates/10992.pem (1338 bytes)
	I0310 20:48:31.533104   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1156.pem --> /usr/share/ca-certificates/1156.pem (1338 bytes)
	I0310 20:48:31.806558   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8748.pem --> /usr/share/ca-certificates/8748.pem (1338 bytes)
	I0310 20:48:32.025579   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0310 20:48:32.313142   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1728.pem --> /usr/share/ca-certificates/1728.pem (1338 bytes)
	I0310 20:48:32.820375   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\800.pem --> /usr/share/ca-certificates/800.pem (1338 bytes)
	I0310 20:48:33.091682   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1476.pem --> /usr/share/ca-certificates/1476.pem (1338 bytes)
	I0310 20:48:33.361952   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5736.pem --> /usr/share/ca-certificates/5736.pem (1338 bytes)
	I0310 20:48:33.546662   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7160.pem --> /usr/share/ca-certificates/7160.pem (1338 bytes)
	I0310 20:48:33.880265   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7024.pem --> /usr/share/ca-certificates/7024.pem (1338 bytes)
	I0310 20:48:34.176826   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9520.pem --> /usr/share/ca-certificates/9520.pem (1338 bytes)
	I0310 20:48:34.443774   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7440.pem --> /usr/share/ca-certificates/7440.pem (1338 bytes)
	I0310 20:48:34.820050   12928 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4588.pem --> /usr/share/ca-certificates/4588.pem (1338 bytes)
	I0310 20:48:35.243486   12928 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0310 20:48:35.500540   12928 ssh_runner.go:149] Run: openssl version
	I0310 20:48:35.574294   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6692.pem && ln -fs /usr/share/ca-certificates/6692.pem /etc/ssl/certs/6692.pem"
	I0310 20:48:35.650674   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6692.pem
	I0310 20:48:35.681694   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 20:42 /usr/share/ca-certificates/6692.pem
	I0310 20:48:35.695254   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6692.pem
	I0310 20:48:35.748262   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6692.pem /etc/ssl/certs/51391683.0"
	I0310 20:48:35.815809   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5700.pem && ln -fs /usr/share/ca-certificates/5700.pem /etc/ssl/certs/5700.pem"
	I0310 20:48:35.910004   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5700.pem
	I0310 20:48:35.970953   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  1 19:58 /usr/share/ca-certificates/5700.pem
	I0310 20:48:35.986910   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5700.pem
	I0310 20:48:36.035569   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5700.pem /etc/ssl/certs/51391683.0"
	I0310 20:48:36.169036   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1140.pem && ln -fs /usr/share/ca-certificates/1140.pem /etc/ssl/certs/1140.pem"
	I0310 20:48:36.248538   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1140.pem
	I0310 20:48:36.281799   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 02:25 /usr/share/ca-certificates/1140.pem
	I0310 20:48:36.296020   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1140.pem
	I0310 20:48:36.339195   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1140.pem /etc/ssl/certs/51391683.0"
	I0310 20:48:36.400862   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9088.pem && ln -fs /usr/share/ca-certificates/9088.pem /etc/ssl/certs/9088.pem"
	I0310 20:48:36.510827   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9088.pem
	I0310 20:48:36.544150   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 00:22 /usr/share/ca-certificates/9088.pem
	I0310 20:48:36.554515   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9088.pem
	I0310 20:48:36.603970   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9088.pem /etc/ssl/certs/51391683.0"
	I0310 20:48:36.661485   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6368.pem && ln -fs /usr/share/ca-certificates/6368.pem /etc/ssl/certs/6368.pem"
	I0310 20:48:36.741293   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6368.pem
	I0310 20:48:36.771506   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 19:11 /usr/share/ca-certificates/6368.pem
	I0310 20:48:36.782867   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6368.pem
	I0310 20:48:36.831341   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6368.pem /etc/ssl/certs/51391683.0"
	I0310 20:48:36.901844   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3056.pem && ln -fs /usr/share/ca-certificates/3056.pem /etc/ssl/certs/3056.pem"
	I0310 20:48:36.960231   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3056.pem
	I0310 20:48:36.991039   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:11 /usr/share/ca-certificates/3056.pem
	I0310 20:48:37.001755   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3056.pem
	I0310 20:48:37.120470   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3056.pem /etc/ssl/certs/51391683.0"
	I0310 20:48:37.219779   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/352.pem && ln -fs /usr/share/ca-certificates/352.pem /etc/ssl/certs/352.pem"
	I0310 20:48:37.350451   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/352.pem
	I0310 20:48:37.374271   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 12 14:51 /usr/share/ca-certificates/352.pem
	I0310 20:48:37.385332   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/352.pem
	I0310 20:48:37.493863   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/352.pem /etc/ssl/certs/51391683.0"
	I0310 20:48:37.554705   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5396.pem && ln -fs /usr/share/ca-certificates/5396.pem /etc/ssl/certs/5396.pem"
	I0310 20:48:37.655597   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5396.pem
	I0310 20:48:37.694141   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  8 23:38 /usr/share/ca-certificates/5396.pem
	I0310 20:48:37.710161   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5396.pem
	I0310 20:48:37.762947   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5396.pem /etc/ssl/certs/51391683.0"
	I0310 20:48:37.827899   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4944.pem && ln -fs /usr/share/ca-certificates/4944.pem /etc/ssl/certs/4944.pem"
	I0310 20:48:37.894174   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4944.pem
	I0310 20:48:37.915678   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  9 23:40 /usr/share/ca-certificates/4944.pem
	I0310 20:48:37.927030   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4944.pem
	I0310 20:48:37.967096   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4944.pem /etc/ssl/certs/51391683.0"
	I0310 20:48:38.026740   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4452.pem && ln -fs /usr/share/ca-certificates/4452.pem /etc/ssl/certs/4452.pem"
	I0310 20:48:38.090043   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4452.pem
	I0310 20:48:38.126200   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 22 23:48 /usr/share/ca-certificates/4452.pem
	I0310 20:48:38.136147   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4452.pem
	I0310 20:48:38.239371   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4452.pem /etc/ssl/certs/51391683.0"
	I0310 20:48:38.331111   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5372.pem && ln -fs /usr/share/ca-certificates/5372.pem /etc/ssl/certs/5372.pem"
	I0310 20:48:38.413542   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5372.pem
	I0310 20:48:38.442408   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 23 00:40 /usr/share/ca-certificates/5372.pem
	I0310 20:48:38.452607   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5372.pem
	I0310 20:48:38.499760   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5372.pem /etc/ssl/certs/51391683.0"
	I0310 20:48:38.577229   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6856.pem && ln -fs /usr/share/ca-certificates/6856.pem /etc/ssl/certs/6856.pem"
	I0310 20:48:38.641034   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6856.pem
	I0310 20:48:38.674442   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 00:21 /usr/share/ca-certificates/6856.pem
	I0310 20:48:38.690347   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6856.pem
	I0310 20:48:38.757770   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6856.pem /etc/ssl/certs/51391683.0"
	I0310 20:48:38.864479   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4052.pem && ln -fs /usr/share/ca-certificates/4052.pem /etc/ssl/certs/4052.pem"
	I0310 20:48:38.942637   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4052.pem
	I0310 20:48:39.028587   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 18:40 /usr/share/ca-certificates/4052.pem
	I0310 20:48:39.038752   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4052.pem
	I0310 20:48:39.107231   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4052.pem /etc/ssl/certs/51391683.0"
	I0310 20:48:39.167667   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2512.pem && ln -fs /usr/share/ca-certificates/2512.pem /etc/ssl/certs/2512.pem"
	I0310 20:48:39.232613   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2512.pem
	I0310 20:48:39.263457   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:32 /usr/share/ca-certificates/2512.pem
	I0310 20:48:39.273269   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2512.pem
	I0310 20:48:39.334847   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2512.pem /etc/ssl/certs/51391683.0"
	I0310 20:48:39.402674   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3920.pem && ln -fs /usr/share/ca-certificates/3920.pem /etc/ssl/certs/3920.pem"
	I0310 20:48:39.571265   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3920.pem
	I0310 20:48:39.604484   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 22:06 /usr/share/ca-certificates/3920.pem
	I0310 20:48:39.633603   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3920.pem
	I0310 20:48:39.805793   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3920.pem /etc/ssl/certs/51391683.0"
	I0310 20:48:39.880251   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6552.pem && ln -fs /usr/share/ca-certificates/6552.pem /etc/ssl/certs/6552.pem"
	I0310 20:48:39.959855   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6552.pem
	I0310 20:48:40.024594   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 19 22:08 /usr/share/ca-certificates/6552.pem
	I0310 20:48:40.036081   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6552.pem
	I0310 20:48:40.106252   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6552.pem /etc/ssl/certs/51391683.0"
	I0310 20:48:40.184730   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6496.pem && ln -fs /usr/share/ca-certificates/6496.pem /etc/ssl/certs/6496.pem"
	I0310 20:48:40.351306   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6496.pem
	I0310 20:48:40.391835   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 19:16 /usr/share/ca-certificates/6496.pem
	I0310 20:48:40.401307   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6496.pem
	I0310 20:48:40.477301   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6496.pem /etc/ssl/certs/51391683.0"
	I0310 20:48:40.545447   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7432.pem && ln -fs /usr/share/ca-certificates/7432.pem /etc/ssl/certs/7432.pem"
	I0310 20:48:40.618259   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7432.pem
	I0310 20:48:40.658602   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 17:58 /usr/share/ca-certificates/7432.pem
	I0310 20:48:40.674461   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7432.pem
	I0310 20:48:40.736048   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7432.pem /etc/ssl/certs/51391683.0"
	I0310 20:48:40.835411   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12056.pem && ln -fs /usr/share/ca-certificates/12056.pem /etc/ssl/certs/12056.pem"
	I0310 20:48:40.908627   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/12056.pem
	I0310 20:48:40.949747   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  6 07:21 /usr/share/ca-certificates/12056.pem
	I0310 20:48:40.967671   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12056.pem
	I0310 20:48:41.352067   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12056.pem /etc/ssl/certs/51391683.0"
	I0310 20:48:41.417455   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8464.pem && ln -fs /usr/share/ca-certificates/8464.pem /etc/ssl/certs/8464.pem"
	I0310 20:48:41.517072   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8464.pem
	I0310 20:48:41.546197   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:32 /usr/share/ca-certificates/8464.pem
	I0310 20:48:41.556596   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8464.pem
	I0310 20:48:41.613329   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8464.pem /etc/ssl/certs/51391683.0"
	I0310 20:48:41.714299   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10992.pem && ln -fs /usr/share/ca-certificates/10992.pem /etc/ssl/certs/10992.pem"
	I0310 20:48:41.825921   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/10992.pem
	I0310 20:48:41.849980   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 21:44 /usr/share/ca-certificates/10992.pem
	I0310 20:48:41.864898   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10992.pem
	I0310 20:48:41.922840   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10992.pem /etc/ssl/certs/51391683.0"
	I0310 20:48:42.017877   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1156.pem && ln -fs /usr/share/ca-certificates/1156.pem /etc/ssl/certs/1156.pem"
	I0310 20:48:42.178610   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1156.pem
	I0310 20:48:42.222069   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 00:26 /usr/share/ca-certificates/1156.pem
	I0310 20:48:42.232994   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1156.pem
	I0310 20:48:42.311917   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1156.pem /etc/ssl/certs/51391683.0"
	I0310 20:48:42.444203   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8748.pem && ln -fs /usr/share/ca-certificates/8748.pem /etc/ssl/certs/8748.pem"
	I0310 20:48:42.551950   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8748.pem
	I0310 20:48:42.626121   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 19:09 /usr/share/ca-certificates/8748.pem
	I0310 20:48:42.637532   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8748.pem
	I0310 20:48:42.693402   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8748.pem /etc/ssl/certs/51391683.0"
	I0310 20:48:42.768815   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/232.pem && ln -fs /usr/share/ca-certificates/232.pem /etc/ssl/certs/232.pem"
	I0310 20:48:42.854401   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/232.pem
	I0310 20:48:42.900184   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 28 02:13 /usr/share/ca-certificates/232.pem
	I0310 20:48:42.908801   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/232.pem
	I0310 20:48:43.052775   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/232.pem /etc/ssl/certs/51391683.0"
	I0310 20:48:43.262207   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6492.pem && ln -fs /usr/share/ca-certificates/6492.pem /etc/ssl/certs/6492.pem"
	I0310 20:48:43.395161   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6492.pem
	I0310 20:48:43.457844   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 01:11 /usr/share/ca-certificates/6492.pem
	I0310 20:48:43.478189   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6492.pem
	I0310 20:48:43.562538   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6492.pem /etc/ssl/certs/51391683.0"
	I0310 20:48:43.709162   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/800.pem && ln -fs /usr/share/ca-certificates/800.pem /etc/ssl/certs/800.pem"
	I0310 20:48:43.876180   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/800.pem
	I0310 20:48:43.943286   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 24 01:48 /usr/share/ca-certificates/800.pem
	I0310 20:48:43.962138   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/800.pem
	I0310 20:48:44.029379   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/800.pem /etc/ssl/certs/51391683.0"
	I0310 20:48:44.217921   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1476.pem && ln -fs /usr/share/ca-certificates/1476.pem /etc/ssl/certs/1476.pem"
	I0310 20:48:44.301845   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1476.pem
	I0310 20:48:44.376495   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:50 /usr/share/ca-certificates/1476.pem
	I0310 20:48:44.387476   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1476.pem
	I0310 20:48:44.456446   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1476.pem /etc/ssl/certs/51391683.0"
	I0310 20:48:44.592511   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5736.pem && ln -fs /usr/share/ca-certificates/5736.pem /etc/ssl/certs/5736.pem"
	I0310 20:48:44.700759   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5736.pem
	I0310 20:48:44.738480   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 25 23:18 /usr/share/ca-certificates/5736.pem
	I0310 20:48:44.748403   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5736.pem
	I0310 20:48:44.805136   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5736.pem /etc/ssl/certs/51391683.0"
	I0310 20:48:44.928726   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0310 20:48:45.044469   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0310 20:48:45.095289   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Jan  5 23:33 /usr/share/ca-certificates/minikubeCA.pem
	I0310 20:48:45.113350   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0310 20:48:45.186929   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0310 20:48:45.321146   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1728.pem && ln -fs /usr/share/ca-certificates/1728.pem /etc/ssl/certs/1728.pem"
	I0310 20:48:45.445664   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1728.pem
	I0310 20:48:45.482187   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 20:57 /usr/share/ca-certificates/1728.pem
	I0310 20:48:45.502749   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1728.pem
	I0310 20:48:45.554720   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0"
	I0310 20:48:45.629427   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9520.pem && ln -fs /usr/share/ca-certificates/9520.pem /etc/ssl/certs/9520.pem"
	I0310 20:48:45.697187   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9520.pem
	I0310 20:48:45.727043   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 14:54 /usr/share/ca-certificates/9520.pem
	I0310 20:48:45.738673   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9520.pem
	I0310 20:48:45.808330   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9520.pem /etc/ssl/certs/51391683.0"
	I0310 20:48:45.893296   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7440.pem && ln -fs /usr/share/ca-certificates/7440.pem /etc/ssl/certs/7440.pem"
	I0310 20:48:45.982785   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7440.pem
	I0310 20:48:46.020660   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 13 14:39 /usr/share/ca-certificates/7440.pem
	I0310 20:48:46.035740   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7440.pem
	I0310 20:48:46.097117   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7440.pem /etc/ssl/certs/51391683.0"
	I0310 20:48:46.169019   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4588.pem && ln -fs /usr/share/ca-certificates/4588.pem /etc/ssl/certs/4588.pem"
	I0310 20:48:46.291358   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4588.pem
	I0310 20:48:46.386018   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  3 21:41 /usr/share/ca-certificates/4588.pem
	I0310 20:48:46.420499   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4588.pem
	I0310 20:48:46.475764   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4588.pem /etc/ssl/certs/51391683.0"
	I0310 20:48:46.630846   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7160.pem && ln -fs /usr/share/ca-certificates/7160.pem /etc/ssl/certs/7160.pem"
	I0310 20:48:46.703689   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7160.pem
	I0310 20:48:46.735499   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 12 04:51 /usr/share/ca-certificates/7160.pem
	I0310 20:48:46.754694   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7160.pem
	I0310 20:48:46.814457   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7160.pem /etc/ssl/certs/51391683.0"
	I0310 20:48:46.903052   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7024.pem && ln -fs /usr/share/ca-certificates/7024.pem /etc/ssl/certs/7024.pem"
	I0310 20:48:46.969551   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7024.pem
	I0310 20:48:47.014846   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 23:11 /usr/share/ca-certificates/7024.pem
	I0310 20:48:47.025370   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7024.pem
	I0310 20:48:47.103567   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7024.pem /etc/ssl/certs/51391683.0"
	I0310 20:48:47.194455   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1984.pem && ln -fs /usr/share/ca-certificates/1984.pem /etc/ssl/certs/1984.pem"
	I0310 20:48:47.297186   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1984.pem
	I0310 20:48:47.356622   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 21:55 /usr/share/ca-certificates/1984.pem
	I0310 20:48:47.360289   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1984.pem
	I0310 20:48:47.460715   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1984.pem /etc/ssl/certs/51391683.0"
	I0310 20:48:47.546824   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7452.pem && ln -fs /usr/share/ca-certificates/7452.pem /etc/ssl/certs/7452.pem"
	I0310 20:48:47.627035   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7452.pem
	I0310 20:48:47.662210   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 20 00:41 /usr/share/ca-certificates/7452.pem
	I0310 20:48:47.673768   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7452.pem
	I0310 20:48:47.749535   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7452.pem /etc/ssl/certs/51391683.0"
	I0310 20:48:47.806523   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3516.pem && ln -fs /usr/share/ca-certificates/3516.pem /etc/ssl/certs/3516.pem"
	I0310 20:48:47.961579   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3516.pem
	I0310 20:48:47.999154   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 19:10 /usr/share/ca-certificates/3516.pem
	I0310 20:48:48.008851   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3516.pem
	I0310 20:48:48.071015   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3516.pem /etc/ssl/certs/51391683.0"
	I0310 20:48:48.172030   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5172.pem && ln -fs /usr/share/ca-certificates/5172.pem /etc/ssl/certs/5172.pem"
	I0310 20:48:48.248251   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5172.pem
	I0310 20:48:48.281154   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 26 21:25 /usr/share/ca-certificates/5172.pem
	I0310 20:48:48.296457   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5172.pem
	I0310 20:48:48.348775   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5172.pem /etc/ssl/certs/51391683.0"
	I0310 20:48:48.439206   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5040.pem && ln -fs /usr/share/ca-certificates/5040.pem /etc/ssl/certs/5040.pem"
	I0310 20:48:48.554057   12928 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5040.pem
	I0310 20:48:48.618046   12928 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 08:36 /usr/share/ca-certificates/5040.pem
	I0310 20:48:48.632735   12928 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5040.pem
	I0310 20:48:48.683261   12928 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5040.pem /etc/ssl/certs/51391683.0"
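The long run of log lines above repeats one pattern per CA certificate: copy-test the `.pem` into `/usr/share/ca-certificates`, compute its OpenSSL subject hash, then symlink `<hash>.0` under `/etc/ssl/certs` so TLS libraries can find it. A minimal sketch of that pattern (an illustration using a throwaway self-signed CA in a temp directory, not minikube's actual code or paths):

```shell
# Reproduce the install-and-hash pattern from the log in a temp directory.
set -eu
certdir=$(mktemp -d)

# Generate a throwaway self-signed CA purely for illustration.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$certdir/ca.key" -out "$certdir/demoCA.pem" -days 1 2>/dev/null

# Same shape as the log: `test -s` the cert, hash it with
# `openssl x509 -hash -noout`, then link <hash>.0 -> cert.
test -s "$certdir/demoCA.pem"
hash=$(openssl x509 -hash -noout -in "$certdir/demoCA.pem")
ln -fs "$certdir/demoCA.pem" "$certdir/$hash.0"
ls -l "$certdir/$hash.0"
```

This is why the log shows the same three `ssh_runner` commands (`ln -fs`, `ls -la`, `openssl x509 -hash`) for every `.pem` file: each certificate gets the same hash-symlink treatment.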
	I0310 20:48:48.768887   12928 kubeadm.go:385] StartCluster: {Name:old-k8s-version-20210310204459-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210310204459-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.3 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 20:48:48.775847   12928 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0310 20:48:49.462903   12928 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0310 20:48:49.542762   12928 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0310 20:48:49.646072   12928 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	I0310 20:48:49.654944   12928 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0310 20:48:49.767382   12928 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
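The `config check failed, skipping stale config cleanup` line above is expected on a fresh node: minikube runs one `ls -la` over the four kubeconfig files and uses its exit status to decide whether old configs need cleaning. A hedged sketch of that idiom (temp directory stands in for `/etc/kubernetes`; not minikube's actual code):

```shell
# `ls` exits non-zero (status 2) if any listed file is missing, so a single
# command checks all four kubeconfigs at once, as in the log.
dir=$(mktemp -d)   # stand-in for /etc/kubernetes on a fresh node
if ls -la "$dir/admin.conf" "$dir/kubelet.conf" \
          "$dir/controller-manager.conf" "$dir/scheduler.conf" \
          >/dev/null 2>&1; then
  result="existing configs found: cleaning up stale configs"
else
  result="config check failed, skipping stale config cleanup"
fi
echo "$result"
```

Since the files do not exist yet, the check fails and minikube proceeds straight to `kubeadm init`, which is the next log line.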
	I0310 20:48:49.767628   12928 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0310 20:54:37.895370   12928 out.go:150]   - Generating certificates and keys ...
	I0310 20:54:37.908347   12928 out.go:150]   - Booting up control plane ...
	W0310 20:54:37.997892   12928 out.go:191] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.14.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-20210310204459-6496 localhost] and IPs [172.17.0.3 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-20210310204459-6496 localhost] and IPs [172.17.0.3 127.0.0.1 ::1]
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	
	I0310 20:54:37.997892   12928 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0310 20:56:16.705256   12928 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force": (1m38.708404s)
	I0310 20:56:16.705923   12928 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0310 20:56:17.101905   12928 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0310 20:56:18.192476   12928 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}: (1.0905809s)
	I0310 20:56:18.193509   12928 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	I0310 20:56:18.202968   12928 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0310 20:56:18.305687   12928 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0310 20:56:18.305687   12928 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0310 20:59:55.675281   12928 out.go:150]   - Generating certificates and keys ...
	I0310 20:59:55.685769   12928 out.go:150]   - Booting up control plane ...
	I0310 20:59:55.689643   12928 kubeadm.go:387] StartCluster complete in 11m6.9239324s
	I0310 20:59:55.696725   12928 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0310 20:59:58.337206   12928 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: (2.6404936s)
	I0310 20:59:58.338132   12928 logs.go:255] 1 containers: [d960ab78b04e]
	I0310 20:59:58.350942   12928 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0310 21:00:02.154129   12928 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: (3.8032035s)
	I0310 21:00:02.154129   12928 logs.go:255] 1 containers: [f6d5d44ee6e5]
	I0310 21:00:02.162205   12928 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0310 21:00:05.417405   12928 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: (3.2547179s)
	I0310 21:00:05.417405   12928 logs.go:255] 0 containers: []
	W0310 21:00:05.417405   12928 logs.go:257] No container was found matching "coredns"
	I0310 21:00:05.433820   12928 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0310 21:00:12.371129   12928 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: (6.9373391s)
	I0310 21:00:12.371129   12928 logs.go:255] 1 containers: [8543b072b7ef]
	I0310 21:00:12.401685   12928 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0310 21:00:19.207543   12928 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: (6.8056113s)
	I0310 21:00:19.207543   12928 logs.go:255] 0 containers: []
	W0310 21:00:19.207543   12928 logs.go:257] No container was found matching "kube-proxy"
	I0310 21:00:19.223079   12928 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0310 21:00:26.537038   12928 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}: (7.3137669s)
	I0310 21:00:26.537038   12928 logs.go:255] 0 containers: []
	W0310 21:00:26.537038   12928 logs.go:257] No container was found matching "kubernetes-dashboard"
	I0310 21:00:26.546005   12928 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0310 21:00:33.036382   12928 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: (6.4904031s)
	I0310 21:00:33.036382   12928 logs.go:255] 0 containers: []
	W0310 21:00:33.036382   12928 logs.go:257] No container was found matching "storage-provisioner"
	I0310 21:00:33.047163   12928 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0310 21:00:38.709937   12928 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: (5.6620545s)
	I0310 21:00:38.709937   12928 logs.go:255] 1 containers: [fe2d17c8ecdb]
	I0310 21:00:38.709937   12928 logs.go:122] Gathering logs for kube-controller-manager [fe2d17c8ecdb] ...
	I0310 21:00:38.710243   12928 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 fe2d17c8ecdb"
	I0310 21:00:48.090202   12928 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 fe2d17c8ecdb": (9.3798616s)
	I0310 21:00:48.092311   12928 logs.go:122] Gathering logs for Docker ...
	I0310 21:00:48.092311   12928 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0310 21:00:50.351746   12928 ssh_runner.go:189] Completed: /bin/bash -c "sudo journalctl -u docker -n 400": (2.2590981s)
	I0310 21:00:50.359052   12928 logs.go:122] Gathering logs for etcd [f6d5d44ee6e5] ...
	I0310 21:00:50.359052   12928 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 f6d5d44ee6e5"
	I0310 21:01:04.120486   12928 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 f6d5d44ee6e5": (13.7614865s)
	I0310 21:01:04.130160   12928 logs.go:122] Gathering logs for kube-scheduler [8543b072b7ef] ...
	I0310 21:01:04.130160   12928 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 8543b072b7ef"
	I0310 21:01:12.070968   12928 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 8543b072b7ef": (7.9408376s)
	I0310 21:01:12.134625   12928 logs.go:122] Gathering logs for container status ...
	I0310 21:01:12.134625   12928 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0310 21:01:17.991524   12928 ssh_runner.go:189] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (5.85692s)
	I0310 21:01:17.992634   12928 logs.go:122] Gathering logs for kubelet ...
	I0310 21:01:17.992634   12928 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0310 21:01:21.053407   12928 ssh_runner.go:189] Completed: /bin/bash -c "sudo journalctl -u kubelet -n 400": (3.060784s)
	W0310 21:01:21.115679   12928 logs.go:137] Found kubelet problem: Mar 10 21:00:00 old-k8s-version-20210310204459-6496 kubelet[3941]: E0310 21:00:00.391641    3941 pod_workers.go:190] Error syncing pod 3a9cb0607c644e32b5d6d0cd9bcdb263 ("kube-controller-manager-old-k8s-version-20210310204459-6496_kube-system(3a9cb0607c644e32b5d6d0cd9bcdb263)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-old-k8s-version-20210310204459-6496_kube-system(3a9cb0607c644e32b5d6d0cd9bcdb263)"
	I0310 21:01:21.150556   12928 logs.go:122] Gathering logs for dmesg ...
	I0310 21:01:21.150556   12928 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0310 21:01:23.176667   12928 ssh_runner.go:189] Completed: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400": (2.0257563s)
	I0310 21:01:23.178956   12928 logs.go:122] Gathering logs for describe nodes ...
	I0310 21:01:23.179243   12928 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0310 21:01:54.010302   12928 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (30.8309118s)
	I0310 21:01:54.012811   12928 logs.go:122] Gathering logs for kube-apiserver [d960ab78b04e] ...
	I0310 21:01:54.012811   12928 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 d960ab78b04e"
	I0310 21:01:57.365008   12928 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 d960ab78b04e": (3.3522076s)
	W0310 21:01:57.392812   12928 out.go:312] Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.14.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[apiclient] All control plane components are healthy after 156.854731 seconds
	[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase upload-config/kubelet: error creating kubelet configuration ConfigMap: unable to create configmap: Post https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps: unexpected EOF
	W0310 21:01:57.393832   12928 out.go:191] * 
	W0310 21:01:57.393832   12928 out.go:191] * 
	W0310 21:01:57.393832   12928 out.go:191] * minikube is exiting due to an error. If the above message is not useful, open an issue:
	W0310 21:01:57.393832   12928 out.go:191]   - https://github.com/kubernetes/minikube/issues/new/choose
	I0310 21:01:57.396821   12928 out.go:129] X Problems detected in kubelet:
	I0310 21:01:57.399818   12928 out.go:129]   - Mar 10 21:00:00 old-k8s-version-20210310204459-6496 kubelet[3941]: E0310 21:00:00.391641    3941 pod_workers.go:190] Error syncing pod 3a9cb0607c644e32b5d6d0cd9bcdb263 ("kube-controller-manager-old-k8s-version-20210310204459-6496_kube-system(3a9cb0607c644e32b5d6d0cd9bcdb263)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-old-k8s-version-20210310204459-6496_kube-system(3a9cb0607c644e32b5d6d0cd9bcdb263)"
	I0310 21:01:57.404813   12928 out.go:129] 
	W0310 21:01:57.404813   12928 out.go:191] X Exiting due to GUEST_START: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.14.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[apiclient] All control plane components are healthy after 156.854731 seconds
	[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase upload-config/kubelet: error creating kubelet configuration ConfigMap: unable to create configmap: Post https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps: unexpected EOF
	
	W0310 21:01:57.404813   12928 out.go:191] * 
	* 
	W0310 21:01:57.404813   12928 out.go:191] * If the above advice does not help, please let us know: 
	* If the above advice does not help, please let us know: 
	W0310 21:01:57.404813   12928 out.go:191]   - https://github.com/kubernetes/minikube/issues/new/choose
	  - https://github.com/kubernetes/minikube/issues/new/choose
	I0310 21:01:57.407811   12928 out.go:129] 

** /stderr **
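Note that none of the suppressed preflight checks caused this failure: kubeadm got past preflight and died later, at the `upload-config` phase, when its POST to the apiserver dropped with "unexpected EOF". For readability, the long `--ignore-preflight-errors` value in the failing command expands to one disabled check per line (value copied verbatim from the log):

```shell
# Expand the --ignore-preflight-errors list from the failing kubeadm command
# above into individual checks (the value is copied verbatim from the log).
checks="DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
echo "$checks" | tr ',' '\n'
```

That is eleven checks in total; the `Swap`, `SystemVerification`, and `FileContent--proc-sys-net-bridge-bridge-nf-call-iptables` entries correspond directly to warnings visible in the stderr block above.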
start_stop_delete_test.go:157: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p old-k8s-version-20210310204459-6496 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.14.0": exit status 80
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:226: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:227: (dbg) Run:  docker inspect old-k8s-version-20210310204459-6496
helpers_test.go:231: (dbg) docker inspect old-k8s-version-20210310204459-6496:

-- stdout --
	[
	    {
	        "Id": "d7a1199287f1fdb7f8b058979a38f733bc3317511901e4ffb9c95e80b30cb419",
	        "Created": "2021-03-10T20:45:16.4180529Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 213971,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-03-10T20:45:18.8406137Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a776c544501ab7f8d55c0f9d8df39bc284df5e744ef1ab4fa59bbd753c98d5f6",
	        "ResolvConfPath": "/var/lib/docker/containers/d7a1199287f1fdb7f8b058979a38f733bc3317511901e4ffb9c95e80b30cb419/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d7a1199287f1fdb7f8b058979a38f733bc3317511901e4ffb9c95e80b30cb419/hostname",
	        "HostsPath": "/var/lib/docker/containers/d7a1199287f1fdb7f8b058979a38f733bc3317511901e4ffb9c95e80b30cb419/hosts",
	        "LogPath": "/var/lib/docker/containers/d7a1199287f1fdb7f8b058979a38f733bc3317511901e4ffb9c95e80b30cb419/d7a1199287f1fdb7f8b058979a38f733bc3317511901e4ffb9c95e80b30cb419-json.log",
	        "Name": "/old-k8s-version-20210310204459-6496",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20210310204459-6496:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0c787353156cac5cce99362e2972764f7959adcd0ea6e0691e479c3a350d5be1-init/diff:/var/lib/docker/overlay2/e5312d15a8a915e93a04215cc746ab0f3ee777399eec34d39c62e1159ded6475/diff:/var/lib/docker/overlay2/7a0a882052451678339ecb9d7440ec6d5af2189761df37cbde268f01c77df460/diff:/var/lib/docker/overlay2/746bde091f748e661ba6c95c88447941058b3ed556ae8093cbbb4e946b8cc8fc/diff:/var/lib/docker/overlay2/fc816087ea38e2594891ac87d2dada91401ed5005ffde006568049c9be7a9cb3/diff:/var/lib/docker/overlay2/c938fe5d14accfe7fb1cfd038f5f0c36e7c4e36df3f270be6928a759dfc0318c/diff:/var/lib/docker/overlay2/79f1d61f3af3e525b3f9a25c7bdaddec8275b7d0abab23c045e084119ae755dc/diff:/var/lib/docker/overlay2/90cf1530750ebebdcb04a136dc1eff0ae9c5f7d0f826d2942e3bc67cc0d42206/diff:/var/lib/docker/overlay2/c4edb0a62bb52174a4a17530aa58139aaca1b2f67bef36b9ac59af93c93e2c94/diff:/var/lib/docker/overlay2/684c18bb05910b09352e8708238b2fa6512de91e323e59bd78eacb12954baeb1/diff:/var/lib/docker/overlay2/2e3b94
4d36fea0e106f5f8885b981d040914c224a656b6480e4ccf00c624527c/diff:/var/lib/docker/overlay2/87489103e2d5dffa2fc32cae802418fe57adcbe87d86e2828e51be8620ef72d6/diff:/var/lib/docker/overlay2/1ceb557df677600f55a42e739f85c2aaa6ce2a1311fb93d5cc655d3e5c25ad62/diff:/var/lib/docker/overlay2/a49e38adf04466e822fa1b2fef7a5de41bb43b706f64d6d440f7e9f95f86634d/diff:/var/lib/docker/overlay2/9dc51530b3628075c33d53a96d8db831d206f65190ca62a4730d525c9d2433bc/diff:/var/lib/docker/overlay2/53f43d443cc953bcbb3b9fe305be45d40de2233dd6a1e94a6e0120c143df01b9/diff:/var/lib/docker/overlay2/5449e801ebacbe613538db4de658f2419f742a0327f7c0f5079ee525f64c2f45/diff:/var/lib/docker/overlay2/ea06a62429770eade2e9df8b04495201586fe6a867b2d3f827db06031f237171/diff:/var/lib/docker/overlay2/d6c5e6e0bb04112b08a45e5a42f364eaa6a1d4d0384b8833c6f268a32b51f9fd/diff:/var/lib/docker/overlay2/6ccb699e5aa7954f17536e35273361a64a06f22e3951b5adad1acd3645302d27/diff:/var/lib/docker/overlay2/004fc7885cd3ddf4f532a6047a393b87947996f738f36ee3e048130b82676264/diff:/var/lib/d
ocker/overlay2/28ff0102771d8fb3c2957c482cc5dde1a6d041144f1be78d385fd4a305b89048/diff:/var/lib/docker/overlay2/7cc8b00a8717390d3cf217bc23ebf0a54386c4dda751f8efe67a113c5f724000/diff:/var/lib/docker/overlay2/4f20f5888dc26fc4b177fad4ffbb2c9e5094bbe36dffc15530b13ee98b5ee0de/diff:/var/lib/docker/overlay2/5070ba0e8a3df0c0cb4c34bb528c4e1aa4cf13d689f2bbbb0a4033d021c3be55/diff:/var/lib/docker/overlay2/85fb29f9d3603bf7f2eb144a096a9fa432e57ecf575b0c26cc8e06a79e791202/diff:/var/lib/docker/overlay2/27786674974d44dc21a18719c8c334da44a78f825923f5e86233f5acb0fb9a6b/diff:/var/lib/docker/overlay2/6161fd89f27732c46f547c0e38ff9f16ab0dded81cef66938df3935f21afad40/diff:/var/lib/docker/overlay2/222f85b8439a41d62b9939a4060303543930a89e81bc2fbb3f5211b3bcc73501/diff:/var/lib/docker/overlay2/0ab769dc143c949b9367c48721351c8571d87199847b23acedce0134a8cf4d0d/diff:/var/lib/docker/overlay2/a3d1b5f3c1ccf55415e08e306d69739f2d6a4346a4b20ff9273ac1417c146a72/diff:/var/lib/docker/overlay2/a6111408a4deb25c6259bdabf49b0b77a4ae38a1c9c76c9dd238ab00e43
77edd/diff:/var/lib/docker/overlay2/beb22abaee6a1cc2ce6935aa31a8ea6aa14ba428d35b5c2423d1d9e4b5b1e88e/diff:/var/lib/docker/overlay2/0c228b1d0cd4cc1d3df10a7397af2a91846bfbad745d623822dc200ff7ddd4f7/diff:/var/lib/docker/overlay2/3c7f265116e8b14ac4c171583da8e68ad42c63e9faa735c10d5b01c2889c03d8/diff:/var/lib/docker/overlay2/85c9b77072d5575bcf4bc334d0e85cccec247e2ff21bc8a181ee7c1411481a92/diff:/var/lib/docker/overlay2/1b4797f63f1bf0acad8cd18715c737239cbce844810baf1378b65224c01957ff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0c787353156cac5cce99362e2972764f7959adcd0ea6e0691e479c3a350d5be1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0c787353156cac5cce99362e2972764f7959adcd0ea6e0691e479c3a350d5be1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0c787353156cac5cce99362e2972764f7959adcd0ea6e0691e479c3a350d5be1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20210310204459-6496",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20210310204459-6496/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20210310204459-6496",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20210310204459-6496",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20210310204459-6496",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "662f69f6007bf2082ebf95584d957493637e9b0c1e109934b80acf5f0ff8e63d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55137"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55134"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55136"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55135"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/662f69f6007b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "46d89bd8b457de38f652bea1f5633541acb2c2620431fe89b1f183bf349b403b",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.3",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:03",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "08a9fabe5ecafdd8ee96dd0a14d1906037e82623ac7365c62552213da5f59cf7",
	                    "EndpointID": "46d89bd8b457de38f652bea1f5633541acb2c2620431fe89b1f183bf349b403b",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.3",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:03",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
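The inspect output above is where the forwarded apiserver port lives (`8443/tcp` → host port `55135`). A sketch of one way to pull that single value out, using docker inspect's Go-template flag (container name from this run; the offline stand-in file below is illustrative, pared down from the JSON printed above):

```shell
# Against a live daemon this would be:
#   docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' \
#     old-k8s-version-20210310204459-6496
# Offline stand-in over a pared-down copy of the JSON printed above:
cat > /tmp/ports.json <<'EOF'
"8443/tcp": [
    {
        "HostIp": "127.0.0.1",
        "HostPort": "55135"
    }
]
EOF
grep -A3 '"8443/tcp"' /tmp/ports.json | grep HostPort | sed 's/.*"HostPort": "\([0-9]*\)".*/\1/'
```

The stand-in prints `55135`, matching the `Ports` section of the inspect output.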
helpers_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20210310204459-6496 -n old-k8s-version-20210310204459-6496
helpers_test.go:235: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20210310204459-6496 -n old-k8s-version-20210310204459-6496: exit status 4 (6.3358643s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0310 21:02:03.968047   14360 status.go:396] kubeconfig endpoint: extract IP: "old-k8s-version-20210310204459-6496" does not appear in C:\Users\jenkins/.kube/config

** /stderr **
helpers_test.go:235: status error: exit status 4 (may be ok)
helpers_test.go:237: "old-k8s-version-20210310204459-6496" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (1025.48s)
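The `status.go:396` error in the post-mortem is the harness failing to find the cluster's entry in the kubeconfig, which is why status reports exit code 4 rather than a clean state. A minimal offline stand-in for that lookup (the sample file content is hypothetical, not the real `C:\Users\jenkins\.kube\config`; only the cluster name comes from the log):

```shell
# Illustrative stand-in for the kubeconfig lookup behind status.go:396.
# The sample kubeconfig is hypothetical; only the cluster name is from the log.
cat > /tmp/kubeconfig-sample <<'EOF'
clusters:
- cluster:
    server: https://127.0.0.1:55135
  name: minikube
EOF
name="old-k8s-version-20210310204459-6496"
if grep -q "name: ${name}" /tmp/kubeconfig-sample; then
  echo "found ${name}"
else
  echo "extract IP: \"${name}\" does not appear in kubeconfig"
fi
```

Because kubeadm died before minikube could write the context, the name is absent and the "does not appear" branch fires, mirroring the error above; the warning text suggests `minikube update-context` as the fix for the stale-context half of the symptom.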

TestStartStop/group/no-preload/serial/FirstStart (1546.28s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-20210310204947-6496 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.20.5-rc.0

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:155: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p no-preload-20210310204947-6496 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.20.5-rc.0: exit status 109 (25m33.9225957s)

-- stdout --
	* [no-preload-20210310204947-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - MINIKUBE_LOCATION=10722
	* Using the docker driver based on user configuration
	* Starting control plane node no-preload-20210310204947-6496 in cluster no-preload-20210310204947-6496
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.20.5-rc.0 on Docker 20.10.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0310 20:49:48.039303   19328 out.go:239] Setting OutFile to fd 1996 ...
	I0310 20:49:48.041276   19328 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 20:49:48.041276   19328 out.go:252] Setting ErrFile to fd 1880...
	I0310 20:49:48.041276   19328 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 20:49:48.068363   19328 out.go:246] Setting JSON to false
	I0310 20:49:48.074055   19328 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":34854,"bootTime":1615374534,"procs":202,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0310 20:49:48.074055   19328 start.go:116] gopshost.Virtualization returned error: not implemented yet
	I0310 20:49:48.074055   19328 out.go:129] * [no-preload-20210310204947-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0310 20:49:48.087480   19328 out.go:129]   - MINIKUBE_LOCATION=10722
	I0310 20:49:48.102549   19328 driver.go:323] Setting default libvirt URI to qemu:///system
	I0310 20:49:48.789570   19328 docker.go:119] docker version: linux-20.10.2
	I0310 20:49:48.795025   19328 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 20:49:50.498885   19328 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.7038628s)
	I0310 20:49:50.500314   19328 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:150 OomKillDisable:true NGoroutines:302 SystemTime:2021-03-10 20:49:49.3261231 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://i
ndex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:
[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 20:49:50.514183   19328 out.go:129] * Using the docker driver based on user configuration
	I0310 20:49:50.514183   19328 start.go:276] selected driver: docker
	I0310 20:49:50.514183   19328 start.go:718] validating driver "docker" against <nil>
	I0310 20:49:50.514183   19328 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0310 20:49:52.505782   19328 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 20:49:54.191089   19328 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.6853084s)
	I0310 20:49:54.191619   19328 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:155 OomKillDisable:true NGoroutines:140 SystemTime:2021-03-10 20:49:53.7304502 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://i
ndex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:
[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 20:49:54.192066   19328 start_flags.go:253] no existing cluster config was found, will generate one from the flags 
	I0310 20:49:54.192535   19328 start_flags.go:717] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0310 20:49:54.192535   19328 cni.go:74] Creating CNI manager for ""
	I0310 20:49:54.192535   19328 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	I0310 20:49:54.192535   19328 start_flags.go:398] config:
	{Name:no-preload-20210310204947-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.5-rc.0 ClusterName:no-preload-20210310204947-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 20:49:54.192535   19328 out.go:129] * Starting control plane node no-preload-20210310204947-6496 in cluster no-preload-20210310204947-6496
	I0310 20:49:54.873243   19328 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	I0310 20:49:54.873243   19328 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	I0310 20:49:54.873243   19328 preload.go:97] Checking if preload exists for k8s version v1.20.5-rc.0 and runtime docker
	I0310 20:49:54.873243   19328 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\config.json ...
	I0310 20:49:54.874440   19328 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\config.json: {Name:mkbb12066689002bb41326a60586a67f53652d7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:49:54.874821   19328 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-scheduler:v1.20.5-rc.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.20.5-rc.0
	I0310 20:49:54.874821   19328 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-apiserver:v1.20.5-rc.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.20.5-rc.0
	I0310 20:49:54.874821   19328 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper:v1.0.4 -> C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.4
	I0310 20:49:54.874821   19328 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-proxy:v1.20.5-rc.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.20.5-rc.0
	I0310 20:49:54.874821   19328 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner:v4 -> C:\Users\jenkins\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v4
	I0310 20:49:54.874821   19328 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\pause:3.2 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\pause_3.2
	I0310 20:49:54.875823   19328 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\coredns:1.7.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\coredns_1.7.0
	I0310 20:49:54.874821   19328 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\dashboard:v2.1.0 -> C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.1.0
	I0310 20:49:54.874821   19328 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-controller-manager:v1.20.5-rc.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.20.5-rc.0
	I0310 20:49:54.874821   19328 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\etcd:3.4.13-0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\etcd_3.4.13-0
	I0310 20:49:54.894701   19328 cache.go:185] Successfully downloaded all kic artifacts
	I0310 20:49:54.895531   19328 start.go:313] acquiring machines lock for no-preload-20210310204947-6496: {Name:mk5ccb5ca2d8ac74aacc5a5439e34ebf8c484f4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:49:54.895928   19328 start.go:317] acquired machines lock for "no-preload-20210310204947-6496" in 396.9µs
	I0310 20:49:54.896289   19328 start.go:89] Provisioning new machine with config: &{Name:no-preload-20210310204947-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.5-rc.0 ClusterName:no-preload-20210310204947-6496 Namespace:default APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.5-rc.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.5-rc.0 ControlPlane:true Worker:true}
	I0310 20:49:54.896741   19328 start.go:126] createHost starting for "" (driver="docker")
	I0310 20:49:54.896741   19328 out.go:150] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0310 20:49:54.896741   19328 start.go:160] libmachine.API.Create for "no-preload-20210310204947-6496" (driver="docker")
	I0310 20:49:54.896741   19328 client.go:168] LocalClient.Create starting
	I0310 20:49:54.901868   19328 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\ca.pem
	I0310 20:49:54.902331   19328 main.go:121] libmachine: Decoding PEM data...
	I0310 20:49:54.902331   19328 main.go:121] libmachine: Parsing certificate...
	I0310 20:49:54.903067   19328 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\cert.pem
	I0310 20:49:54.903067   19328 main.go:121] libmachine: Decoding PEM data...
	I0310 20:49:54.903434   19328 main.go:121] libmachine: Parsing certificate...
	I0310 20:49:54.986556   19328 cli_runner.go:115] Run: docker network inspect no-preload-20210310204947-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0310 20:49:55.233786   19328 cache.go:93] acquiring lock: {Name:mk1bbd52b1d425b987a80d1b42ea65a1daa62351 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:49:55.233786   19328 cache.go:93] acquiring lock: {Name:mk1b99eb2e55fdc5ddc042a4b3db75d12b25fe0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:49:55.234475   19328 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\k8s.gcr.io\pause_3.2 exists
	I0310 20:49:55.234475   19328 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.20.5-rc.0 exists
	I0310 20:49:55.235003   19328 cache.go:82] cache image "k8s.gcr.io/pause:3.2" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\k8s.gcr.io\\pause_3.2" took 360.1823ms
	I0310 20:49:55.235003   19328 cache.go:66] save to tar file k8s.gcr.io/pause:3.2 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\pause_3.2 succeeded
	I0310 20:49:55.235003   19328 cache.go:82] cache image "k8s.gcr.io/kube-scheduler:v1.20.5-rc.0" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\k8s.gcr.io\\kube-scheduler_v1.20.5-rc.0" took 360.1823ms
	I0310 20:49:55.235003   19328 cache.go:66] save to tar file k8s.gcr.io/kube-scheduler:v1.20.5-rc.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.20.5-rc.0 succeeded
	I0310 20:49:55.264714   19328 cache.go:93] acquiring lock: {Name:mk33908c5692f6fbcea93524c073786bb1491be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:49:55.265334   19328 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.1.0 exists
	I0310 20:49:55.265848   19328 cache.go:82] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\docker.io\\kubernetesui\\dashboard_v2.1.0" took 390.0261ms
	I0310 20:49:55.265848   19328 cache.go:66] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.1.0 succeeded
	I0310 20:49:55.270477   19328 cache.go:93] acquiring lock: {Name:mk808ab2b8e2f585b88e9b77052dedca3569e605 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:49:55.271177   19328 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\k8s.gcr.io\coredns_1.7.0 exists
	I0310 20:49:55.271941   19328 cache.go:82] cache image "k8s.gcr.io/coredns:1.7.0" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\k8s.gcr.io\\coredns_1.7.0" took 396.1187ms
	I0310 20:49:55.271941   19328 cache.go:66] save to tar file k8s.gcr.io/coredns:1.7.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\coredns_1.7.0 succeeded
	I0310 20:49:55.276537   19328 cache.go:93] acquiring lock: {Name:mk1cd59bbb5d30900e0d5b8983f100ccfb4e941e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:49:55.277577   19328 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.20.5-rc.0 exists
	I0310 20:49:55.277816   19328 cache.go:82] cache image "k8s.gcr.io/kube-apiserver:v1.20.5-rc.0" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\k8s.gcr.io\\kube-apiserver_v1.20.5-rc.0" took 402.9962ms
	I0310 20:49:55.277816   19328 cache.go:66] save to tar file k8s.gcr.io/kube-apiserver:v1.20.5-rc.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.20.5-rc.0 succeeded
	I0310 20:49:55.278176   19328 cache.go:93] acquiring lock: {Name:mk95277aa1d8baa6ce693324ce93a259561b3b0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:49:55.278881   19328 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.4 exists
	I0310 20:49:55.279206   19328 cache.go:82] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\docker.io\\kubernetesui\\metrics-scraper_v1.0.4" took 404.3855ms
	I0310 20:49:55.279206   19328 cache.go:66] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.4 succeeded
	I0310 20:49:55.283518   19328 cache.go:93] acquiring lock: {Name:mk7d69590a92a29aed7b81b57dbd7aa08bae9b7e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:49:55.283518   19328 cache.go:93] acquiring lock: {Name:mk4f17964ab104a7a51fdfe4d0d8adcb99a8f701 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:49:55.283935   19328 cache.go:93] acquiring lock: {Name:mk7dad12c4700ffd6e4a91c1377bd452302d3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:49:55.284147   19328 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\k8s.gcr.io\etcd_3.4.13-0 exists
	I0310 20:49:55.284147   19328 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.20.5-rc.0 exists
	I0310 20:49:55.284517   19328 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.20.5-rc.0 exists
	I0310 20:49:55.284517   19328 cache.go:82] cache image "k8s.gcr.io/kube-proxy:v1.20.5-rc.0" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\k8s.gcr.io\\kube-proxy_v1.20.5-rc.0" took 409.6968ms
	I0310 20:49:55.284517   19328 cache.go:66] save to tar file k8s.gcr.io/kube-proxy:v1.20.5-rc.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.20.5-rc.0 succeeded
	I0310 20:49:55.284751   19328 cache.go:82] cache image "k8s.gcr.io/etcd:3.4.13-0" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\k8s.gcr.io\\etcd_3.4.13-0" took 408.2668ms
	I0310 20:49:55.284751   19328 cache.go:66] save to tar file k8s.gcr.io/etcd:3.4.13-0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\etcd_3.4.13-0 succeeded
	I0310 20:49:55.284751   19328 cache.go:82] cache image "k8s.gcr.io/kube-controller-manager:v1.20.5-rc.0" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\k8s.gcr.io\\kube-controller-manager_v1.20.5-rc.0" took 408.9289ms
	I0310 20:49:55.284751   19328 cache.go:66] save to tar file k8s.gcr.io/kube-controller-manager:v1.20.5-rc.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.20.5-rc.0 succeeded
	I0310 20:49:55.294486   19328 cache.go:93] acquiring lock: {Name:mkf95068147fb9802daffb44f03793cdfc94af80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:49:55.295053   19328 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v4 exists
	I0310 20:49:55.295425   19328 cache.go:82] cache image "gcr.io/k8s-minikube/storage-provisioner:v4" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\gcr.io\\k8s-minikube\\storage-provisioner_v4" took 420.6045ms
	I0310 20:49:55.295425   19328 cache.go:66] save to tar file gcr.io/k8s-minikube/storage-provisioner:v4 -> C:\Users\jenkins\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v4 succeeded
	I0310 20:49:55.295425   19328 cache.go:73] Successfully saved all images to host disk.
	W0310 20:49:55.711690   19328 cli_runner.go:162] docker network inspect no-preload-20210310204947-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0310 20:49:55.721766   19328 network_create.go:240] running [docker network inspect no-preload-20210310204947-6496] to gather additional debugging logs...
	I0310 20:49:55.721899   19328 cli_runner.go:115] Run: docker network inspect no-preload-20210310204947-6496
	W0310 20:49:56.368697   19328 cli_runner.go:162] docker network inspect no-preload-20210310204947-6496 returned with exit code 1
	I0310 20:49:56.368697   19328 network_create.go:243] error running [docker network inspect no-preload-20210310204947-6496]: docker network inspect no-preload-20210310204947-6496: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20210310204947-6496
	I0310 20:49:56.368971   19328 network_create.go:245] output of [docker network inspect no-preload-20210310204947-6496]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20210310204947-6496
	
	** /stderr **
	I0310 20:49:56.377434   19328 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0310 20:49:57.044882   19328 network.go:193] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0310 20:49:57.045218   19328 network_create.go:91] attempt to create network 192.168.49.0/24 with subnet: no-preload-20210310204947-6496 and gateway 192.168.49.1 and MTU of 1500 ...
	I0310 20:49:57.054038   19328 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20210310204947-6496
	W0310 20:49:57.672380   19328 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20210310204947-6496 returned with exit code 1
	W0310 20:49:57.673474   19328 out.go:191] ! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts
	! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts
	I0310 20:49:57.691999   19328 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0310 20:49:58.349734   19328 cli_runner.go:115] Run: docker volume create no-preload-20210310204947-6496 --label name.minikube.sigs.k8s.io=no-preload-20210310204947-6496 --label created_by.minikube.sigs.k8s.io=true
	I0310 20:49:59.008326   19328 oci.go:102] Successfully created a docker volume no-preload-20210310204947-6496
	I0310 20:49:59.017070   19328 cli_runner.go:115] Run: docker run --rm --name no-preload-20210310204947-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20210310204947-6496 --entrypoint /usr/bin/test -v no-preload-20210310204947-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib
	I0310 20:50:06.866620   19328 cli_runner.go:168] Completed: docker run --rm --name no-preload-20210310204947-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20210310204947-6496 --entrypoint /usr/bin/test -v no-preload-20210310204947-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib: (7.8495603s)
	I0310 20:50:06.867266   19328 oci.go:106] Successfully prepared a docker volume no-preload-20210310204947-6496
	I0310 20:50:06.867835   19328 preload.go:97] Checking if preload exists for k8s version v1.20.5-rc.0 and runtime docker
	I0310 20:50:06.878552   19328 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 20:50:07.917154   19328 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0386036s)
	I0310 20:50:07.918012   19328 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:7 ContainersRunning:7 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:103 OomKillDisable:true NGoroutines:84 SystemTime:2021-03-10 20:50:07.4616222 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[
] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 20:50:07.928161   19328 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0310 20:50:08.929430   19328 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-20210310204947-6496 --name no-preload-20210310204947-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20210310204947-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-20210310204947-6496 --volume no-preload-20210310204947-6496:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
	I0310 20:50:18.398557   19328 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-20210310204947-6496 --name no-preload-20210310204947-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20210310204947-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-20210310204947-6496 --volume no-preload-20210310204947-6496:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e: (9.4689758s)
	I0310 20:50:18.416453   19328 cli_runner.go:115] Run: docker container inspect no-preload-20210310204947-6496 --format={{.State.Running}}
	I0310 20:50:19.158233   19328 cli_runner.go:115] Run: docker container inspect no-preload-20210310204947-6496 --format={{.State.Status}}
	I0310 20:50:20.140315   19328 cli_runner.go:115] Run: docker exec no-preload-20210310204947-6496 stat /var/lib/dpkg/alternatives/iptables
	I0310 20:50:23.078101   19328 cli_runner.go:168] Completed: docker exec no-preload-20210310204947-6496 stat /var/lib/dpkg/alternatives/iptables: (2.9376886s)
	I0310 20:50:23.078101   19328 oci.go:278] the created container "no-preload-20210310204947-6496" has a running status.
	I0310 20:50:23.078286   19328 kic.go:206] Creating ssh key for kic: C:\Users\jenkins\.minikube\machines\no-preload-20210310204947-6496\id_rsa...
	I0310 20:50:23.648672   19328 kic_runner.go:188] docker (temp): C:\Users\jenkins\.minikube\machines\no-preload-20210310204947-6496\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0310 20:50:25.654812   19328 cli_runner.go:115] Run: docker container inspect no-preload-20210310204947-6496 --format={{.State.Status}}
	I0310 20:50:26.401717   19328 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0310 20:50:26.401717   19328 kic_runner.go:115] Args: [docker exec --privileged no-preload-20210310204947-6496 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0310 20:50:28.890169   19328 kic_runner.go:124] Done: [docker exec --privileged no-preload-20210310204947-6496 chown docker:docker /home/docker/.ssh/authorized_keys]: (2.4884557s)
	I0310 20:50:28.892168   19328 kic.go:240] ensuring only current user has permissions to key file located at : C:\Users\jenkins\.minikube\machines\no-preload-20210310204947-6496\id_rsa...
	I0310 20:50:29.727679   19328 cli_runner.go:115] Run: docker container inspect no-preload-20210310204947-6496 --format={{.State.Status}}
	I0310 20:50:30.368729   19328 machine.go:88] provisioning docker machine ...
	I0310 20:50:30.368729   19328 ubuntu.go:169] provisioning hostname "no-preload-20210310204947-6496"
	I0310 20:50:30.377718   19328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	I0310 20:50:31.018224   19328 main.go:121] libmachine: Using SSH client type: native
	I0310 20:50:31.019464   19328 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55143 <nil> <nil>}
	I0310 20:50:31.019464   19328 main.go:121] libmachine: About to run SSH command:
	sudo hostname no-preload-20210310204947-6496 && echo "no-preload-20210310204947-6496" | sudo tee /etc/hostname
	I0310 20:50:31.039582   19328 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0310 20:50:35.807046   19328 main.go:121] libmachine: SSH cmd err, output: <nil>: no-preload-20210310204947-6496
	
	I0310 20:50:35.815322   19328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	I0310 20:50:36.512656   19328 main.go:121] libmachine: Using SSH client type: native
	I0310 20:50:36.513175   19328 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55143 <nil> <nil>}
	I0310 20:50:36.513526   19328 main.go:121] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20210310204947-6496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20210310204947-6496/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20210310204947-6496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0310 20:50:37.823665   19328 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	I0310 20:50:37.823883   19328 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	I0310 20:50:37.823883   19328 ubuntu.go:177] setting up certificates
	I0310 20:50:37.823883   19328 provision.go:83] configureAuth start
	I0310 20:50:37.833505   19328 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210310204947-6496
	I0310 20:50:38.549485   19328 provision.go:137] copyHostCerts
	I0310 20:50:38.550251   19328 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	I0310 20:50:38.550251   19328 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	I0310 20:50:38.550694   19328 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	I0310 20:50:38.564332   19328 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	I0310 20:50:38.565131   19328 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	I0310 20:50:38.565532   19328 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	I0310 20:50:38.570320   19328 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	I0310 20:50:38.570320   19328 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	I0310 20:50:38.571274   19328 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	I0310 20:50:38.582189   19328 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.no-preload-20210310204947-6496 san=[172.17.0.7 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20210310204947-6496]
	I0310 20:50:39.097807   19328 provision.go:165] copyRemoteCerts
	I0310 20:50:39.107728   19328 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0310 20:50:39.112259   19328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	I0310 20:50:39.773802   19328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55143 SSHKeyPath:C:\Users\jenkins\.minikube\machines\no-preload-20210310204947-6496\id_rsa Username:docker}
	I0310 20:50:40.399708   19328 ssh_runner.go:189] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.2919822s)
	I0310 20:50:40.400181   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0310 20:50:40.748188   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1265 bytes)
	I0310 20:50:41.223522   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0310 20:50:41.594330   19328 provision.go:86] duration metric: configureAuth took 3.7704512s
	I0310 20:50:41.594330   19328 ubuntu.go:193] setting minikube options for container-runtime
	I0310 20:50:41.616605   19328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	I0310 20:50:42.313275   19328 main.go:121] libmachine: Using SSH client type: native
	I0310 20:50:42.313763   19328 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55143 <nil> <nil>}
	I0310 20:50:42.313763   19328 main.go:121] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0310 20:50:43.701381   19328 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0310 20:50:43.702173   19328 ubuntu.go:71] root file system type: overlay
	I0310 20:50:43.703089   19328 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	I0310 20:50:43.711816   19328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	I0310 20:50:44.485398   19328 main.go:121] libmachine: Using SSH client type: native
	I0310 20:50:44.486064   19328 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55143 <nil> <nil>}
	I0310 20:50:44.486787   19328 main.go:121] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0310 20:50:45.656364   19328 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0310 20:50:45.672017   19328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	I0310 20:50:46.282472   19328 main.go:121] libmachine: Using SSH client type: native
	I0310 20:50:46.282959   19328 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55143 <nil> <nil>}
	I0310 20:50:46.282959   19328 main.go:121] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0310 20:51:03.177004   19328 main.go:121] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-01-29 14:31:32.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-03-10 20:50:45.645182000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0310 20:51:03.177572   19328 machine.go:91] provisioned docker machine in 32.8083175s
	I0310 20:51:03.177572   19328 client.go:171] LocalClient.Create took 1m8.2809196s
	I0310 20:51:03.177572   19328 start.go:168] duration metric: libmachine.API.Create for "no-preload-20210310204947-6496" took 1m8.2809196s
	I0310 20:51:03.177572   19328 start.go:267] post-start starting for "no-preload-20210310204947-6496" (driver="docker")
	I0310 20:51:03.177572   19328 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0310 20:51:03.186801   19328 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0310 20:51:03.195863   19328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	I0310 20:51:03.828767   19328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55143 SSHKeyPath:C:\Users\jenkins\.minikube\machines\no-preload-20210310204947-6496\id_rsa Username:docker}
	I0310 20:51:04.239896   19328 ssh_runner.go:189] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.0530963s)
	I0310 20:51:04.241332   19328 ssh_runner.go:149] Run: cat /etc/os-release
	I0310 20:51:04.278617   19328 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0310 20:51:04.278811   19328 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0310 20:51:04.278811   19328 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0310 20:51:04.278904   19328 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0310 20:51:04.287258   19328 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	I0310 20:51:04.287716   19328 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	I0310 20:51:04.292317   19328 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	I0310 20:51:04.293892   19328 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	I0310 20:51:04.300613   19328 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	I0310 20:51:04.553498   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	I0310 20:51:04.918473   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	I0310 20:51:05.923435   19328 start.go:270] post-start completed in 2.7458674s
	I0310 20:51:06.183700   19328 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210310204947-6496
	I0310 20:51:07.540968   19328 cli_runner.go:168] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210310204947-6496: (1.3572705s)
	I0310 20:51:07.541963   19328 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\config.json ...
	I0310 20:51:07.570955   19328 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0310 20:51:07.579729   19328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	I0310 20:51:08.239989   19328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55143 SSHKeyPath:C:\Users\jenkins\.minikube\machines\no-preload-20210310204947-6496\id_rsa Username:docker}
	I0310 20:51:08.715373   19328 ssh_runner.go:189] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.1444196s)
	I0310 20:51:08.715373   19328 start.go:129] duration metric: createHost completed in 1m13.8187281s
	I0310 20:51:08.715373   19328 start.go:80] releasing machines lock for "no-preload-20210310204947-6496", held for 1m13.8191803s
	I0310 20:51:08.722906   19328 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210310204947-6496
	I0310 20:51:09.349974   19328 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0310 20:51:09.359932   19328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	I0310 20:51:09.364827   19328 ssh_runner.go:149] Run: systemctl --version
	I0310 20:51:09.386605   19328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	I0310 20:51:10.035707   19328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55143 SSHKeyPath:C:\Users\jenkins\.minikube\machines\no-preload-20210310204947-6496\id_rsa Username:docker}
	I0310 20:51:10.078855   19328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55143 SSHKeyPath:C:\Users\jenkins\.minikube\machines\no-preload-20210310204947-6496\id_rsa Username:docker}
	I0310 20:51:11.293465   19328 ssh_runner.go:189] Completed: systemctl --version: (1.9286399s)
	I0310 20:51:11.296002   19328 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0310 20:51:11.296002   19328 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.9460298s)
	I0310 20:51:11.565504   19328 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 20:51:11.718557   19328 cruntime.go:206] skipping containerd shutdown because we are bound to it
	I0310 20:51:11.728362   19328 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0310 20:51:11.920195   19328 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0310 20:51:12.201098   19328 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 20:51:12.349903   19328 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0310 20:51:14.034751   19328 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.6843926s)
	I0310 20:51:14.043555   19328 ssh_runner.go:149] Run: sudo systemctl start docker
	I0310 20:51:14.169650   19328 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0310 20:51:15.556008   19328 ssh_runner.go:189] Completed: docker version --format {{.Server.Version}}: (1.3863594s)
	I0310 20:51:15.559113   19328 out.go:150] * Preparing Kubernetes v1.20.5-rc.0 on Docker 20.10.3 ...
	I0310 20:51:15.559113   19328 cli_runner.go:115] Run: docker exec -t no-preload-20210310204947-6496 dig +short host.docker.internal
	I0310 20:51:17.638030   19328 cli_runner.go:168] Completed: docker exec -t no-preload-20210310204947-6496 dig +short host.docker.internal: (2.0786077s)
	I0310 20:51:17.638030   19328 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	I0310 20:51:17.652465   19328 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0310 20:51:17.677980   19328 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0310 20:51:17.778452   19328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	I0310 20:51:18.441603   19328 localpath.go:92] copying C:\Users\jenkins\.minikube\client.crt -> C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\client.crt
	I0310 20:51:18.451198   19328 localpath.go:117] copying C:\Users\jenkins\.minikube\client.key -> C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\client.key
	I0310 20:51:18.455955   19328 preload.go:97] Checking if preload exists for k8s version v1.20.5-rc.0 and runtime docker
	I0310 20:51:18.461972   19328 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 20:51:18.903022   19328 docker.go:423] Got preloaded images: 
	I0310 20:51:18.903022   19328 docker.go:429] k8s.gcr.io/kube-proxy:v1.20.5-rc.0 wasn't preloaded
	I0310 20:51:18.903022   19328 cache_images.go:76] LoadImages start: [k8s.gcr.io/kube-proxy:v1.20.5-rc.0 k8s.gcr.io/kube-scheduler:v1.20.5-rc.0 k8s.gcr.io/kube-controller-manager:v1.20.5-rc.0 k8s.gcr.io/kube-apiserver:v1.20.5-rc.0 k8s.gcr.io/coredns:1.7.0 k8s.gcr.io/etcd:3.4.13-0 k8s.gcr.io/pause:3.2 gcr.io/k8s-minikube/storage-provisioner:v4 docker.io/kubernetesui/dashboard:v2.1.0 docker.io/kubernetesui/metrics-scraper:v1.0.4]
	I0310 20:51:18.956187   19328 image.go:168] retrieving image: k8s.gcr.io/etcd:3.4.13-0
	I0310 20:51:18.956687   19328 image.go:168] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0310 20:51:18.956507   19328 image.go:168] retrieving image: k8s.gcr.io/coredns:1.7.0
	I0310 20:51:18.976984   19328 image.go:168] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v4
	I0310 20:51:19.009814   19328 image.go:168] retrieving image: k8s.gcr.io/kube-controller-manager:v1.20.5-rc.0
	I0310 20:51:19.024332   19328 image.go:176] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v4: Error response from daemon: reference does not exist
	I0310 20:51:19.055729   19328 image.go:176] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.20.5-rc.0: Error response from daemon: reference does not exist
	I0310 20:51:19.065325   19328 image.go:176] daemon lookup for docker.io/kubernetesui/metrics-scraper:v1.0.4: Error response from daemon: reference does not exist
	I0310 20:51:19.065970   19328 image.go:176] daemon lookup for k8s.gcr.io/etcd:3.4.13-0: Error response from daemon: reference does not exist
	I0310 20:51:19.087207   19328 image.go:176] daemon lookup for k8s.gcr.io/coredns:1.7.0: Error response from daemon: reference does not exist
	I0310 20:51:19.106914   19328 image.go:168] retrieving image: k8s.gcr.io/pause:3.2
	I0310 20:51:19.119421   19328 image.go:168] retrieving image: k8s.gcr.io/kube-scheduler:v1.20.5-rc.0
	I0310 20:51:19.129543   19328 image.go:168] retrieving image: docker.io/kubernetesui/dashboard:v2.1.0
	I0310 20:51:19.144439   19328 image.go:176] daemon lookup for k8s.gcr.io/pause:3.2: Error response from daemon: reference does not exist
	I0310 20:51:19.169985   19328 image.go:168] retrieving image: k8s.gcr.io/kube-apiserver:v1.20.5-rc.0
	I0310 20:51:19.215620   19328 image.go:176] daemon lookup for docker.io/kubernetesui/dashboard:v2.1.0: Error response from daemon: reference does not exist
	W0310 20:51:19.230877   19328 image.go:185] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v4 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 20:51:19.231810   19328 image.go:168] retrieving image: k8s.gcr.io/kube-proxy:v1.20.5-rc.0
	I0310 20:51:19.261298   19328 image.go:176] daemon lookup for k8s.gcr.io/kube-scheduler:v1.20.5-rc.0: Error response from daemon: reference does not exist
	W0310 20:51:19.267815   19328 image.go:185] authn lookup for docker.io/kubernetesui/metrics-scraper:v1.0.4 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0310 20:51:19.283244   19328 image.go:185] authn lookup for k8s.gcr.io/etcd:3.4.13-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0310 20:51:19.284219   19328 image.go:185] authn lookup for k8s.gcr.io/kube-controller-manager:v1.20.5-rc.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 20:51:19.299822   19328 image.go:176] daemon lookup for k8s.gcr.io/kube-apiserver:v1.20.5-rc.0: Error response from daemon: reference does not exist
	W0310 20:51:19.303801   19328 image.go:185] authn lookup for k8s.gcr.io/coredns:1.7.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 20:51:19.320930   19328 image.go:176] daemon lookup for k8s.gcr.io/kube-proxy:v1.20.5-rc.0: Error response from daemon: reference does not exist
	W0310 20:51:19.368787   19328 image.go:185] authn lookup for k8s.gcr.io/pause:3.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0310 20:51:19.423122   19328 image.go:185] authn lookup for docker.io/kubernetesui/dashboard:v2.1.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0310 20:51:19.430326   19328 image.go:185] authn lookup for k8s.gcr.io/kube-scheduler:v1.20.5-rc.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 20:51:19.471929   19328 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.20.5-rc.0
	I0310 20:51:19.473862   19328 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} k8s.gcr.io/coredns:1.7.0
	W0310 20:51:19.482541   19328 image.go:185] authn lookup for k8s.gcr.io/kube-apiserver:v1.20.5-rc.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 20:51:19.512873   19328 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.13-0
	W0310 20:51:19.518403   19328 image.go:185] authn lookup for k8s.gcr.io/kube-proxy:v1.20.5-rc.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 20:51:19.537171   19328 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} k8s.gcr.io/pause:3.2
	I0310 20:51:19.544248   19328 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v4
	I0310 20:51:19.615087   19328 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.20.5-rc.0
	I0310 20:51:19.655844   19328 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.20.5-rc.0
	I0310 20:51:19.670920   19328 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0310 20:51:19.702098   19328 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.20.5-rc.0
	I0310 20:51:19.710515   19328 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} docker.io/kubernetesui/dashboard:v2.1.0
	I0310 20:51:22.850648   19328 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} k8s.gcr.io/coredns:1.7.0: (3.3767905s)
	I0310 20:51:22.850948   19328 cache_images.go:104] "k8s.gcr.io/coredns:1.7.0" needs transfer: "k8s.gcr.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0310 20:51:22.850948   19328 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\coredns:1.7.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\coredns_1.7.0
	I0310 20:51:22.850948   19328 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\coredns_1.7.0
	I0310 20:51:22.855134   19328 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.20.5-rc.0: (3.383209s)
	I0310 20:51:22.855134   19328 cache_images.go:104] "k8s.gcr.io/kube-controller-manager:v1.20.5-rc.0" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.20.5-rc.0" does not exist at hash "a95b4e4b41d897ccc7ee9ef8f8180e433b8c31d6cb10f15636aed4828cd5ba57" in container runtime
	I0310 20:51:22.855134   19328 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-controller-manager:v1.20.5-rc.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.20.5-rc.0
	I0310 20:51:22.855134   19328 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.20.5-rc.0
	I0310 20:51:22.875884   19328 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_1.7.0
	I0310 20:51:22.916832   19328 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.20.5-rc.0
	I0310 20:51:23.187595   19328 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v4: (3.6433514s)
	I0310 20:51:23.187595   19328 cache_images.go:104] "gcr.io/k8s-minikube/storage-provisioner:v4" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v4" does not exist at hash "85069258b98ac4e9f9fbd51dfba3b4212d8cd1d79df7d2ecff44b1319ed641cb" in container runtime
	I0310 20:51:23.187595   19328 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner:v4 -> C:\Users\jenkins\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v4
	I0310 20:51:23.187595   19328 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v4
	I0310 20:51:23.198955   19328 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v4
	I0310 20:51:23.485311   19328 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} k8s.gcr.io/pause:3.2: (3.9481458s)
	I0310 20:51:23.485476   19328 cache_images.go:104] "k8s.gcr.io/pause:3.2" needs transfer: "k8s.gcr.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0310 20:51:23.485476   19328 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\pause:3.2 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\pause_3.2
	I0310 20:51:23.485476   19328 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\pause_3.2
	I0310 20:51:23.485653   19328 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.13-0: (3.9727851s)
	I0310 20:51:23.485892   19328 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.20.5-rc.0: (3.8297735s)
	I0310 20:51:23.485892   19328 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.20.5-rc.0: (3.8708105s)
	I0310 20:51:23.486095   19328 cache_images.go:104] "k8s.gcr.io/kube-scheduler:v1.20.5-rc.0" needs transfer: "k8s.gcr.io/kube-scheduler:v1.20.5-rc.0" does not exist at hash "4968524da75599d1baf6f4a0bba8c2c141eb155b41a59376d8e50ce08560f044" in container runtime
	I0310 20:51:23.486095   19328 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-scheduler:v1.20.5-rc.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.20.5-rc.0
	I0310 20:51:23.486095   19328 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.20.5-rc.0
	I0310 20:51:23.486095   19328 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} docker.io/kubernetesui/metrics-scraper:v1.0.4: (3.8149583s)
	I0310 20:51:23.486095   19328 cache_images.go:104] "k8s.gcr.io/etcd:3.4.13-0" needs transfer: "k8s.gcr.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0310 20:51:23.486095   19328 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\etcd:3.4.13-0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\etcd_3.4.13-0
	I0310 20:51:23.486787   19328 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\etcd_3.4.13-0
	I0310 20:51:23.486095   19328 cache_images.go:104] "docker.io/kubernetesui/metrics-scraper:v1.0.4" needs transfer: "docker.io/kubernetesui/metrics-scraper:v1.0.4" does not exist at hash "86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4" in container runtime
	I0310 20:51:23.487056   19328 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper:v1.0.4 -> C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.4
	I0310 20:51:23.486095   19328 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} docker.io/kubernetesui/dashboard:v2.1.0: (3.7755843s)
	I0310 20:51:23.485892   19328 cache_images.go:104] "k8s.gcr.io/kube-apiserver:v1.20.5-rc.0" needs transfer: "k8s.gcr.io/kube-apiserver:v1.20.5-rc.0" does not exist at hash "17a1e6e90a9b41ffaf452e5ce3c1df9229811707ce980a57d2d9a44853a2839c" in container runtime
	I0310 20:51:23.486327   19328 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.20.5-rc.0: (3.7842335s)
	I0310 20:51:23.487322   19328 cache_images.go:104] "k8s.gcr.io/kube-proxy:v1.20.5-rc.0" needs transfer: "k8s.gcr.io/kube-proxy:v1.20.5-rc.0" does not exist at hash "455e87ddc011479e43f40c7335126e860f33f6bbd62f08ab90b048d8b52b8b76" in container runtime
	I0310 20:51:23.487322   19328 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-proxy:v1.20.5-rc.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.20.5-rc.0
	I0310 20:51:23.487322   19328 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.20.5-rc.0
	I0310 20:51:23.487322   19328 cache_images.go:104] "docker.io/kubernetesui/dashboard:v2.1.0" needs transfer: "docker.io/kubernetesui/dashboard:v2.1.0" does not exist at hash "9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db" in container runtime
	I0310 20:51:23.487474   19328 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\dashboard:v2.1.0 -> C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.1.0
	I0310 20:51:23.487771   19328 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.1.0
	I0310 20:51:23.486327   19328 ssh_runner.go:306] existence check for /var/lib/minikube/images/coredns_1.7.0: stat -c "%s %y" /var/lib/minikube/images/coredns_1.7.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/coredns_1.7.0': No such file or directory
	I0310 20:51:23.486523   19328 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-controller-manager_v1.20.5-rc.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.20.5-rc.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.20.5-rc.0': No such file or directory
	I0310 20:51:23.486787   19328 ssh_runner.go:306] existence check for /var/lib/minikube/images/storage-provisioner_v4: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v4': No such file or directory
	I0310 20:51:23.488028   19328 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.4
	I0310 20:51:23.488028   19328 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-apiserver:v1.20.5-rc.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.20.5-rc.0
	I0310 20:51:23.488028   19328 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.20.5-rc.0
	I0310 20:51:23.488397   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.20.5-rc.0 --> /var/lib/minikube/images/kube-controller-manager_v1.20.5-rc.0 (29539840 bytes)
	I0310 20:51:23.489143   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\coredns_1.7.0 --> /var/lib/minikube/images/coredns_1.7.0 (13984256 bytes)
	I0310 20:51:23.489143   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v4 --> /var/lib/minikube/images/storage-provisioner_v4 (8882688 bytes)
	I0310 20:51:23.514108   19328 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/metrics-scraper_v1.0.4
	I0310 20:51:23.514108   19328 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/dashboard_v2.1.0
	I0310 20:51:23.541955   19328 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.20.5-rc.0
	I0310 20:51:23.560906   19328 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.20.5-rc.0
	I0310 20:51:23.561290   19328 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.20.5-rc.0
	I0310 20:51:23.561975   19328 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.2
	I0310 20:51:23.566804   19328 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.13-0
	I0310 20:51:24.182869   19328 ssh_runner.go:306] existence check for /var/lib/minikube/images/etcd_3.4.13-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.13-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/etcd_3.4.13-0': No such file or directory
	I0310 20:51:24.182869   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\etcd_3.4.13-0 --> /var/lib/minikube/images/etcd_3.4.13-0 (86745600 bytes)
	I0310 20:51:24.187996   19328 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-proxy_v1.20.5-rc.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.20.5-rc.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.20.5-rc.0': No such file or directory
	I0310 20:51:24.188148   19328 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-apiserver_v1.20.5-rc.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.20.5-rc.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.20.5-rc.0': No such file or directory
	I0310 20:51:24.188337   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.20.5-rc.0 --> /var/lib/minikube/images/kube-proxy_v1.20.5-rc.0 (49553920 bytes)
	I0310 20:51:24.187996   19328 ssh_runner.go:306] existence check for /var/lib/minikube/images/pause_3.2: stat -c "%s %y" /var/lib/minikube/images/pause_3.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/pause_3.2': No such file or directory
	I0310 20:51:24.188569   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\pause_3.2 --> /var/lib/minikube/images/pause_3.2 (301056 bytes)
	I0310 20:51:24.188337   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.20.5-rc.0 --> /var/lib/minikube/images/kube-apiserver_v1.20.5-rc.0 (30450688 bytes)
	I0310 20:51:24.213583   19328 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-scheduler_v1.20.5-rc.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.20.5-rc.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.20.5-rc.0': No such file or directory
	I0310 20:51:24.215029   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.20.5-rc.0 --> /var/lib/minikube/images/kube-scheduler_v1.20.5-rc.0 (14247936 bytes)
	I0310 20:51:24.227598   19328 ssh_runner.go:306] existence check for /var/lib/minikube/images/dashboard_v2.1.0: stat -c "%s %y" /var/lib/minikube/images/dashboard_v2.1.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/dashboard_v2.1.0': No such file or directory
	I0310 20:51:24.228951   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.1.0 --> /var/lib/minikube/images/dashboard_v2.1.0 (67993600 bytes)
	I0310 20:51:24.229253   19328 ssh_runner.go:306] existence check for /var/lib/minikube/images/metrics-scraper_v1.0.4: stat -c "%s %y" /var/lib/minikube/images/metrics-scraper_v1.0.4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/metrics-scraper_v1.0.4': No such file or directory
	I0310 20:51:24.229456   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.4 --> /var/lib/minikube/images/metrics-scraper_v1.0.4 (16022528 bytes)
	W0310 20:51:25.061523   19328 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 20:51:25.061899   19328 retry.go:31] will retry after 276.165072ms: ssh: rejected: connect failed (open failed)
	W0310 20:51:25.062172   19328 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 20:51:25.062172   19328 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 20:51:25.062172   19328 retry.go:31] will retry after 360.127272ms: ssh: rejected: connect failed (open failed)
	W0310 20:51:25.062172   19328 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 20:51:25.062479   19328 retry.go:31] will retry after 234.428547ms: ssh: rejected: connect failed (open failed)
	I0310 20:51:25.062172   19328 retry.go:31] will retry after 291.140013ms: ssh: rejected: connect failed (open failed)
	I0310 20:51:25.301165   19328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	I0310 20:51:25.346005   19328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	I0310 20:51:25.361650   19328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	I0310 20:51:25.433315   19328 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	I0310 20:51:26.101302   19328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55143 SSHKeyPath:C:\Users\jenkins\.minikube\machines\no-preload-20210310204947-6496\id_rsa Username:docker}
	I0310 20:51:26.102835   19328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55143 SSHKeyPath:C:\Users\jenkins\.minikube\machines\no-preload-20210310204947-6496\id_rsa Username:docker}
	I0310 20:51:26.185627   19328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55143 SSHKeyPath:C:\Users\jenkins\.minikube\machines\no-preload-20210310204947-6496\id_rsa Username:docker}
	I0310 20:51:26.208658   19328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55143 SSHKeyPath:C:\Users\jenkins\.minikube\machines\no-preload-20210310204947-6496\id_rsa Username:docker}
	I0310 20:51:29.234723   19328 docker.go:167] Loading image: /var/lib/minikube/images/pause_3.2
	I0310 20:51:29.235517   19328 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/pause_3.2
	I0310 20:51:50.524305   19328 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/pause_3.2: (21.288516s)
	I0310 20:51:50.524489   19328 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\pause_3.2 from cache
	I0310 20:51:50.524667   19328 docker.go:167] Loading image: /var/lib/minikube/images/kube-scheduler_v1.20.5-rc.0
	I0310 20:51:50.534197   19328 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/kube-scheduler_v1.20.5-rc.0
	I0310 20:52:51.386614   19328 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/kube-scheduler_v1.20.5-rc.0: (1m0.8524964s)
	I0310 20:52:51.386614   19328 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.20.5-rc.0 from cache
	I0310 20:52:51.386614   19328 docker.go:167] Loading image: /var/lib/minikube/images/storage-provisioner_v4
	I0310 20:52:51.394638   19328 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/storage-provisioner_v4
	I0310 20:53:14.642635   19328 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/storage-provisioner_v4: (23.2480269s)
	I0310 20:53:14.642635   19328 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v4 from cache
	I0310 20:53:14.642980   19328 docker.go:167] Loading image: /var/lib/minikube/images/metrics-scraper_v1.0.4
	I0310 20:53:14.652544   19328 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/metrics-scraper_v1.0.4
	I0310 20:53:44.688236   19328 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/metrics-scraper_v1.0.4: (30.0354915s)
	I0310 20:53:44.688529   19328 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.4 from cache
	I0310 20:53:44.688529   19328 docker.go:167] Loading image: /var/lib/minikube/images/coredns_1.7.0
	I0310 20:53:44.699188   19328 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/coredns_1.7.0
	I0310 20:54:25.591461   19328 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/coredns_1.7.0: (40.8920819s)
	I0310 20:54:25.592112   19328 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\coredns_1.7.0 from cache
	I0310 20:54:25.592746   19328 docker.go:167] Loading image: /var/lib/minikube/images/dashboard_v2.1.0
	I0310 20:54:25.608886   19328 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/dashboard_v2.1.0
	I0310 20:57:05.778978   19328 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/dashboard_v2.1.0: (2m40.1713613s)
	I0310 20:57:05.778978   19328 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.1.0 from cache
	I0310 20:57:05.779294   19328 docker.go:167] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.20.5-rc.0
	I0310 20:57:05.798519   19328 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/kube-controller-manager_v1.20.5-rc.0
	I0310 20:57:35.101758   19328 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/kube-controller-manager_v1.20.5-rc.0: (29.3030969s)
	I0310 20:57:35.101758   19328 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.20.5-rc.0 from cache
	I0310 20:57:35.101758   19328 docker.go:167] Loading image: /var/lib/minikube/images/kube-apiserver_v1.20.5-rc.0
	I0310 20:57:35.108732   19328 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/kube-apiserver_v1.20.5-rc.0
	I0310 20:58:06.632029   19328 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/kube-apiserver_v1.20.5-rc.0: (31.5235027s)
	I0310 20:58:06.632346   19328 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.20.5-rc.0 from cache
	I0310 20:58:06.632346   19328 docker.go:167] Loading image: /var/lib/minikube/images/kube-proxy_v1.20.5-rc.0
	I0310 20:58:06.640761   19328 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/kube-proxy_v1.20.5-rc.0
	I0310 20:59:05.115101   19328 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/kube-proxy_v1.20.5-rc.0: (58.4746723s)
	I0310 20:59:05.115972   19328 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.20.5-rc.0 from cache
	I0310 20:59:05.115972   19328 docker.go:167] Loading image: /var/lib/minikube/images/etcd_3.4.13-0
	I0310 20:59:05.123793   19328 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/etcd_3.4.13-0
	I0310 20:59:59.184767   19328 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/etcd_3.4.13-0: (54.0608382s)
	I0310 20:59:59.184986   19328 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\etcd_3.4.13-0 from cache
	I0310 20:59:59.185254   19328 cache_images.go:111] Successfully loaded all cached images
	I0310 20:59:59.185254   19328 cache_images.go:80] LoadImages completed in 8m40.2852288s
	I0310 20:59:59.202433   19328 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0310 21:00:00.799151   19328 ssh_runner.go:189] Completed: docker info --format {{.CgroupDriver}}: (1.5967252s)
	I0310 21:00:00.800084   19328 cni.go:74] Creating CNI manager for ""
	I0310 21:00:00.800084   19328 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	I0310 21:00:00.800084   19328 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0310 21:00:00.800084   19328 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.7 APIServerPort:8443 KubernetesVersion:v1.20.5-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20210310204947-6496 NodeName:no-preload-20210310204947-6496 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0310 21:00:00.800639   19328 kubeadm.go:154] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.0.7
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "no-preload-20210310204947-6496"
	  kubeletExtraArgs:
	    node-ip: 172.17.0.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.0.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.5-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	
	I0310 21:00:00.800996   19328 kubeadm.go:919] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.5-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=no-preload-20210310204947-6496 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.5-rc.0 ClusterName:no-preload-20210310204947-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0310 21:00:00.812567   19328 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.5-rc.0
	I0310 21:00:00.915109   19328 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.20.5-rc.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.20.5-rc.0': No such file or directory
	
	Initiating transfer...
	I0310 21:00:00.924861   19328 ssh_runner.go:149] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.20.5-rc.0
	I0310 21:00:00.981350   19328 binary.go:56] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.20.5-rc.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.20.5-rc.0/bin/linux/amd64/kubectl.sha256
	I0310 21:00:00.981350   19328 binary.go:56] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.20.5-rc.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.20.5-rc.0/bin/linux/amd64/kubelet.sha256
	I0310 21:00:00.982208   19328 binary.go:56] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.20.5-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.20.5-rc.0/bin/linux/amd64/kubeadm.sha256
	I0310 21:00:00.999855   19328 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0310 21:00:01.058954   19328 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl
	I0310 21:00:01.070450   19328 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.20.5-rc.0/kubeadm
	I0310 21:00:01.171475   19328 ssh_runner.go:306] existence check for /var/lib/minikube/binaries/v1.20.5-rc.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.20.5-rc.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.20.5-rc.0/kubeadm': No such file or directory
	I0310 21:00:01.172363   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\linux\v1.20.5-rc.0/kubeadm --> /var/lib/minikube/binaries/v1.20.5-rc.0/kubeadm (39251968 bytes)
	I0310 21:00:01.182381   19328 ssh_runner.go:306] existence check for /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.20.5-rc.0/kubectl': No such file or directory
	I0310 21:00:01.182381   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\linux\v1.20.5-rc.0/kubectl --> /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl (40263680 bytes)
	I0310 21:00:01.201389   19328 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.20.5-rc.0/kubelet
	I0310 21:00:01.721042   19328 ssh_runner.go:306] existence check for /var/lib/minikube/binaries/v1.20.5-rc.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.20.5-rc.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.20.5-rc.0/kubelet': No such file or directory
	I0310 21:00:01.721523   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\linux\v1.20.5-rc.0/kubelet --> /var/lib/minikube/binaries/v1.20.5-rc.0/kubelet (114113512 bytes)
	I0310 21:00:27.426549   19328 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0310 21:00:27.483349   19328 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0310 21:00:27.693077   19328 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0310 21:00:27.872940   19328 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1861 bytes)
	I0310 21:00:28.025097   19328 ssh_runner.go:149] Run: grep 172.17.0.7	control-plane.minikube.internal$ /etc/hosts
	I0310 21:00:28.059999   19328 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "172.17.0.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0310 21:00:28.246689   19328 certs.go:52] Setting up C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496 for IP: 172.17.0.7
	I0310 21:00:28.247583   19328 certs.go:171] skipping minikubeCA CA generation: C:\Users\jenkins\.minikube\ca.key
	I0310 21:00:28.248079   19328 certs.go:171] skipping proxyClientCA CA generation: C:\Users\jenkins\.minikube\proxy-client-ca.key
	I0310 21:00:28.248791   19328 certs.go:275] skipping minikube-user signed cert generation: C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\client.key
	I0310 21:00:28.249210   19328 certs.go:279] generating minikube signed cert: C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\apiserver.key.d9a465bc
	I0310 21:00:28.249466   19328 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\apiserver.crt.d9a465bc with IP's: [172.17.0.7 10.96.0.1 127.0.0.1 10.0.0.1]
	I0310 21:00:28.418089   19328 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\apiserver.crt.d9a465bc ...
	I0310 21:00:28.418089   19328 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\apiserver.crt.d9a465bc: {Name:mk9f84dc2cdbc9c0999fc5b0a8e80f815defd5cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:00:28.439973   19328 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\apiserver.key.d9a465bc ...
	I0310 21:00:28.439973   19328 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\apiserver.key.d9a465bc: {Name:mkc3fa1c2eabda15617766891caf5934d19423ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:00:28.457306   19328 certs.go:290] copying C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\apiserver.crt.d9a465bc -> C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\apiserver.crt
	I0310 21:00:28.474494   19328 certs.go:294] copying C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\apiserver.key.d9a465bc -> C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\apiserver.key
	I0310 21:00:28.481352   19328 certs.go:279] generating aggregator signed cert: C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\proxy-client.key
	I0310 21:00:28.481965   19328 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\proxy-client.crt with IP's: []
	I0310 21:00:28.853973   19328 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\proxy-client.crt ...
	I0310 21:00:28.853973   19328 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\proxy-client.crt: {Name:mk1603a136e25b31f207cf28ffe9649fd6e5baf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:00:28.866988   19328 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\proxy-client.key ...
	I0310 21:00:28.866988   19328 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\proxy-client.key: {Name:mkc69566a232e654286eb817a053b4db6a8d23fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:00:28.880974   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992.pem (1338 bytes)
	W0310 21:00:28.880974   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.881977   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140.pem (1338 bytes)
	W0310 21:00:28.881977   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.881977   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156.pem (1338 bytes)
	W0310 21:00:28.881977   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.881977   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056.pem (1338 bytes)
	W0310 21:00:28.882968   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.882968   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476.pem (1338 bytes)
	W0310 21:00:28.882968   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.882968   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728.pem (1338 bytes)
	W0310 21:00:28.882968   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.883968   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984.pem (1338 bytes)
	W0310 21:00:28.883968   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.883968   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232.pem (1338 bytes)
	W0310 21:00:28.883968   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.883968   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512.pem (1338 bytes)
	W0310 21:00:28.884972   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.884972   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056.pem (1338 bytes)
	W0310 21:00:28.884972   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.884972   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516.pem (1338 bytes)
	W0310 21:00:28.884972   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.884972   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352.pem (1338 bytes)
	W0310 21:00:28.885970   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.885970   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920.pem (1338 bytes)
	W0310 21:00:28.885970   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.885970   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052.pem (1338 bytes)
	W0310 21:00:28.885970   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.886974   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452.pem (1338 bytes)
	W0310 21:00:28.886974   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.886974   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588.pem (1338 bytes)
	W0310 21:00:28.886974   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.886974   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944.pem (1338 bytes)
	W0310 21:00:28.887974   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.887974   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040.pem (1338 bytes)
	W0310 21:00:28.887974   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.887974   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172.pem (1338 bytes)
	W0310 21:00:28.887974   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.888973   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372.pem (1338 bytes)
	W0310 21:00:28.888973   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.888973   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396.pem (1338 bytes)
	W0310 21:00:28.888973   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.888973   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700.pem (1338 bytes)
	W0310 21:00:28.889973   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.889973   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736.pem (1338 bytes)
	W0310 21:00:28.889973   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.889973   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368.pem (1338 bytes)
	W0310 21:00:28.889973   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.890973   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492.pem (1338 bytes)
	W0310 21:00:28.890973   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.890973   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496.pem (1338 bytes)
	W0310 21:00:28.890973   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.890973   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552.pem (1338 bytes)
	W0310 21:00:28.891974   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.891974   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692.pem (1338 bytes)
	W0310 21:00:28.891974   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.891974   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856.pem (1338 bytes)
	W0310 21:00:28.891974   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.892979   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024.pem (1338 bytes)
	W0310 21:00:28.892979   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.892979   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160.pem (1338 bytes)
	W0310 21:00:28.892979   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.892979   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432.pem (1338 bytes)
	W0310 21:00:28.893994   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.893994   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440.pem (1338 bytes)
	W0310 21:00:28.893994   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.893994   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452.pem (1338 bytes)
	W0310 21:00:28.893994   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.893994   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800.pem (1338 bytes)
	W0310 21:00:28.894971   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.894971   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464.pem (1338 bytes)
	W0310 21:00:28.894971   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.894971   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748.pem (1338 bytes)
	W0310 21:00:28.894971   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.894971   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088.pem (1338 bytes)
	W0310 21:00:28.894971   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.895967   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520.pem (1338 bytes)
	W0310 21:00:28.895967   19328 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520_empty.pem, impossibly tiny 0 bytes
	I0310 21:00:28.895967   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca-key.pem (1679 bytes)
	I0310 21:00:28.895967   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca.pem (1078 bytes)
	I0310 21:00:28.895967   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\cert.pem (1123 bytes)
	I0310 21:00:28.896970   19328 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\key.pem (1679 bytes)
	I0310 21:00:28.908059   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0310 21:00:29.185732   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0310 21:00:29.410976   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0310 21:00:29.706235   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0310 21:00:29.966153   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0310 21:00:30.263393   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0310 21:00:30.495636   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0310 21:00:30.749583   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0310 21:00:30.935236   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1476.pem --> /usr/share/ca-certificates/1476.pem (1338 bytes)
	I0310 21:00:31.124742   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6368.pem --> /usr/share/ca-certificates/6368.pem (1338 bytes)
	I0310 21:00:31.342956   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5172.pem --> /usr/share/ca-certificates/5172.pem (1338 bytes)
	I0310 21:00:31.587618   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\12056.pem --> /usr/share/ca-certificates/12056.pem (1338 bytes)
	I0310 21:00:31.781814   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\800.pem --> /usr/share/ca-certificates/800.pem (1338 bytes)
	I0310 21:00:32.072563   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6496.pem --> /usr/share/ca-certificates/6496.pem (1338 bytes)
	I0310 21:00:32.396938   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4588.pem --> /usr/share/ca-certificates/4588.pem (1338 bytes)
	I0310 21:00:32.576718   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4944.pem --> /usr/share/ca-certificates/4944.pem (1338 bytes)
	I0310 21:00:32.855922   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3056.pem --> /usr/share/ca-certificates/3056.pem (1338 bytes)
	I0310 21:00:33.103998   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1156.pem --> /usr/share/ca-certificates/1156.pem (1338 bytes)
	I0310 21:00:33.388234   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5396.pem --> /usr/share/ca-certificates/5396.pem (1338 bytes)
	I0310 21:00:33.696421   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0310 21:00:33.980510   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\232.pem --> /usr/share/ca-certificates/232.pem (1338 bytes)
	I0310 21:00:34.227826   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1728.pem --> /usr/share/ca-certificates/1728.pem (1338 bytes)
	I0310 21:00:34.541324   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7440.pem --> /usr/share/ca-certificates/7440.pem (1338 bytes)
	I0310 21:00:34.828903   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7160.pem --> /usr/share/ca-certificates/7160.pem (1338 bytes)
	I0310 21:00:35.109018   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1984.pem --> /usr/share/ca-certificates/1984.pem (1338 bytes)
	I0310 21:00:35.465602   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3516.pem --> /usr/share/ca-certificates/3516.pem (1338 bytes)
	I0310 21:00:36.050970   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9088.pem --> /usr/share/ca-certificates/9088.pem (1338 bytes)
	I0310 21:00:36.287594   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5736.pem --> /usr/share/ca-certificates/5736.pem (1338 bytes)
	I0310 21:00:36.560719   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1140.pem --> /usr/share/ca-certificates/1140.pem (1338 bytes)
	I0310 21:00:36.772452   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7024.pem --> /usr/share/ca-certificates/7024.pem (1338 bytes)
	I0310 21:00:36.983359   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6856.pem --> /usr/share/ca-certificates/6856.pem (1338 bytes)
	I0310 21:00:37.148779   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4452.pem --> /usr/share/ca-certificates/4452.pem (1338 bytes)
	I0310 21:00:37.350084   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3920.pem --> /usr/share/ca-certificates/3920.pem (1338 bytes)
	I0310 21:00:37.641007   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6692.pem --> /usr/share/ca-certificates/6692.pem (1338 bytes)
	I0310 21:00:37.903593   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\2512.pem --> /usr/share/ca-certificates/2512.pem (1338 bytes)
	I0310 21:00:38.069343   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9520.pem --> /usr/share/ca-certificates/9520.pem (1338 bytes)
	I0310 21:00:38.400996   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\10992.pem --> /usr/share/ca-certificates/10992.pem (1338 bytes)
	I0310 21:00:38.615082   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6552.pem --> /usr/share/ca-certificates/6552.pem (1338 bytes)
	I0310 21:00:38.844436   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	I0310 21:00:39.119032   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5700.pem --> /usr/share/ca-certificates/5700.pem (1338 bytes)
	I0310 21:00:39.387284   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5040.pem --> /usr/share/ca-certificates/5040.pem (1338 bytes)
	I0310 21:00:39.768400   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8748.pem --> /usr/share/ca-certificates/8748.pem (1338 bytes)
	I0310 21:00:39.994352   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\352.pem --> /usr/share/ca-certificates/352.pem (1338 bytes)
	I0310 21:00:40.332245   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7432.pem --> /usr/share/ca-certificates/7432.pem (1338 bytes)
	I0310 21:00:40.585893   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5372.pem --> /usr/share/ca-certificates/5372.pem (1338 bytes)
	I0310 21:00:40.700595   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6492.pem --> /usr/share/ca-certificates/6492.pem (1338 bytes)
	I0310 21:00:40.894004   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8464.pem --> /usr/share/ca-certificates/8464.pem (1338 bytes)
	I0310 21:00:41.181684   19328 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4052.pem --> /usr/share/ca-certificates/4052.pem (1338 bytes)
	I0310 21:00:41.388074   19328 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
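The scp lines above stage each local cert into the node and log the byte count of every copy; minikube performs this over SSH via its ssh_runner, but per file the effect is an ordinary copy. A minimal local stand-in (hypothetical temp paths and a dummy file in place of a real cert, plain `cp` instead of SSH):

```shell
# Minimal local stand-in for the scp staging above (hypothetical temp
# paths; minikube actually copies over SSH via its ssh_runner).
set -e
SRC=$(mktemp -d)   # plays the role of C:\Users\jenkins\.minikube\certs
DST=$(mktemp -d)   # plays the role of /usr/share/ca-certificates
head -c 1338 /dev/zero > "$SRC/1476.pem"   # dummy 1338-byte stand-in "cert"
cp "$SRC/1476.pem" "$DST/1476.pem"
# Report the copy the way the log does: source, destination, byte count.
echo "scp $SRC/1476.pem --> $DST/1476.pem ($(wc -c < "$DST/1476.pem") bytes)"
```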
	I0310 21:00:41.596310   19328 ssh_runner.go:149] Run: openssl version
	I0310 21:00:41.653746   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1476.pem && ln -fs /usr/share/ca-certificates/1476.pem /etc/ssl/certs/1476.pem"
	I0310 21:00:41.787094   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1476.pem
	I0310 21:00:41.821502   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:50 /usr/share/ca-certificates/1476.pem
	I0310 21:00:41.832099   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1476.pem
	I0310 21:00:41.896758   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1476.pem /etc/ssl/certs/51391683.0"
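The five-line cycle just above (test for the cert, `ls -la` it, hash it, symlink `<hash>.0`) repeats once per cert for the rest of this section. It builds OpenSSL's hashed CA directory layout, where each trusted cert must be reachable under its subject-hash filename. A sketch of one cycle, assuming the `openssl` CLI is available and using a temp dir and a throwaway self-signed cert instead of `/etc/ssl/certs`:

```shell
# One hash-and-link cycle from the log, in a scratch directory.
set -e
CERTS=$(mktemp -d)
# Throwaway self-signed cert standing in for e.g. 1476.pem.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=example" \
  -keyout "$CERTS/key.pem" -out "$CERTS/1476.pem" 2>/dev/null
# Same pair of steps the Run: lines show: compute the subject hash,
# then link <hash>.0 at the cert so hash-based lookup finds it.
HASH=$(openssl x509 -hash -noout -in "$CERTS/1476.pem")
ln -fs "$CERTS/1476.pem" "$CERTS/$HASH.0"
ls -la "$CERTS/$HASH.0"
```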
	I0310 21:00:41.973073   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6368.pem && ln -fs /usr/share/ca-certificates/6368.pem /etc/ssl/certs/6368.pem"
	I0310 21:00:42.039911   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6368.pem
	I0310 21:00:42.074976   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 19:11 /usr/share/ca-certificates/6368.pem
	I0310 21:00:42.084020   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6368.pem
	I0310 21:00:42.135384   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6368.pem /etc/ssl/certs/51391683.0"
	I0310 21:00:42.207999   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5172.pem && ln -fs /usr/share/ca-certificates/5172.pem /etc/ssl/certs/5172.pem"
	I0310 21:00:42.289901   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5172.pem
	I0310 21:00:42.329873   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 26 21:25 /usr/share/ca-certificates/5172.pem
	I0310 21:00:42.360137   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5172.pem
	I0310 21:00:42.408967   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5172.pem /etc/ssl/certs/51391683.0"
	I0310 21:00:42.514066   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12056.pem && ln -fs /usr/share/ca-certificates/12056.pem /etc/ssl/certs/12056.pem"
	I0310 21:00:42.580812   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/12056.pem
	I0310 21:00:42.620414   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  6 07:21 /usr/share/ca-certificates/12056.pem
	I0310 21:00:42.642175   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12056.pem
	I0310 21:00:42.707129   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12056.pem /etc/ssl/certs/51391683.0"
	I0310 21:00:42.779425   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/800.pem && ln -fs /usr/share/ca-certificates/800.pem /etc/ssl/certs/800.pem"
	I0310 21:00:42.864600   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/800.pem
	I0310 21:00:42.903846   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 24 01:48 /usr/share/ca-certificates/800.pem
	I0310 21:00:42.918221   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/800.pem
	I0310 21:00:42.968499   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/800.pem /etc/ssl/certs/51391683.0"
	I0310 21:00:43.024876   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6496.pem && ln -fs /usr/share/ca-certificates/6496.pem /etc/ssl/certs/6496.pem"
	I0310 21:00:43.098532   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6496.pem
	I0310 21:00:43.126287   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 19:16 /usr/share/ca-certificates/6496.pem
	I0310 21:00:43.137084   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6496.pem
	I0310 21:00:43.189133   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6496.pem /etc/ssl/certs/51391683.0"
	I0310 21:00:43.274360   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4588.pem && ln -fs /usr/share/ca-certificates/4588.pem /etc/ssl/certs/4588.pem"
	I0310 21:00:43.390007   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4588.pem
	I0310 21:00:43.413026   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  3 21:41 /usr/share/ca-certificates/4588.pem
	I0310 21:00:43.424038   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4588.pem
	I0310 21:00:43.534055   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4588.pem /etc/ssl/certs/51391683.0"
	I0310 21:00:43.637942   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4944.pem && ln -fs /usr/share/ca-certificates/4944.pem /etc/ssl/certs/4944.pem"
	I0310 21:00:43.713341   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4944.pem
	I0310 21:00:43.751266   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  9 23:40 /usr/share/ca-certificates/4944.pem
	I0310 21:00:43.770225   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4944.pem
	I0310 21:00:43.812046   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4944.pem /etc/ssl/certs/51391683.0"
	I0310 21:00:43.996599   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3056.pem && ln -fs /usr/share/ca-certificates/3056.pem /etc/ssl/certs/3056.pem"
	I0310 21:00:44.238860   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3056.pem
	I0310 21:00:44.273378   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:11 /usr/share/ca-certificates/3056.pem
	I0310 21:00:44.284215   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3056.pem
	I0310 21:00:44.363497   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3056.pem /etc/ssl/certs/51391683.0"
	I0310 21:00:44.455068   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1156.pem && ln -fs /usr/share/ca-certificates/1156.pem /etc/ssl/certs/1156.pem"
	I0310 21:00:44.550880   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1156.pem
	I0310 21:00:44.605150   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 00:26 /usr/share/ca-certificates/1156.pem
	I0310 21:00:44.621600   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1156.pem
	I0310 21:00:44.664349   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1156.pem /etc/ssl/certs/51391683.0"
	I0310 21:00:44.736636   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5396.pem && ln -fs /usr/share/ca-certificates/5396.pem /etc/ssl/certs/5396.pem"
	I0310 21:00:44.809970   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5396.pem
	I0310 21:00:44.848933   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  8 23:38 /usr/share/ca-certificates/5396.pem
	I0310 21:00:44.858493   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5396.pem
	I0310 21:00:44.953860   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5396.pem /etc/ssl/certs/51391683.0"
	I0310 21:00:45.023418   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0310 21:00:45.133523   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0310 21:00:45.159411   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Jan  5 23:33 /usr/share/ca-certificates/minikubeCA.pem
	I0310 21:00:45.174254   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0310 21:00:45.223005   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0310 21:00:45.292330   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/232.pem && ln -fs /usr/share/ca-certificates/232.pem /etc/ssl/certs/232.pem"
	I0310 21:00:45.364304   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/232.pem
	I0310 21:00:45.399356   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 28 02:13 /usr/share/ca-certificates/232.pem
	I0310 21:00:45.409390   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/232.pem
	I0310 21:00:45.468353   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/232.pem /etc/ssl/certs/51391683.0"
	I0310 21:00:45.539298   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1728.pem && ln -fs /usr/share/ca-certificates/1728.pem /etc/ssl/certs/1728.pem"
	I0310 21:00:45.597931   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1728.pem
	I0310 21:00:45.624948   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 20:57 /usr/share/ca-certificates/1728.pem
	I0310 21:00:45.634629   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1728.pem
	I0310 21:00:45.703185   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0"
	I0310 21:00:45.781028   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7440.pem && ln -fs /usr/share/ca-certificates/7440.pem /etc/ssl/certs/7440.pem"
	I0310 21:00:45.865837   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7440.pem
	I0310 21:00:45.905285   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 13 14:39 /usr/share/ca-certificates/7440.pem
	I0310 21:00:45.915491   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7440.pem
	I0310 21:00:45.964070   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7440.pem /etc/ssl/certs/51391683.0"
	I0310 21:00:46.064962   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7160.pem && ln -fs /usr/share/ca-certificates/7160.pem /etc/ssl/certs/7160.pem"
	I0310 21:00:46.166762   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7160.pem
	I0310 21:00:46.189079   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 12 04:51 /usr/share/ca-certificates/7160.pem
	I0310 21:00:46.193634   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7160.pem
	I0310 21:00:46.248079   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7160.pem /etc/ssl/certs/51391683.0"
	I0310 21:00:46.375059   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1984.pem && ln -fs /usr/share/ca-certificates/1984.pem /etc/ssl/certs/1984.pem"
	I0310 21:00:46.448464   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1984.pem
	I0310 21:00:46.487322   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 21:55 /usr/share/ca-certificates/1984.pem
	I0310 21:00:46.497991   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1984.pem
	I0310 21:00:46.585922   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1984.pem /etc/ssl/certs/51391683.0"
	I0310 21:00:46.690231   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3516.pem && ln -fs /usr/share/ca-certificates/3516.pem /etc/ssl/certs/3516.pem"
	I0310 21:00:46.764549   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3516.pem
	I0310 21:00:46.791086   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 19:10 /usr/share/ca-certificates/3516.pem
	I0310 21:00:46.801109   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3516.pem
	I0310 21:00:46.888380   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3516.pem /etc/ssl/certs/51391683.0"
	I0310 21:00:46.941875   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9088.pem && ln -fs /usr/share/ca-certificates/9088.pem /etc/ssl/certs/9088.pem"
	I0310 21:00:47.021495   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9088.pem
	I0310 21:00:47.057391   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 00:22 /usr/share/ca-certificates/9088.pem
	I0310 21:00:47.067650   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9088.pem
	I0310 21:00:47.106539   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9088.pem /etc/ssl/certs/51391683.0"
	I0310 21:00:47.198166   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5736.pem && ln -fs /usr/share/ca-certificates/5736.pem /etc/ssl/certs/5736.pem"
	I0310 21:00:47.267672   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5736.pem
	I0310 21:00:47.301122   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 25 23:18 /usr/share/ca-certificates/5736.pem
	I0310 21:00:47.315982   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5736.pem
	I0310 21:00:47.366137   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5736.pem /etc/ssl/certs/51391683.0"
	I0310 21:00:47.480979   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1140.pem && ln -fs /usr/share/ca-certificates/1140.pem /etc/ssl/certs/1140.pem"
	I0310 21:00:47.584147   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1140.pem
	I0310 21:00:47.646736   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 02:25 /usr/share/ca-certificates/1140.pem
	I0310 21:00:47.655231   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1140.pem
	I0310 21:00:47.734971   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1140.pem /etc/ssl/certs/51391683.0"
	I0310 21:00:47.837495   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7024.pem && ln -fs /usr/share/ca-certificates/7024.pem /etc/ssl/certs/7024.pem"
	I0310 21:00:48.053515   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7024.pem
	I0310 21:00:48.137322   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 23:11 /usr/share/ca-certificates/7024.pem
	I0310 21:00:48.145879   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7024.pem
	I0310 21:00:48.259820   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7024.pem /etc/ssl/certs/51391683.0"
	I0310 21:00:48.336139   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6856.pem && ln -fs /usr/share/ca-certificates/6856.pem /etc/ssl/certs/6856.pem"
	I0310 21:00:48.444573   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6856.pem
	I0310 21:00:48.476605   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 00:21 /usr/share/ca-certificates/6856.pem
	I0310 21:00:48.486914   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6856.pem
	I0310 21:00:48.561547   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6856.pem /etc/ssl/certs/51391683.0"
	I0310 21:00:48.675774   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4452.pem && ln -fs /usr/share/ca-certificates/4452.pem /etc/ssl/certs/4452.pem"
	I0310 21:00:48.751811   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4452.pem
	I0310 21:00:48.786326   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 22 23:48 /usr/share/ca-certificates/4452.pem
	I0310 21:00:48.800394   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4452.pem
	I0310 21:00:48.844235   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4452.pem /etc/ssl/certs/51391683.0"
	I0310 21:00:48.929870   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3920.pem && ln -fs /usr/share/ca-certificates/3920.pem /etc/ssl/certs/3920.pem"
	I0310 21:00:49.325248   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3920.pem
	I0310 21:00:49.362405   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 22:06 /usr/share/ca-certificates/3920.pem
	I0310 21:00:49.372786   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3920.pem
	I0310 21:00:49.415615   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3920.pem /etc/ssl/certs/51391683.0"
	I0310 21:00:49.493271   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6692.pem && ln -fs /usr/share/ca-certificates/6692.pem /etc/ssl/certs/6692.pem"
	I0310 21:00:49.557457   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6692.pem
	I0310 21:00:49.590003   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 20:42 /usr/share/ca-certificates/6692.pem
	I0310 21:00:49.600374   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6692.pem
	I0310 21:00:49.703743   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6692.pem /etc/ssl/certs/51391683.0"
	I0310 21:00:49.770221   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2512.pem && ln -fs /usr/share/ca-certificates/2512.pem /etc/ssl/certs/2512.pem"
	I0310 21:00:49.886417   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2512.pem
	I0310 21:00:49.921837   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:32 /usr/share/ca-certificates/2512.pem
	I0310 21:00:49.932786   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2512.pem
	I0310 21:00:49.997465   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2512.pem /etc/ssl/certs/51391683.0"
	I0310 21:00:50.093142   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9520.pem && ln -fs /usr/share/ca-certificates/9520.pem /etc/ssl/certs/9520.pem"
	I0310 21:00:50.155176   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9520.pem
	I0310 21:00:50.181729   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 14:54 /usr/share/ca-certificates/9520.pem
	I0310 21:00:50.195167   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9520.pem
	I0310 21:00:50.246317   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9520.pem /etc/ssl/certs/51391683.0"
	I0310 21:00:50.307218   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10992.pem && ln -fs /usr/share/ca-certificates/10992.pem /etc/ssl/certs/10992.pem"
	I0310 21:00:50.396004   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/10992.pem
	I0310 21:00:50.426252   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 21:44 /usr/share/ca-certificates/10992.pem
	I0310 21:00:50.436312   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10992.pem
	I0310 21:00:50.477896   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10992.pem /etc/ssl/certs/51391683.0"
	I0310 21:00:50.555944   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6552.pem && ln -fs /usr/share/ca-certificates/6552.pem /etc/ssl/certs/6552.pem"
	I0310 21:00:50.649606   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6552.pem
	I0310 21:00:50.677596   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 19 22:08 /usr/share/ca-certificates/6552.pem
	I0310 21:00:50.687729   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6552.pem
	I0310 21:00:50.803554   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6552.pem /etc/ssl/certs/51391683.0"
	I0310 21:00:50.975398   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7452.pem && ln -fs /usr/share/ca-certificates/7452.pem /etc/ssl/certs/7452.pem"
	I0310 21:00:51.077450   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7452.pem
	I0310 21:00:51.105886   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 20 00:41 /usr/share/ca-certificates/7452.pem
	I0310 21:00:51.115874   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7452.pem
	I0310 21:00:51.181077   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7452.pem /etc/ssl/certs/51391683.0"
	I0310 21:00:51.264693   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5700.pem && ln -fs /usr/share/ca-certificates/5700.pem /etc/ssl/certs/5700.pem"
	I0310 21:00:51.359666   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5700.pem
	I0310 21:00:51.394930   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  1 19:58 /usr/share/ca-certificates/5700.pem
	I0310 21:00:51.416445   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5700.pem
	I0310 21:00:51.481755   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5700.pem /etc/ssl/certs/51391683.0"
	I0310 21:00:51.557050   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5040.pem && ln -fs /usr/share/ca-certificates/5040.pem /etc/ssl/certs/5040.pem"
	I0310 21:00:51.657135   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5040.pem
	I0310 21:00:51.702497   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 08:36 /usr/share/ca-certificates/5040.pem
	I0310 21:00:51.711164   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5040.pem
	I0310 21:00:51.794468   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5040.pem /etc/ssl/certs/51391683.0"
	I0310 21:00:51.859710   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8748.pem && ln -fs /usr/share/ca-certificates/8748.pem /etc/ssl/certs/8748.pem"
	I0310 21:00:51.934347   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8748.pem
	I0310 21:00:51.974884   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 19:09 /usr/share/ca-certificates/8748.pem
	I0310 21:00:51.995554   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8748.pem
	I0310 21:00:52.045354   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8748.pem /etc/ssl/certs/51391683.0"
	I0310 21:00:52.203181   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/352.pem && ln -fs /usr/share/ca-certificates/352.pem /etc/ssl/certs/352.pem"
	I0310 21:00:52.288166   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/352.pem
	I0310 21:00:52.314216   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 12 14:51 /usr/share/ca-certificates/352.pem
	I0310 21:00:52.323444   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/352.pem
	I0310 21:00:52.373845   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/352.pem /etc/ssl/certs/51391683.0"
	I0310 21:00:52.535953   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7432.pem && ln -fs /usr/share/ca-certificates/7432.pem /etc/ssl/certs/7432.pem"
	I0310 21:00:52.597904   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7432.pem
	I0310 21:00:52.634978   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 17:58 /usr/share/ca-certificates/7432.pem
	I0310 21:00:52.647136   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7432.pem
	I0310 21:00:52.701931   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7432.pem /etc/ssl/certs/51391683.0"
	I0310 21:00:52.771978   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5372.pem && ln -fs /usr/share/ca-certificates/5372.pem /etc/ssl/certs/5372.pem"
	I0310 21:00:52.890706   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5372.pem
	I0310 21:00:52.924788   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 23 00:40 /usr/share/ca-certificates/5372.pem
	I0310 21:00:52.934537   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5372.pem
	I0310 21:00:52.973456   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5372.pem /etc/ssl/certs/51391683.0"
	I0310 21:00:53.035430   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6492.pem && ln -fs /usr/share/ca-certificates/6492.pem /etc/ssl/certs/6492.pem"
	I0310 21:00:53.103878   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6492.pem
	I0310 21:00:53.143939   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 01:11 /usr/share/ca-certificates/6492.pem
	I0310 21:00:53.152050   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6492.pem
	I0310 21:00:53.234870   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6492.pem /etc/ssl/certs/51391683.0"
	I0310 21:00:53.354105   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8464.pem && ln -fs /usr/share/ca-certificates/8464.pem /etc/ssl/certs/8464.pem"
	I0310 21:00:53.430893   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8464.pem
	I0310 21:00:53.502940   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:32 /usr/share/ca-certificates/8464.pem
	I0310 21:00:53.526072   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8464.pem
	I0310 21:00:53.622463   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8464.pem /etc/ssl/certs/51391683.0"
	I0310 21:00:53.722595   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4052.pem && ln -fs /usr/share/ca-certificates/4052.pem /etc/ssl/certs/4052.pem"
	I0310 21:00:53.796500   19328 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4052.pem
	I0310 21:00:53.821684   19328 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 18:40 /usr/share/ca-certificates/4052.pem
	I0310 21:00:53.834857   19328 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4052.pem
	I0310 21:00:53.880525   19328 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4052.pem /etc/ssl/certs/51391683.0"
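The loop above hashes each CA certificate and symlinks it into `/etc/ssl/certs` under its OpenSSL subject-name hash, which is how OpenSSL locates certs in a hashed directory. A minimal, self-contained sketch of that step (not minikube's code; it generates a throwaway self-signed cert under `/tmp` so it is safe to run anywhere):

```shell
# Generate a throwaway self-signed cert so the example is self-contained.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example" \
  -keyout /tmp/example.key -out /tmp/example.pem -days 1 2>/dev/null

# `openssl x509 -hash` prints the subject-name hash OpenSSL uses for
# directory lookups; the symlink <hash>.0 makes the cert discoverable.
HASH=$(openssl x509 -hash -noout -in /tmp/example.pem)
ln -fs /tmp/example.pem "/tmp/${HASH}.0"
ls -l "/tmp/${HASH}.0"
```

On a real host the symlink target directory would be `/etc/ssl/certs` rather than `/tmp`, as in the log lines above.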
	I0310 21:00:53.947306   19328 kubeadm.go:385] StartCluster: {Name:no-preload-20210310204947-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.5-rc.0 ClusterName:no-preload-20210310204947-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APISer
verIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.7 Port:8443 KubernetesVersion:v1.20.5-rc.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 21:00:53.952677   19328 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0310 21:00:54.605006   19328 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0310 21:00:54.733104   19328 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0310 21:00:54.786567   19328 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	I0310 21:00:54.800289   19328 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0310 21:00:54.884815   19328 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0310 21:00:54.884815   19328 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0310 21:05:37.901350   19328 out.go:150]   - Generating certificates and keys ...
	I0310 21:05:37.904895   19328 out.go:150]   - Booting up control plane ...
	W0310 21:05:37.927756   19328 out.go:191] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.5-rc.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-20210310204947-6496] and IPs [172.17.0.7 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-20210310204947-6496] and IPs [172.17.0.7 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
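The `IsDockerSystemdCheck` warning in the stderr above ("detected \"cgroupfs\" as the Docker cgroup driver") is typically resolved by switching Docker's cgroup driver to systemd via `daemon.json`. A hedged sketch (not from the log; written against a temp path so it is safe to run — on a real host the file would be `/etc/docker/daemon.json`, followed by a docker restart):

```shell
# Hypothetical path for illustration; the real file is /etc/docker/daemon.json.
DAEMON_JSON=/tmp/daemon.json
cat > "$DAEMON_JSON" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# Confirm the setting landed in the file.
grep -q 'native.cgroupdriver=systemd' "$DAEMON_JSON"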
	I0310 21:05:37.928323   19328 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0310 21:06:50.658207   19328 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force": (1m12.7300306s)
	I0310 21:06:50.667357   19328 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0310 21:06:50.817097   19328 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0310 21:06:51.233974   19328 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	I0310 21:06:51.259484   19328 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0310 21:06:51.489867   19328 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0310 21:06:51.489867   19328 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0310 21:07:08.398779   19328 out.go:150]   - Generating certificates and keys ...
	I0310 21:07:16.692418   19328 out.go:150]   - Booting up control plane ...
	I0310 21:11:18.589405   19328 kubeadm.go:387] StartCluster complete in 10m24.6434922s
	I0310 21:11:18.597639   19328 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0310 21:11:28.031683   19328 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: (9.434058s)
	I0310 21:11:28.031683   19328 logs.go:255] 1 containers: [ba5aace99e81]
	I0310 21:11:28.041044   19328 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0310 21:11:42.173398   19328 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: (14.1320409s)
	I0310 21:11:42.173398   19328 logs.go:255] 1 containers: [81a39b1bd4f1]
	I0310 21:11:42.186226   19328 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0310 21:11:53.163432   19328 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: (10.9769494s)
	I0310 21:11:53.163680   19328 logs.go:255] 0 containers: []
	W0310 21:11:53.163680   19328 logs.go:257] No container was found matching "coredns"
	I0310 21:11:53.173416   19328 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0310 21:11:59.415223   19328 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: (6.2418161s)
	I0310 21:11:59.416266   19328 logs.go:255] 1 containers: [e63ae4a86183]
	I0310 21:11:59.430843   19328 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0310 21:12:03.107315   19328 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: (3.6751065s)
	I0310 21:12:03.107315   19328 logs.go:255] 0 containers: []
	W0310 21:12:03.107315   19328 logs.go:257] No container was found matching "kube-proxy"
	I0310 21:12:03.121351   19328 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0310 21:12:07.346278   19328 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}: (4.2249334s)
	I0310 21:12:07.347153   19328 logs.go:255] 0 containers: []
	W0310 21:12:07.347153   19328 logs.go:257] No container was found matching "kubernetes-dashboard"
	I0310 21:12:07.356812   19328 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0310 21:12:13.496680   19328 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: (6.1398761s)
	I0310 21:12:13.497325   19328 logs.go:255] 0 containers: []
	W0310 21:12:13.497325   19328 logs.go:257] No container was found matching "storage-provisioner"
	I0310 21:12:13.505367   19328 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0310 21:12:23.947962   19328 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: (10.4426095s)
	I0310 21:12:23.948964   19328 logs.go:255] 2 containers: [f4f5dad286f7 5e2289334650]
	I0310 21:12:23.948964   19328 logs.go:122] Gathering logs for dmesg ...
	I0310 21:12:23.948964   19328 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0310 21:12:25.895568   19328 ssh_runner.go:189] Completed: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400": (1.9466062s)
	I0310 21:12:25.898560   19328 logs.go:122] Gathering logs for describe nodes ...
	I0310 21:12:25.898560   19328 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0310 21:13:42.865353   19328 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1m16.9669011s)
	I0310 21:13:42.868563   19328 logs.go:122] Gathering logs for kube-apiserver [ba5aace99e81] ...
	I0310 21:13:42.868563   19328 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 ba5aace99e81"
	I0310 21:14:02.672034   19328 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 ba5aace99e81": (19.8034991s)
	I0310 21:14:02.704779   19328 logs.go:122] Gathering logs for container status ...
	I0310 21:14:02.704779   19328 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0310 21:14:15.768923   19328 ssh_runner.go:189] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (13.0641632s)
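The `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a` command above uses a shell fallback idiom: if `crictl` is not on the PATH, `which` prints nothing and fails, so the `|| echo` substitutes the bare name and the trailing `|| sudo docker ps -a` can still take over. A minimal sketch of the idiom (the command name below is made up and assumed not to be installed):

```shell
# Fallback idiom: `which` prints the resolved path on success and nothing
# (with a nonzero exit status) on failure, so `|| echo` supplies the bare
# name instead.
name="no-such-tool-xyz"   # hypothetical command, assumed NOT installed
cmd=$(which "$name" || echo "$name")
echo "$cmd"
```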
	I0310 21:14:15.769623   19328 logs.go:122] Gathering logs for kubelet ...
	I0310 21:14:15.769623   19328 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0310 21:14:19.601863   19328 ssh_runner.go:189] Completed: /bin/bash -c "sudo journalctl -u kubelet -n 400": (3.8322451s)
	I0310 21:14:19.666624   19328 logs.go:122] Gathering logs for etcd [81a39b1bd4f1] ...
	I0310 21:14:19.666624   19328 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 81a39b1bd4f1"
	I0310 21:14:36.971030   19328 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 81a39b1bd4f1": (17.3044298s)
	I0310 21:14:37.005997   19328 logs.go:122] Gathering logs for kube-scheduler [e63ae4a86183] ...
	I0310 21:14:37.007012   19328 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 e63ae4a86183"
	I0310 21:14:52.195255   19328 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 e63ae4a86183": (15.1882638s)
	I0310 21:14:52.215689   19328 logs.go:122] Gathering logs for kube-controller-manager [f4f5dad286f7] ...
	I0310 21:14:52.215689   19328 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 f4f5dad286f7"
	I0310 21:15:03.509080   19328 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 f4f5dad286f7": (11.293407s)
	I0310 21:15:03.514067   19328 logs.go:122] Gathering logs for kube-controller-manager [5e2289334650] ...
	I0310 21:15:03.514067   19328 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 5e2289334650"
	I0310 21:15:19.403060   19328 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 5e2289334650": (15.889015s)
	I0310 21:15:19.424701   19328 logs.go:122] Gathering logs for Docker ...
	I0310 21:15:19.424701   19328 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0310 21:15:21.427318   19328 ssh_runner.go:189] Completed: /bin/bash -c "sudo journalctl -u docker -n 400": (2.0026197s)
	W0310 21:15:21.432957   19328 out.go:312] Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.5-rc.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in Docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
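The troubleshooting steps kubeadm prints above can be exercised directly; the container-listing step in particular is plain grep plumbing. A sketch against fabricated `docker ps -a` output (the container names below are made up for illustration, not taken from this run):

```shell
# Simulated `docker ps -a` lines; on a real node you would pipe the actual
# command instead:  docker ps -a | grep kube | grep -v pause
sample='abc123  k8s_kube-apiserver_kube-apiserver-minikube
def456  k8s_POD_kube-apiserver-minikube_pause
789fed  k8s_etcd_etcd-minikube
c0ffee  unrelated_nginx'

# Keep Kubernetes component containers, drop pause sandboxes and non-kube rows.
kube_containers=$(printf '%s\n' "$sample" | grep kube | grep -v pause)
printf '%s\n' "$kube_containers"
# Then inspect the failing container's output:  docker logs <CONTAINERID>
```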
	W0310 21:15:21.432957   19328 out.go:191] * 
	W0310 21:15:21.433321   19328 out.go:191] X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W0310 21:15:21.442634   19328 out.go:191] * 
	W0310 21:15:21.442634   19328 out.go:191] * minikube is exiting due to an error. If the above message is not useful, open an issue:
	W0310 21:15:21.442634   19328 out.go:191]   - https://github.com/kubernetes/minikube/issues/new/choose
	I0310 21:15:21.458169   19328 out.go:129] 
	W0310 21:15:21.458169   19328 out.go:191] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W0310 21:15:21.459378   19328 out.go:191] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0310 21:15:21.459378   19328 out.go:191] * Related issue: https://github.com/kubernetes/minikube/issues/4172
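The `[WARNING IsDockerSystemdCheck]` in the stderr above and the suggestion to pass `--extra-config=kubelet.cgroup-driver=systemd` are two sides of the same mismatch: the kubelet and Docker must agree on a cgroup driver. The alternative fix is switching Docker itself to the systemd driver via `/etc/docker/daemon.json` (`exec-opts` is Docker's documented switch for this). A sketch that stages the config in a temp file; actually installing it and restarting Docker requires root, so those steps are shown only as comments:

```shell
# Stage the daemon.json change in a scratch location first.
out="${TMPDIR:-/tmp}/daemon.json.example"
cat > "$out" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
cat "$out"
# On the actual host (root required), merge this into any existing config:
#   sudo cp "$out" /etc/docker/daemon.json
#   sudo systemctl restart docker
```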
	I0310 21:15:21.467206   19328 out.go:129] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:157: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p no-preload-20210310204947-6496 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.20.5-rc.0": exit status 109
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:226: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:227: (dbg) Run:  docker inspect no-preload-20210310204947-6496
helpers_test.go:231: (dbg) docker inspect no-preload-20210310204947-6496:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ba4c8f2ea9992daf32a90064d8be793a04b4c15ede3861cbe7cc5b1d10bd3966",
	        "Created": "2021-03-10T20:50:09.5134495Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 226707,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-03-10T20:50:18.2832035Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a776c544501ab7f8d55c0f9d8df39bc284df5e744ef1ab4fa59bbd753c98d5f6",
	        "ResolvConfPath": "/var/lib/docker/containers/ba4c8f2ea9992daf32a90064d8be793a04b4c15ede3861cbe7cc5b1d10bd3966/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ba4c8f2ea9992daf32a90064d8be793a04b4c15ede3861cbe7cc5b1d10bd3966/hostname",
	        "HostsPath": "/var/lib/docker/containers/ba4c8f2ea9992daf32a90064d8be793a04b4c15ede3861cbe7cc5b1d10bd3966/hosts",
	        "LogPath": "/var/lib/docker/containers/ba4c8f2ea9992daf32a90064d8be793a04b4c15ede3861cbe7cc5b1d10bd3966/ba4c8f2ea9992daf32a90064d8be793a04b4c15ede3861cbe7cc5b1d10bd3966-json.log",
	        "Name": "/no-preload-20210310204947-6496",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20210310204947-6496:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/db9786265e9f068e04d70e18087b62c096e075ee52427c1e4a3908dea5608887-init/diff:/var/lib/docker/overlay2/e5312d15a8a915e93a04215cc746ab0f3ee777399eec34d39c62e1159ded6475/diff:/var/lib/docker/overlay2/7a0a882052451678339ecb9d7440ec6d5af2189761df37cbde268f01c77df460/diff:/var/lib/docker/overlay2/746bde091f748e661ba6c95c88447941058b3ed556ae8093cbbb4e946b8cc8fc/diff:/var/lib/docker/overlay2/fc816087ea38e2594891ac87d2dada91401ed5005ffde006568049c9be7a9cb3/diff:/var/lib/docker/overlay2/c938fe5d14accfe7fb1cfd038f5f0c36e7c4e36df3f270be6928a759dfc0318c/diff:/var/lib/docker/overlay2/79f1d61f3af3e525b3f9a25c7bdaddec8275b7d0abab23c045e084119ae755dc/diff:/var/lib/docker/overlay2/90cf1530750ebebdcb04a136dc1eff0ae9c5f7d0f826d2942e3bc67cc0d42206/diff:/var/lib/docker/overlay2/c4edb0a62bb52174a4a17530aa58139aaca1b2f67bef36b9ac59af93c93e2c94/diff:/var/lib/docker/overlay2/684c18bb05910b09352e8708238b2fa6512de91e323e59bd78eacb12954baeb1/diff:/var/lib/docker/overlay2/2e3b94
4d36fea0e106f5f8885b981d040914c224a656b6480e4ccf00c624527c/diff:/var/lib/docker/overlay2/87489103e2d5dffa2fc32cae802418fe57adcbe87d86e2828e51be8620ef72d6/diff:/var/lib/docker/overlay2/1ceb557df677600f55a42e739f85c2aaa6ce2a1311fb93d5cc655d3e5c25ad62/diff:/var/lib/docker/overlay2/a49e38adf04466e822fa1b2fef7a5de41bb43b706f64d6d440f7e9f95f86634d/diff:/var/lib/docker/overlay2/9dc51530b3628075c33d53a96d8db831d206f65190ca62a4730d525c9d2433bc/diff:/var/lib/docker/overlay2/53f43d443cc953bcbb3b9fe305be45d40de2233dd6a1e94a6e0120c143df01b9/diff:/var/lib/docker/overlay2/5449e801ebacbe613538db4de658f2419f742a0327f7c0f5079ee525f64c2f45/diff:/var/lib/docker/overlay2/ea06a62429770eade2e9df8b04495201586fe6a867b2d3f827db06031f237171/diff:/var/lib/docker/overlay2/d6c5e6e0bb04112b08a45e5a42f364eaa6a1d4d0384b8833c6f268a32b51f9fd/diff:/var/lib/docker/overlay2/6ccb699e5aa7954f17536e35273361a64a06f22e3951b5adad1acd3645302d27/diff:/var/lib/docker/overlay2/004fc7885cd3ddf4f532a6047a393b87947996f738f36ee3e048130b82676264/diff:/var/lib/d
ocker/overlay2/28ff0102771d8fb3c2957c482cc5dde1a6d041144f1be78d385fd4a305b89048/diff:/var/lib/docker/overlay2/7cc8b00a8717390d3cf217bc23ebf0a54386c4dda751f8efe67a113c5f724000/diff:/var/lib/docker/overlay2/4f20f5888dc26fc4b177fad4ffbb2c9e5094bbe36dffc15530b13ee98b5ee0de/diff:/var/lib/docker/overlay2/5070ba0e8a3df0c0cb4c34bb528c4e1aa4cf13d689f2bbbb0a4033d021c3be55/diff:/var/lib/docker/overlay2/85fb29f9d3603bf7f2eb144a096a9fa432e57ecf575b0c26cc8e06a79e791202/diff:/var/lib/docker/overlay2/27786674974d44dc21a18719c8c334da44a78f825923f5e86233f5acb0fb9a6b/diff:/var/lib/docker/overlay2/6161fd89f27732c46f547c0e38ff9f16ab0dded81cef66938df3935f21afad40/diff:/var/lib/docker/overlay2/222f85b8439a41d62b9939a4060303543930a89e81bc2fbb3f5211b3bcc73501/diff:/var/lib/docker/overlay2/0ab769dc143c949b9367c48721351c8571d87199847b23acedce0134a8cf4d0d/diff:/var/lib/docker/overlay2/a3d1b5f3c1ccf55415e08e306d69739f2d6a4346a4b20ff9273ac1417c146a72/diff:/var/lib/docker/overlay2/a6111408a4deb25c6259bdabf49b0b77a4ae38a1c9c76c9dd238ab00e43
77edd/diff:/var/lib/docker/overlay2/beb22abaee6a1cc2ce6935aa31a8ea6aa14ba428d35b5c2423d1d9e4b5b1e88e/diff:/var/lib/docker/overlay2/0c228b1d0cd4cc1d3df10a7397af2a91846bfbad745d623822dc200ff7ddd4f7/diff:/var/lib/docker/overlay2/3c7f265116e8b14ac4c171583da8e68ad42c63e9faa735c10d5b01c2889c03d8/diff:/var/lib/docker/overlay2/85c9b77072d5575bcf4bc334d0e85cccec247e2ff21bc8a181ee7c1411481a92/diff:/var/lib/docker/overlay2/1b4797f63f1bf0acad8cd18715c737239cbce844810baf1378b65224c01957ff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/db9786265e9f068e04d70e18087b62c096e075ee52427c1e4a3908dea5608887/merged",
	                "UpperDir": "/var/lib/docker/overlay2/db9786265e9f068e04d70e18087b62c096e075ee52427c1e4a3908dea5608887/diff",
	                "WorkDir": "/var/lib/docker/overlay2/db9786265e9f068e04d70e18087b62c096e075ee52427c1e4a3908dea5608887/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20210310204947-6496",
	                "Source": "/var/lib/docker/volumes/no-preload-20210310204947-6496/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20210310204947-6496",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20210310204947-6496",
	                "name.minikube.sigs.k8s.io": "no-preload-20210310204947-6496",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c6d01674efa3eecf9681de23d3865d233efc3221239cb41b2b4e0f3ba80281f5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55143"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55142"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55139"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55141"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55140"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c6d01674efa3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "8ddbefa5a1b53f48449ea00eb7709ab032429b796d5246894e3cd34e9259cc89",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.7",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:07",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "08a9fabe5ecafdd8ee96dd0a14d1906037e82623ac7365c62552213da5f59cf7",
	                    "EndpointID": "8ddbefa5a1b53f48449ea00eb7709ab032429b796d5246894e3cd34e9259cc89",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.7",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:07",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20210310204947-6496 -n no-preload-20210310204947-6496
helpers_test.go:235: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20210310204947-6496 -n no-preload-20210310204947-6496: exit status 4 (10.5430236s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0310 21:15:30.041580   13200 status.go:396] kubeconfig endpoint: extract IP: "no-preload-20210310204947-6496" does not appear in C:\Users\jenkins/.kube/config

                                                
                                                
** /stderr **
helpers_test.go:235: status error: exit status 4 (may be ok)
helpers_test.go:237: "no-preload-20210310204947-6496" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (1546.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (1106.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-20210310205017-6496 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.20.2

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:155: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p embed-certs-20210310205017-6496 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.20.2: exit status 80 (16m16.7822765s)

                                                
                                                
-- stdout --
	* [embed-certs-20210310205017-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - MINIKUBE_LOCATION=10722
	* Using the docker driver based on user configuration
	* Starting control plane node embed-certs-20210310205017-6496 in cluster embed-certs-20210310205017-6496
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v4
	* Enabled addons: default-storageclass, storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0310 20:50:18.226892   12032 out.go:239] Setting OutFile to fd 1756 ...
	I0310 20:50:18.227882   12032 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 20:50:18.227882   12032 out.go:252] Setting ErrFile to fd 2864...
	I0310 20:50:18.227882   12032 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 20:50:18.248455   12032 out.go:246] Setting JSON to false
	I0310 20:50:18.256081   12032 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":34884,"bootTime":1615374534,"procs":190,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0310 20:50:18.256919   12032 start.go:116] gopshost.Virtualization returned error: not implemented yet
	I0310 20:50:18.267831   12032 out.go:129] * [embed-certs-20210310205017-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0310 20:50:18.271695   12032 out.go:129]   - MINIKUBE_LOCATION=10722
	I0310 20:50:18.280056   12032 driver.go:323] Setting default libvirt URI to qemu:///system
	I0310 20:50:18.960607   12032 docker.go:119] docker version: linux-20.10.2
	I0310 20:50:18.970955   12032 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 20:50:20.875113   12032 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.9040197s)
	I0310 20:50:20.878110   12032 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:151 OomKillDisable:true NGoroutines:186 SystemTime:2021-03-10 20:50:19.6373599 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://i
ndex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:
[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 20:50:20.883727   12032 out.go:129] * Using the docker driver based on user configuration
	I0310 20:50:20.886181   12032 start.go:276] selected driver: docker
	I0310 20:50:20.886181   12032 start.go:718] validating driver "docker" against <nil>
	I0310 20:50:20.886446   12032 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0310 20:50:22.149874   12032 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 20:50:23.340100   12032 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.1902276s)
	I0310 20:50:23.341479   12032 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:185 OomKillDisable:true NGoroutines:253 SystemTime:2021-03-10 20:50:22.792803 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[
] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 20:50:23.341892   12032 start_flags.go:253] no existing cluster config was found, will generate one from the flags 
	I0310 20:50:23.342589   12032 start_flags.go:717] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0310 20:50:23.342814   12032 cni.go:74] Creating CNI manager for ""
	I0310 20:50:23.342814   12032 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	I0310 20:50:23.342814   12032 start_flags.go:398] config:
	{Name:embed-certs-20210310205017-6496 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:embed-certs-20210310205017-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISoc
ket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 20:50:23.352554   12032 out.go:129] * Starting control plane node embed-certs-20210310205017-6496 in cluster embed-certs-20210310205017-6496
	I0310 20:50:24.179749   12032 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	I0310 20:50:24.179749   12032 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	I0310 20:50:24.179749   12032 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	I0310 20:50:24.180703   12032 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	I0310 20:50:24.180703   12032 cache.go:54] Caching tarball of preloaded images
	I0310 20:50:24.181376   12032 preload.go:131] Found C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0310 20:50:24.181376   12032 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on docker
	I0310 20:50:24.182180   12032 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\config.json ...
	I0310 20:50:24.182180   12032 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\config.json: {Name:mk27c21c5e86c69029b5c4cdb865b9df12f997ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:50:24.210672   12032 cache.go:185] Successfully downloaded all kic artifacts
	I0310 20:50:24.211508   12032 start.go:313] acquiring machines lock for embed-certs-20210310205017-6496: {Name:mk5deb5478a17b664131b4c3205eef748b11179e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:50:24.211802   12032 start.go:317] acquired machines lock for "embed-certs-20210310205017-6496" in 294.8µs
	I0310 20:50:24.211802   12032 start.go:89] Provisioning new machine with config: &{Name:embed-certs-20210310205017-6496 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:embed-certs-20210310205017-6496 Namespace:default APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
	I0310 20:50:24.212219   12032 start.go:126] createHost starting for "" (driver="docker")
	I0310 20:50:24.215582   12032 out.go:150] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0310 20:50:24.216631   12032 start.go:160] libmachine.API.Create for "embed-certs-20210310205017-6496" (driver="docker")
	I0310 20:50:24.216967   12032 client.go:168] LocalClient.Create starting
	I0310 20:50:24.216967   12032 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\ca.pem
	I0310 20:50:24.218083   12032 main.go:121] libmachine: Decoding PEM data...
	I0310 20:50:24.218083   12032 main.go:121] libmachine: Parsing certificate...
	I0310 20:50:24.218906   12032 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\cert.pem
	I0310 20:50:24.218906   12032 main.go:121] libmachine: Decoding PEM data...
	I0310 20:50:24.219338   12032 main.go:121] libmachine: Parsing certificate...
	I0310 20:50:24.247056   12032 cli_runner.go:115] Run: docker network inspect embed-certs-20210310205017-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0310 20:50:24.867884   12032 cli_runner.go:162] docker network inspect embed-certs-20210310205017-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0310 20:50:24.878005   12032 network_create.go:240] running [docker network inspect embed-certs-20210310205017-6496] to gather additional debugging logs...
	I0310 20:50:24.878005   12032 cli_runner.go:115] Run: docker network inspect embed-certs-20210310205017-6496
	W0310 20:50:25.620239   12032 cli_runner.go:162] docker network inspect embed-certs-20210310205017-6496 returned with exit code 1
	I0310 20:50:25.620846   12032 network_create.go:243] error running [docker network inspect embed-certs-20210310205017-6496]: docker network inspect embed-certs-20210310205017-6496: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20210310205017-6496
	I0310 20:50:25.620846   12032 network_create.go:245] output of [docker network inspect embed-certs-20210310205017-6496]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20210310205017-6496
	
	** /stderr **
	I0310 20:50:25.639812   12032 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0310 20:50:26.410553   12032 network.go:193] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0310 20:50:26.411257   12032 network_create.go:91] attempt to create network 192.168.49.0/24 with subnet: embed-certs-20210310205017-6496 and gateway 192.168.49.1 and MTU of 1500 ...
	I0310 20:50:26.427732   12032 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20210310205017-6496
	I0310 20:50:27.934968   12032 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20210310205017-6496: (1.5062424s)
	I0310 20:50:27.934968   12032 kic.go:102] calculated static IP "192.168.49.97" for the "embed-certs-20210310205017-6496" container
	I0310 20:50:27.953127   12032 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0310 20:50:28.618538   12032 cli_runner.go:115] Run: docker volume create embed-certs-20210310205017-6496 --label name.minikube.sigs.k8s.io=embed-certs-20210310205017-6496 --label created_by.minikube.sigs.k8s.io=true
	I0310 20:50:29.270088   12032 oci.go:102] Successfully created a docker volume embed-certs-20210310205017-6496
	I0310 20:50:29.284513   12032 cli_runner.go:115] Run: docker run --rm --name embed-certs-20210310205017-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20210310205017-6496 --entrypoint /usr/bin/test -v embed-certs-20210310205017-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib
	I0310 20:50:35.407143   12032 cli_runner.go:168] Completed: docker run --rm --name embed-certs-20210310205017-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20210310205017-6496 --entrypoint /usr/bin/test -v embed-certs-20210310205017-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib: (6.1219994s)
	I0310 20:50:35.407351   12032 oci.go:106] Successfully prepared a docker volume embed-certs-20210310205017-6496
	I0310 20:50:35.407691   12032 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	I0310 20:50:35.408542   12032 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	I0310 20:50:35.408542   12032 kic.go:175] Starting extracting preloaded images to volume ...
	I0310 20:50:35.424597   12032 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20210310205017-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir
	I0310 20:50:35.425603   12032 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	W0310 20:50:36.119288   12032 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20210310205017-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I0310 20:50:36.119288   12032 kic.go:182] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20210310205017-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: System.Exception: The notification platform is unavailable.
	
	The notification platform is unavailable.
	
	   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)
	   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.WPF\PromptShareDirectory.cs:line 53
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__8.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 95
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 55
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.HttpApi\Controllers\FilesharingController.cs:line 21
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()
	CreateToastNotifier
	Windows.UI, Version=255.255.255.255, Culture=neutral, PublicKeyToken=null, ContentType=WindowsRuntime
	Windows.UI.Notifications.ToastNotificationManager
	Windows.UI.Notifications.ToastNotifier CreateToastNotifier(System.String)
	RestrictedDescription
	The notification platform is unavailable.
	See 'docker run --help'.
	I0310 20:50:36.501149   12032 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0753082s)
	I0310 20:50:36.502640   12032 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:198 OomKillDisable:true NGoroutines:216 SystemTime:2021-03-10 20:50:36.0375952 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 20:50:36.519338   12032 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0310 20:50:37.648636   12032 cli_runner.go:168] Completed: docker info --format "'{{json .SecurityOptions}}'": (1.1292997s)
	I0310 20:50:37.673822   12032 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-20210310205017-6496 --name embed-certs-20210310205017-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20210310205017-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-20210310205017-6496 --network embed-certs-20210310205017-6496 --ip 192.168.49.97 --volume embed-certs-20210310205017-6496:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
	I0310 20:50:47.858586   12032 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-20210310205017-6496 --name embed-certs-20210310205017-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20210310205017-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-20210310205017-6496 --network embed-certs-20210310205017-6496 --ip 192.168.49.97 --volume embed-certs-20210310205017-6496:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e: (10.184391s)
	I0310 20:50:47.872906   12032 cli_runner.go:115] Run: docker container inspect embed-certs-20210310205017-6496 --format={{.State.Running}}
	I0310 20:50:48.480326   12032 cli_runner.go:115] Run: docker container inspect embed-certs-20210310205017-6496 --format={{.State.Status}}
	I0310 20:50:49.202566   12032 cli_runner.go:115] Run: docker exec embed-certs-20210310205017-6496 stat /var/lib/dpkg/alternatives/iptables
	I0310 20:50:50.544065   12032 cli_runner.go:168] Completed: docker exec embed-certs-20210310205017-6496 stat /var/lib/dpkg/alternatives/iptables: (1.3415013s)
	I0310 20:50:50.544616   12032 oci.go:278] the created container "embed-certs-20210310205017-6496" has a running status.
	I0310 20:50:50.544616   12032 kic.go:206] Creating ssh key for kic: C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa...
	I0310 20:50:50.780890   12032 kic_runner.go:188] docker (temp): C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0310 20:50:52.655977   12032 cli_runner.go:115] Run: docker container inspect embed-certs-20210310205017-6496 --format={{.State.Status}}
	I0310 20:50:53.320270   12032 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0310 20:50:53.320270   12032 kic_runner.go:115] Args: [docker exec --privileged embed-certs-20210310205017-6496 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0310 20:50:54.549451   12032 kic_runner.go:124] Done: [docker exec --privileged embed-certs-20210310205017-6496 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.2291824s)
	I0310 20:50:54.555933   12032 kic.go:240] ensuring only current user has permissions to key file located at : C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa...
	I0310 20:50:56.615352   12032 cli_runner.go:115] Run: docker container inspect embed-certs-20210310205017-6496 --format={{.State.Status}}
	I0310 20:50:57.961625   12032 cli_runner.go:168] Completed: docker container inspect embed-certs-20210310205017-6496 --format={{.State.Status}}: (1.3462755s)
	I0310 20:50:57.961625   12032 machine.go:88] provisioning docker machine ...
	I0310 20:50:57.961625   12032 ubuntu.go:169] provisioning hostname "embed-certs-20210310205017-6496"
	I0310 20:50:57.979652   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 20:50:58.656728   12032 main.go:121] libmachine: Using SSH client type: native
	I0310 20:50:58.666477   12032 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55148 <nil> <nil>}
	I0310 20:50:58.666477   12032 main.go:121] libmachine: About to run SSH command:
	sudo hostname embed-certs-20210310205017-6496 && echo "embed-certs-20210310205017-6496" | sudo tee /etc/hostname
	I0310 20:50:58.675827   12032 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0310 20:51:03.010822   12032 main.go:121] libmachine: SSH cmd err, output: <nil>: embed-certs-20210310205017-6496
	
	I0310 20:51:03.017202   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 20:51:03.684821   12032 main.go:121] libmachine: Using SSH client type: native
	I0310 20:51:03.685757   12032 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55148 <nil> <nil>}
	I0310 20:51:03.686375   12032 main.go:121] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20210310205017-6496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20210310205017-6496/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20210310205017-6496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0310 20:51:04.744018   12032 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	I0310 20:51:04.744018   12032 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	I0310 20:51:04.744018   12032 ubuntu.go:177] setting up certificates
	I0310 20:51:04.744018   12032 provision.go:83] configureAuth start
	I0310 20:51:05.087824   12032 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210310205017-6496
	I0310 20:51:07.061340   12032 cli_runner.go:168] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210310205017-6496: (1.973212s)
	I0310 20:51:07.061794   12032 provision.go:137] copyHostCerts
	I0310 20:51:07.062323   12032 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	I0310 20:51:07.062323   12032 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	I0310 20:51:07.062851   12032 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	I0310 20:51:07.068607   12032 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	I0310 20:51:07.068607   12032 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	I0310 20:51:07.069316   12032 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	I0310 20:51:07.072333   12032 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	I0310 20:51:07.072333   12032 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	I0310 20:51:07.072333   12032 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	I0310 20:51:07.076341   12032 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.embed-certs-20210310205017-6496 san=[192.168.49.97 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20210310205017-6496]
	I0310 20:51:07.443241   12032 provision.go:165] copyRemoteCerts
	I0310 20:51:07.452743   12032 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0310 20:51:07.460635   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 20:51:08.174126   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 20:51:08.753234   12032 ssh_runner.go:189] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.3004928s)
	I0310 20:51:08.753676   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0310 20:51:09.588879   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1265 bytes)
	I0310 20:51:10.114757   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0310 20:51:10.670031   12032 provision.go:86] duration metric: configureAuth took 5.9260213s
	I0310 20:51:10.670031   12032 ubuntu.go:193] setting minikube options for container-runtime
	I0310 20:51:10.677725   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 20:51:11.438915   12032 main.go:121] libmachine: Using SSH client type: native
	I0310 20:51:11.439295   12032 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55148 <nil> <nil>}
	I0310 20:51:11.439857   12032 main.go:121] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0310 20:51:12.692220   12032 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0310 20:51:12.692220   12032 ubuntu.go:71] root file system type: overlay
	I0310 20:51:12.693079   12032 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	I0310 20:51:12.703106   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 20:51:13.344728   12032 main.go:121] libmachine: Using SSH client type: native
	I0310 20:51:13.345339   12032 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55148 <nil> <nil>}
	I0310 20:51:13.345339   12032 main.go:121] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0310 20:51:14.069057   12032 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0310 20:51:14.076293   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 20:51:14.730254   12032 main.go:121] libmachine: Using SSH client type: native
	I0310 20:51:14.730890   12032 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55148 <nil> <nil>}
	I0310 20:51:14.731022   12032 main.go:121] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0310 20:52:01.854365   12032 main.go:121] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-01-29 14:31:32.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-03-10 20:51:14.020203000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0310 20:52:01.855015   12032 machine.go:91] provisioned docker machine in 1m3.893473s
	I0310 20:52:01.855015   12032 client.go:171] LocalClient.Create took 1m37.6381751s
	I0310 20:52:01.855243   12032 start.go:168] duration metric: libmachine.API.Create for "embed-certs-20210310205017-6496" took 1m37.6387393s
	I0310 20:52:01.855498   12032 start.go:267] post-start starting for "embed-certs-20210310205017-6496" (driver="docker")
	I0310 20:52:01.855498   12032 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0310 20:52:01.873358   12032 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0310 20:52:01.887160   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 20:52:02.628313   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 20:52:03.569003   12032 ssh_runner.go:189] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.694845s)
	I0310 20:52:03.578116   12032 ssh_runner.go:149] Run: cat /etc/os-release
	I0310 20:52:03.675841   12032 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0310 20:52:03.676084   12032 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0310 20:52:03.676084   12032 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0310 20:52:03.676220   12032 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0310 20:52:03.676727   12032 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	I0310 20:52:03.677398   12032 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	I0310 20:52:03.683520   12032 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	I0310 20:52:03.686107   12032 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	I0310 20:52:03.695941   12032 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	I0310 20:52:03.898028   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	I0310 20:52:04.389879   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	I0310 20:52:05.063730   12032 start.go:270] post-start completed in 3.2012731s
	I0310 20:52:05.096956   12032 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210310205017-6496
	I0310 20:52:05.835229   12032 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\config.json ...
	I0310 20:52:05.861798   12032 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0310 20:52:05.870833   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 20:52:06.517418   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 20:52:07.268159   12032 ssh_runner.go:189] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.4063622s)
	I0310 20:52:07.268159   12032 start.go:129] duration metric: createHost completed in 1m43.0560733s
	I0310 20:52:07.268159   12032 start.go:80] releasing machines lock for "embed-certs-20210310205017-6496", held for 1m43.0564904s
	I0310 20:52:07.281102   12032 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210310205017-6496
	I0310 20:52:07.992210   12032 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0310 20:52:07.999232   12032 ssh_runner.go:149] Run: systemctl --version
	I0310 20:52:08.002299   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 20:52:08.007355   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 20:52:08.714111   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 20:52:08.770012   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 20:52:10.253380   12032 ssh_runner.go:189] Completed: systemctl --version: (2.2535407s)
	I0310 20:52:10.253380   12032 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (2.2608904s)
	I0310 20:52:10.263479   12032 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0310 20:52:10.429593   12032 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 20:52:10.707403   12032 cruntime.go:206] skipping containerd shutdown because we are bound to it
	I0310 20:52:10.718145   12032 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0310 20:52:10.958295   12032 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0310 20:52:11.287121   12032 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 20:52:11.536559   12032 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0310 20:52:14.523302   12032 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (2.9867468s)
	I0310 20:52:14.544297   12032 ssh_runner.go:149] Run: sudo systemctl start docker
	I0310 20:52:14.638355   12032 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0310 20:52:16.619237   12032 ssh_runner.go:189] Completed: docker version --format {{.Server.Version}}: (1.9804134s)
	I0310 20:52:16.623733   12032 out.go:150] * Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	I0310 20:52:16.632927   12032 cli_runner.go:115] Run: docker exec -t embed-certs-20210310205017-6496 dig +short host.docker.internal
	I0310 20:52:22.800234   12032 cli_runner.go:168] Completed: docker exec -t embed-certs-20210310205017-6496 dig +short host.docker.internal: (6.1673156s)
	I0310 20:52:22.800891   12032 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	I0310 20:52:22.812307   12032 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0310 20:52:22.849023   12032 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0310 20:52:22.945212   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 20:52:23.633312   12032 localpath.go:92] copying C:\Users\jenkins\.minikube\client.crt -> C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\client.crt
	I0310 20:52:23.638667   12032 localpath.go:117] copying C:\Users\jenkins\.minikube\client.key -> C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\client.key
	I0310 20:52:23.642761   12032 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	I0310 20:52:23.643012   12032 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	I0310 20:52:23.650318   12032 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 20:52:24.117242   12032 docker.go:423] Got preloaded images: 
	I0310 20:52:24.117488   12032 docker.go:429] k8s.gcr.io/kube-proxy:v1.20.2 wasn't preloaded
	I0310 20:52:24.127715   12032 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0310 20:52:24.204389   12032 ssh_runner.go:149] Run: which lz4
	I0310 20:52:24.251042   12032 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0310 20:52:24.332560   12032 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0310 20:52:24.333094   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (515083977 bytes)
	I0310 20:53:39.153021   12032 docker.go:388] Took 74.916184 seconds to copy over tarball
	I0310 20:53:39.164827   12032 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0310 20:55:01.902021   12032 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1m22.7374359s)
	I0310 20:55:01.902021   12032 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0310 20:55:05.441283   12032 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0310 20:55:05.648552   12032 ssh_runner.go:316] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3125 bytes)
	I0310 20:55:06.197579   12032 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0310 20:55:08.267013   12032 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (2.069457s)
	I0310 20:55:08.282304   12032 ssh_runner.go:149] Run: sudo systemctl restart docker
	I0310 20:55:19.480086   12032 ssh_runner.go:189] Completed: sudo systemctl restart docker: (11.1969094s)
	I0310 20:55:19.489543   12032 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 20:55:21.287036   12032 ssh_runner.go:189] Completed: docker images --format {{.Repository}}:{{.Tag}}: (1.7970205s)
	I0310 20:55:21.287036   12032 docker.go:423] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.20.2
	k8s.gcr.io/kube-controller-manager:v1.20.2
	k8s.gcr.io/kube-apiserver:v1.20.2
	k8s.gcr.io/kube-scheduler:v1.20.2
	kubernetesui/dashboard:v2.1.0
	gcr.io/k8s-minikube/storage-provisioner:v4
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0310 20:55:21.287036   12032 cache_images.go:73] Images are preloaded, skipping loading
	I0310 20:55:21.297542   12032 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0310 20:55:24.250373   12032 ssh_runner.go:189] Completed: docker info --format {{.CgroupDriver}}: (2.9528628s)
	I0310 20:55:24.250683   12032 cni.go:74] Creating CNI manager for ""
	I0310 20:55:24.250915   12032 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	I0310 20:55:24.250915   12032 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0310 20:55:24.250915   12032 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.97 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20210310205017-6496 NodeName:embed-certs-20210310205017-6496 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0310 20:55:24.251904   12032 kubeadm.go:154] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.97
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "embed-certs-20210310205017-6496"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	
	I0310 20:55:24.252484   12032 kubeadm.go:919] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=embed-certs-20210310205017-6496 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.2 ClusterName:embed-certs-20210310205017-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0310 20:55:24.274778   12032 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
	I0310 20:55:24.450586   12032 binaries.go:44] Found k8s binaries, skipping transfer
	I0310 20:55:24.461291   12032 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0310 20:55:24.642941   12032 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (358 bytes)
	I0310 20:55:24.909282   12032 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0310 20:55:25.191612   12032 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1866 bytes)
	I0310 20:55:25.519992   12032 ssh_runner.go:149] Run: grep 192.168.49.97	control-plane.minikube.internal$ /etc/hosts
	I0310 20:55:25.571956   12032 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "192.168.49.97	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0310 20:55:25.698239   12032 certs.go:52] Setting up C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496 for IP: 192.168.49.97
	I0310 20:55:25.699064   12032 certs.go:171] skipping minikubeCA CA generation: C:\Users\jenkins\.minikube\ca.key
	I0310 20:55:25.699508   12032 certs.go:171] skipping proxyClientCA CA generation: C:\Users\jenkins\.minikube\proxy-client-ca.key
	I0310 20:55:25.700454   12032 certs.go:275] skipping minikube-user signed cert generation: C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\client.key
	I0310 20:55:25.700454   12032 certs.go:279] generating minikube signed cert: C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\apiserver.key.b6188fac
	I0310 20:55:25.700454   12032 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\apiserver.crt.b6188fac with IP's: [192.168.49.97 10.96.0.1 127.0.0.1 10.0.0.1]
	I0310 20:55:25.924574   12032 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\apiserver.crt.b6188fac ...
	I0310 20:55:25.924574   12032 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\apiserver.crt.b6188fac: {Name:mk10e6313ebd5ad1f95928b7d88470fb99345d69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:55:25.931551   12032 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\apiserver.key.b6188fac ...
	I0310 20:55:25.931551   12032 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\apiserver.key.b6188fac: {Name:mka91419f9baf9e0146e1be1e5db7dbf687c592a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:55:25.954248   12032 certs.go:290] copying C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\apiserver.crt.b6188fac -> C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\apiserver.crt
	I0310 20:55:25.958762   12032 certs.go:294] copying C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\apiserver.key.b6188fac -> C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\apiserver.key
	I0310 20:55:25.962056   12032 certs.go:279] generating aggregator signed cert: C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\proxy-client.key
	I0310 20:55:25.962056   12032 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\proxy-client.crt with IP's: []
	I0310 20:55:26.082984   12032 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\proxy-client.crt ...
	I0310 20:55:26.082984   12032 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\proxy-client.crt: {Name:mk3c838a9a18f87d9a072dfa5a8440ac30b5bfa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:55:26.102509   12032 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\proxy-client.key ...
	I0310 20:55:26.102509   12032 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\proxy-client.key: {Name:mk60557b818ed926f683e62859ba000554fb73bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:55:26.115522   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992.pem (1338 bytes)
	W0310 20:55:26.115522   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.116512   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140.pem (1338 bytes)
	W0310 20:55:26.116512   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.116512   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156.pem (1338 bytes)
	W0310 20:55:26.116512   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.117509   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056.pem (1338 bytes)
	W0310 20:55:26.117509   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.117509   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476.pem (1338 bytes)
	W0310 20:55:26.117509   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.118498   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728.pem (1338 bytes)
	W0310 20:55:26.118498   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.118498   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984.pem (1338 bytes)
	W0310 20:55:26.118498   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.118498   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232.pem (1338 bytes)
	W0310 20:55:26.119524   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.119524   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512.pem (1338 bytes)
	W0310 20:55:26.119524   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.120523   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056.pem (1338 bytes)
	W0310 20:55:26.120788   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.120788   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516.pem (1338 bytes)
	W0310 20:55:26.121527   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.121527   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352.pem (1338 bytes)
	W0310 20:55:26.122511   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.122511   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920.pem (1338 bytes)
	W0310 20:55:26.122511   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.122511   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052.pem (1338 bytes)
	W0310 20:55:26.123532   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.123532   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452.pem (1338 bytes)
	W0310 20:55:26.124510   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.124510   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588.pem (1338 bytes)
	W0310 20:55:26.124510   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.124510   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944.pem (1338 bytes)
	W0310 20:55:26.125504   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.125504   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040.pem (1338 bytes)
	W0310 20:55:26.126579   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.126579   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172.pem (1338 bytes)
	W0310 20:55:26.126579   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.126579   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372.pem (1338 bytes)
	W0310 20:55:26.127541   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.127541   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396.pem (1338 bytes)
	W0310 20:55:26.127541   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.128586   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700.pem (1338 bytes)
	W0310 20:55:26.128586   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.128586   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736.pem (1338 bytes)
	W0310 20:55:26.129505   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.129505   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368.pem (1338 bytes)
	W0310 20:55:26.130503   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.130503   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492.pem (1338 bytes)
	W0310 20:55:26.130503   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.130503   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496.pem (1338 bytes)
	W0310 20:55:26.131505   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.131505   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552.pem (1338 bytes)
	W0310 20:55:26.131505   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.131505   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692.pem (1338 bytes)
	W0310 20:55:26.132501   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.132501   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856.pem (1338 bytes)
	W0310 20:55:26.132501   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.132501   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024.pem (1338 bytes)
	W0310 20:55:26.132501   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.133521   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160.pem (1338 bytes)
	W0310 20:55:26.133521   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.133521   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432.pem (1338 bytes)
	W0310 20:55:26.133521   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.134536   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440.pem (1338 bytes)
	W0310 20:55:26.134536   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.134536   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452.pem (1338 bytes)
	W0310 20:55:26.134536   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.134536   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800.pem (1338 bytes)
	W0310 20:55:26.135521   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.135521   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464.pem (1338 bytes)
	W0310 20:55:26.135521   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.135521   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748.pem (1338 bytes)
	W0310 20:55:26.135521   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.136519   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088.pem (1338 bytes)
	W0310 20:55:26.136519   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.136519   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520.pem (1338 bytes)
	W0310 20:55:26.137536   12032 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520_empty.pem, impossibly tiny 0 bytes
	I0310 20:55:26.137536   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca-key.pem (1679 bytes)
	I0310 20:55:26.137536   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca.pem (1078 bytes)
	I0310 20:55:26.137536   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\cert.pem (1123 bytes)
	I0310 20:55:26.138500   12032 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\key.pem (1679 bytes)
	I0310 20:55:26.146926   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0310 20:55:26.661034   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0310 20:55:27.274808   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0310 20:55:27.900105   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0310 20:55:28.552556   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0310 20:55:29.172852   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0310 20:55:30.215659   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0310 20:55:30.977363   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0310 20:55:31.631214   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5396.pem --> /usr/share/ca-certificates/5396.pem (1338 bytes)
	I0310 20:55:32.131662   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\12056.pem --> /usr/share/ca-certificates/12056.pem (1338 bytes)
	I0310 20:55:32.900937   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6496.pem --> /usr/share/ca-certificates/6496.pem (1338 bytes)
	I0310 20:55:33.498315   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6552.pem --> /usr/share/ca-certificates/6552.pem (1338 bytes)
	I0310 20:55:33.866932   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\2512.pem --> /usr/share/ca-certificates/2512.pem (1338 bytes)
	I0310 20:55:34.179524   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4452.pem --> /usr/share/ca-certificates/4452.pem (1338 bytes)
	I0310 20:55:34.609274   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4588.pem --> /usr/share/ca-certificates/4588.pem (1338 bytes)
	I0310 20:55:35.183223   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6492.pem --> /usr/share/ca-certificates/6492.pem (1338 bytes)
	I0310 20:55:35.948730   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1140.pem --> /usr/share/ca-certificates/1140.pem (1338 bytes)
	I0310 20:55:36.873370   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3920.pem --> /usr/share/ca-certificates/3920.pem (1338 bytes)
	I0310 20:55:37.239065   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1156.pem --> /usr/share/ca-certificates/1156.pem (1338 bytes)
	I0310 20:55:37.756523   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5736.pem --> /usr/share/ca-certificates/5736.pem (1338 bytes)
	I0310 20:55:38.321453   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7432.pem --> /usr/share/ca-certificates/7432.pem (1338 bytes)
	I0310 20:55:39.008564   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1728.pem --> /usr/share/ca-certificates/1728.pem (1338 bytes)
	I0310 20:55:39.487689   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5040.pem --> /usr/share/ca-certificates/5040.pem (1338 bytes)
	I0310 20:55:39.889493   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4052.pem --> /usr/share/ca-certificates/4052.pem (1338 bytes)
	I0310 20:55:40.443317   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6692.pem --> /usr/share/ca-certificates/6692.pem (1338 bytes)
	I0310 20:55:41.025804   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9088.pem --> /usr/share/ca-certificates/9088.pem (1338 bytes)
	I0310 20:55:41.444447   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	I0310 20:55:41.980966   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8464.pem --> /usr/share/ca-certificates/8464.pem (1338 bytes)
	I0310 20:55:42.454162   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3516.pem --> /usr/share/ca-certificates/3516.pem (1338 bytes)
	I0310 20:55:43.188978   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\352.pem --> /usr/share/ca-certificates/352.pem (1338 bytes)
	I0310 20:55:43.944466   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\232.pem --> /usr/share/ca-certificates/232.pem (1338 bytes)
	I0310 20:55:44.447902   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1984.pem --> /usr/share/ca-certificates/1984.pem (1338 bytes)
	I0310 20:55:44.961803   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6856.pem --> /usr/share/ca-certificates/6856.pem (1338 bytes)
	I0310 20:55:45.401943   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7160.pem --> /usr/share/ca-certificates/7160.pem (1338 bytes)
	I0310 20:55:45.917402   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7024.pem --> /usr/share/ca-certificates/7024.pem (1338 bytes)
	I0310 20:55:46.537540   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\800.pem --> /usr/share/ca-certificates/800.pem (1338 bytes)
	I0310 20:55:46.915603   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4944.pem --> /usr/share/ca-certificates/4944.pem (1338 bytes)
	I0310 20:55:47.360662   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7440.pem --> /usr/share/ca-certificates/7440.pem (1338 bytes)
	I0310 20:55:47.967343   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5700.pem --> /usr/share/ca-certificates/5700.pem (1338 bytes)
	I0310 20:55:48.440718   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3056.pem --> /usr/share/ca-certificates/3056.pem (1338 bytes)
	I0310 20:55:48.669364   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9520.pem --> /usr/share/ca-certificates/9520.pem (1338 bytes)
	I0310 20:55:49.031860   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6368.pem --> /usr/share/ca-certificates/6368.pem (1338 bytes)
	I0310 20:55:49.496183   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8748.pem --> /usr/share/ca-certificates/8748.pem (1338 bytes)
	I0310 20:55:49.835894   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1476.pem --> /usr/share/ca-certificates/1476.pem (1338 bytes)
	I0310 20:55:50.166393   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5372.pem --> /usr/share/ca-certificates/5372.pem (1338 bytes)
	I0310 20:55:50.598675   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0310 20:55:50.917690   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5172.pem --> /usr/share/ca-certificates/5172.pem (1338 bytes)
	I0310 20:55:51.275899   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\10992.pem --> /usr/share/ca-certificates/10992.pem (1338 bytes)
	I0310 20:55:51.553273   12032 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0310 20:55:51.674212   12032 ssh_runner.go:149] Run: openssl version
	I0310 20:55:51.748034   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6692.pem && ln -fs /usr/share/ca-certificates/6692.pem /etc/ssl/certs/6692.pem"
	I0310 20:55:51.845291   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6692.pem
	I0310 20:55:51.907419   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 20:42 /usr/share/ca-certificates/6692.pem
	I0310 20:55:51.916844   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6692.pem
	I0310 20:55:51.977017   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6692.pem /etc/ssl/certs/51391683.0"
	I0310 20:55:52.125988   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3920.pem && ln -fs /usr/share/ca-certificates/3920.pem /etc/ssl/certs/3920.pem"
	I0310 20:55:52.285002   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3920.pem
	I0310 20:55:52.338786   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 22:06 /usr/share/ca-certificates/3920.pem
	I0310 20:55:52.350218   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3920.pem
	I0310 20:55:52.488331   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3920.pem /etc/ssl/certs/51391683.0"
	I0310 20:55:52.582213   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1156.pem && ln -fs /usr/share/ca-certificates/1156.pem /etc/ssl/certs/1156.pem"
	I0310 20:55:52.683810   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1156.pem
	I0310 20:55:52.734920   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 00:26 /usr/share/ca-certificates/1156.pem
	I0310 20:55:52.738580   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1156.pem
	I0310 20:55:52.801603   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1156.pem /etc/ssl/certs/51391683.0"
	I0310 20:55:52.954160   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5736.pem && ln -fs /usr/share/ca-certificates/5736.pem /etc/ssl/certs/5736.pem"
	I0310 20:55:53.070792   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5736.pem
	I0310 20:55:53.111021   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 25 23:18 /usr/share/ca-certificates/5736.pem
	I0310 20:55:53.132866   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5736.pem
	I0310 20:55:53.192080   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5736.pem /etc/ssl/certs/51391683.0"
	I0310 20:55:53.267225   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7432.pem && ln -fs /usr/share/ca-certificates/7432.pem /etc/ssl/certs/7432.pem"
	I0310 20:55:53.376498   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7432.pem
	I0310 20:55:53.420124   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 17:58 /usr/share/ca-certificates/7432.pem
	I0310 20:55:53.430979   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7432.pem
	I0310 20:55:53.493197   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7432.pem /etc/ssl/certs/51391683.0"
	I0310 20:55:53.599988   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1728.pem && ln -fs /usr/share/ca-certificates/1728.pem /etc/ssl/certs/1728.pem"
	I0310 20:55:53.745130   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1728.pem
	I0310 20:55:53.817136   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 20:57 /usr/share/ca-certificates/1728.pem
	I0310 20:55:53.830880   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1728.pem
	I0310 20:55:53.902576   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0"
	I0310 20:55:54.083463   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5040.pem && ln -fs /usr/share/ca-certificates/5040.pem /etc/ssl/certs/5040.pem"
	I0310 20:55:54.219724   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5040.pem
	I0310 20:55:54.265621   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 08:36 /usr/share/ca-certificates/5040.pem
	I0310 20:55:54.266886   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5040.pem
	I0310 20:55:54.381128   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5040.pem /etc/ssl/certs/51391683.0"
	I0310 20:55:54.501174   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4052.pem && ln -fs /usr/share/ca-certificates/4052.pem /etc/ssl/certs/4052.pem"
	I0310 20:55:54.690740   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4052.pem
	I0310 20:55:54.750387   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 18:40 /usr/share/ca-certificates/4052.pem
	I0310 20:55:54.760820   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4052.pem
	I0310 20:55:54.881852   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4052.pem /etc/ssl/certs/51391683.0"
	I0310 20:55:54.970997   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/232.pem && ln -fs /usr/share/ca-certificates/232.pem /etc/ssl/certs/232.pem"
	I0310 20:55:55.055407   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/232.pem
	I0310 20:55:55.094941   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 28 02:13 /usr/share/ca-certificates/232.pem
	I0310 20:55:55.101511   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/232.pem
	I0310 20:55:55.166368   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/232.pem /etc/ssl/certs/51391683.0"
	I0310 20:55:55.248697   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9088.pem && ln -fs /usr/share/ca-certificates/9088.pem /etc/ssl/certs/9088.pem"
	I0310 20:55:55.337750   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9088.pem
	I0310 20:55:55.408900   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 00:22 /usr/share/ca-certificates/9088.pem
	I0310 20:55:55.436703   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9088.pem
	I0310 20:55:55.570854   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9088.pem /etc/ssl/certs/51391683.0"
	I0310 20:55:55.651606   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7452.pem && ln -fs /usr/share/ca-certificates/7452.pem /etc/ssl/certs/7452.pem"
	I0310 20:55:55.720675   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7452.pem
	I0310 20:55:55.757508   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 20 00:41 /usr/share/ca-certificates/7452.pem
	I0310 20:55:55.770102   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7452.pem
	I0310 20:55:55.842397   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7452.pem /etc/ssl/certs/51391683.0"
	I0310 20:55:55.914166   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8464.pem && ln -fs /usr/share/ca-certificates/8464.pem /etc/ssl/certs/8464.pem"
	I0310 20:55:55.979664   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8464.pem
	I0310 20:55:56.011711   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:32 /usr/share/ca-certificates/8464.pem
	I0310 20:55:56.021977   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8464.pem
	I0310 20:55:56.106763   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8464.pem /etc/ssl/certs/51391683.0"
	I0310 20:55:56.188707   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3516.pem && ln -fs /usr/share/ca-certificates/3516.pem /etc/ssl/certs/3516.pem"
	I0310 20:55:56.373306   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3516.pem
	I0310 20:55:56.419826   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 19:10 /usr/share/ca-certificates/3516.pem
	I0310 20:55:56.430819   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3516.pem
	I0310 20:55:56.516680   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3516.pem /etc/ssl/certs/51391683.0"
	I0310 20:55:56.600187   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/352.pem && ln -fs /usr/share/ca-certificates/352.pem /etc/ssl/certs/352.pem"
	I0310 20:55:56.674407   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/352.pem
	I0310 20:55:56.701358   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 12 14:51 /usr/share/ca-certificates/352.pem
	I0310 20:55:56.723056   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/352.pem
	I0310 20:55:56.848423   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/352.pem /etc/ssl/certs/51391683.0"
	I0310 20:55:56.947638   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1984.pem && ln -fs /usr/share/ca-certificates/1984.pem /etc/ssl/certs/1984.pem"
	I0310 20:55:57.010201   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1984.pem
	I0310 20:55:57.041047   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 21:55 /usr/share/ca-certificates/1984.pem
	I0310 20:55:57.050896   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1984.pem
	I0310 20:55:57.112390   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1984.pem /etc/ssl/certs/51391683.0"
	I0310 20:55:57.184503   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6856.pem && ln -fs /usr/share/ca-certificates/6856.pem /etc/ssl/certs/6856.pem"
	I0310 20:55:57.263107   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6856.pem
	I0310 20:55:57.303607   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 00:21 /usr/share/ca-certificates/6856.pem
	I0310 20:55:57.325642   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6856.pem
	I0310 20:55:57.371890   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6856.pem /etc/ssl/certs/51391683.0"
	I0310 20:55:57.450243   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7160.pem && ln -fs /usr/share/ca-certificates/7160.pem /etc/ssl/certs/7160.pem"
	I0310 20:55:57.570620   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7160.pem
	I0310 20:55:57.608546   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 12 04:51 /usr/share/ca-certificates/7160.pem
	I0310 20:55:57.629653   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7160.pem
	I0310 20:55:57.669985   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7160.pem /etc/ssl/certs/51391683.0"
	I0310 20:55:57.758975   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7024.pem && ln -fs /usr/share/ca-certificates/7024.pem /etc/ssl/certs/7024.pem"
	I0310 20:55:57.866960   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7024.pem
	I0310 20:55:57.888262   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 23:11 /usr/share/ca-certificates/7024.pem
	I0310 20:55:57.897896   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7024.pem
	I0310 20:55:57.953136   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7024.pem /etc/ssl/certs/51391683.0"
	I0310 20:55:58.128928   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/800.pem && ln -fs /usr/share/ca-certificates/800.pem /etc/ssl/certs/800.pem"
	I0310 20:55:58.195989   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/800.pem
	I0310 20:55:58.248921   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 24 01:48 /usr/share/ca-certificates/800.pem
	I0310 20:55:58.254899   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/800.pem
	I0310 20:55:58.301066   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/800.pem /etc/ssl/certs/51391683.0"
	I0310 20:55:58.403086   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4944.pem && ln -fs /usr/share/ca-certificates/4944.pem /etc/ssl/certs/4944.pem"
	I0310 20:55:58.464358   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4944.pem
	I0310 20:55:58.499778   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  9 23:40 /usr/share/ca-certificates/4944.pem
	I0310 20:55:58.510064   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4944.pem
	I0310 20:55:58.599953   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4944.pem /etc/ssl/certs/51391683.0"
	I0310 20:55:58.723849   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7440.pem && ln -fs /usr/share/ca-certificates/7440.pem /etc/ssl/certs/7440.pem"
	I0310 20:55:58.842093   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7440.pem
	I0310 20:55:58.875505   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 13 14:39 /usr/share/ca-certificates/7440.pem
	I0310 20:55:58.886920   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7440.pem
	I0310 20:55:58.955324   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7440.pem /etc/ssl/certs/51391683.0"
	I0310 20:55:59.074501   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5700.pem && ln -fs /usr/share/ca-certificates/5700.pem /etc/ssl/certs/5700.pem"
	I0310 20:55:59.144971   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5700.pem
	I0310 20:55:59.192975   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  1 19:58 /usr/share/ca-certificates/5700.pem
	I0310 20:55:59.203589   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5700.pem
	I0310 20:55:59.258090   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5700.pem /etc/ssl/certs/51391683.0"
	I0310 20:55:59.338255   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3056.pem && ln -fs /usr/share/ca-certificates/3056.pem /etc/ssl/certs/3056.pem"
	I0310 20:55:59.493086   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3056.pem
	I0310 20:55:59.525225   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:11 /usr/share/ca-certificates/3056.pem
	I0310 20:55:59.536779   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3056.pem
	I0310 20:55:59.597256   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3056.pem /etc/ssl/certs/51391683.0"
	I0310 20:55:59.659966   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9520.pem && ln -fs /usr/share/ca-certificates/9520.pem /etc/ssl/certs/9520.pem"
	I0310 20:55:59.752715   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9520.pem
	I0310 20:55:59.803247   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 14:54 /usr/share/ca-certificates/9520.pem
	I0310 20:55:59.819341   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9520.pem
	I0310 20:55:59.902010   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9520.pem /etc/ssl/certs/51391683.0"
	I0310 20:56:00.029598   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6368.pem && ln -fs /usr/share/ca-certificates/6368.pem /etc/ssl/certs/6368.pem"
	I0310 20:56:00.217343   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6368.pem
	I0310 20:56:00.264279   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 19:11 /usr/share/ca-certificates/6368.pem
	I0310 20:56:00.276602   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6368.pem
	I0310 20:56:00.481032   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6368.pem /etc/ssl/certs/51391683.0"
	I0310 20:56:00.588434   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8748.pem && ln -fs /usr/share/ca-certificates/8748.pem /etc/ssl/certs/8748.pem"
	I0310 20:56:00.695555   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8748.pem
	I0310 20:56:00.744111   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 19:09 /usr/share/ca-certificates/8748.pem
	I0310 20:56:00.744111   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8748.pem
	I0310 20:56:00.819884   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8748.pem /etc/ssl/certs/51391683.0"
	I0310 20:56:00.920197   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1476.pem && ln -fs /usr/share/ca-certificates/1476.pem /etc/ssl/certs/1476.pem"
	I0310 20:56:01.039045   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1476.pem
	I0310 20:56:01.150391   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:50 /usr/share/ca-certificates/1476.pem
	I0310 20:56:01.161830   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1476.pem
	I0310 20:56:01.230618   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1476.pem /etc/ssl/certs/51391683.0"
	I0310 20:56:01.354190   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5372.pem && ln -fs /usr/share/ca-certificates/5372.pem /etc/ssl/certs/5372.pem"
	I0310 20:56:01.459785   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5372.pem
	I0310 20:56:01.585075   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 23 00:40 /usr/share/ca-certificates/5372.pem
	I0310 20:56:01.605961   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5372.pem
	I0310 20:56:01.686935   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5372.pem /etc/ssl/certs/51391683.0"
	I0310 20:56:01.858788   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0310 20:56:02.117610   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0310 20:56:02.154327   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Jan  5 23:33 /usr/share/ca-certificates/minikubeCA.pem
	I0310 20:56:02.162794   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0310 20:56:02.236642   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0310 20:56:02.377307   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5172.pem && ln -fs /usr/share/ca-certificates/5172.pem /etc/ssl/certs/5172.pem"
	I0310 20:56:02.562301   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5172.pem
	I0310 20:56:02.607871   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 26 21:25 /usr/share/ca-certificates/5172.pem
	I0310 20:56:02.629681   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5172.pem
	I0310 20:56:02.776563   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5172.pem /etc/ssl/certs/51391683.0"
	I0310 20:56:03.005969   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10992.pem && ln -fs /usr/share/ca-certificates/10992.pem /etc/ssl/certs/10992.pem"
	I0310 20:56:03.267549   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/10992.pem
	I0310 20:56:03.355388   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 21:44 /usr/share/ca-certificates/10992.pem
	I0310 20:56:03.367567   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10992.pem
	I0310 20:56:03.490867   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10992.pem /etc/ssl/certs/51391683.0"
	I0310 20:56:03.769100   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5396.pem && ln -fs /usr/share/ca-certificates/5396.pem /etc/ssl/certs/5396.pem"
	I0310 20:56:03.928418   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5396.pem
	I0310 20:56:04.009445   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  8 23:38 /usr/share/ca-certificates/5396.pem
	I0310 20:56:04.031648   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5396.pem
	I0310 20:56:04.094051   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5396.pem /etc/ssl/certs/51391683.0"
	I0310 20:56:04.267391   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12056.pem && ln -fs /usr/share/ca-certificates/12056.pem /etc/ssl/certs/12056.pem"
	I0310 20:56:04.515432   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/12056.pem
	I0310 20:56:04.565861   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  6 07:21 /usr/share/ca-certificates/12056.pem
	I0310 20:56:04.584359   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12056.pem
	I0310 20:56:04.677738   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12056.pem /etc/ssl/certs/51391683.0"
	I0310 20:56:04.931424   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6496.pem && ln -fs /usr/share/ca-certificates/6496.pem /etc/ssl/certs/6496.pem"
	I0310 20:56:05.133018   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6496.pem
	I0310 20:56:05.177828   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 19:16 /usr/share/ca-certificates/6496.pem
	I0310 20:56:05.188618   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6496.pem
	I0310 20:56:05.326910   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6496.pem /etc/ssl/certs/51391683.0"
	I0310 20:56:05.443355   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6552.pem && ln -fs /usr/share/ca-certificates/6552.pem /etc/ssl/certs/6552.pem"
	I0310 20:56:05.661852   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6552.pem
	I0310 20:56:05.727863   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 19 22:08 /usr/share/ca-certificates/6552.pem
	I0310 20:56:05.737326   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6552.pem
	I0310 20:56:05.845083   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6552.pem /etc/ssl/certs/51391683.0"
	I0310 20:56:05.962869   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2512.pem && ln -fs /usr/share/ca-certificates/2512.pem /etc/ssl/certs/2512.pem"
	I0310 20:56:06.218343   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2512.pem
	I0310 20:56:06.295476   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:32 /usr/share/ca-certificates/2512.pem
	I0310 20:56:06.308145   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2512.pem
	I0310 20:56:06.397320   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2512.pem /etc/ssl/certs/51391683.0"
	I0310 20:56:06.820292   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4452.pem && ln -fs /usr/share/ca-certificates/4452.pem /etc/ssl/certs/4452.pem"
	I0310 20:56:06.998058   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4452.pem
	I0310 20:56:07.067971   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 22 23:48 /usr/share/ca-certificates/4452.pem
	I0310 20:56:07.077500   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4452.pem
	I0310 20:56:07.220754   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4452.pem /etc/ssl/certs/51391683.0"
	I0310 20:56:07.368465   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4588.pem && ln -fs /usr/share/ca-certificates/4588.pem /etc/ssl/certs/4588.pem"
	I0310 20:56:07.553442   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4588.pem
	I0310 20:56:07.605171   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  3 21:41 /usr/share/ca-certificates/4588.pem
	I0310 20:56:07.614306   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4588.pem
	I0310 20:56:07.705654   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4588.pem /etc/ssl/certs/51391683.0"
	I0310 20:56:07.955180   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6492.pem && ln -fs /usr/share/ca-certificates/6492.pem /etc/ssl/certs/6492.pem"
	I0310 20:56:08.135257   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6492.pem
	I0310 20:56:08.193427   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 01:11 /usr/share/ca-certificates/6492.pem
	I0310 20:56:08.204908   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6492.pem
	I0310 20:56:08.354199   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6492.pem /etc/ssl/certs/51391683.0"
	I0310 20:56:08.633941   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1140.pem && ln -fs /usr/share/ca-certificates/1140.pem /etc/ssl/certs/1140.pem"
	I0310 20:56:08.843096   12032 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1140.pem
	I0310 20:56:08.892232   12032 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 02:25 /usr/share/ca-certificates/1140.pem
	I0310 20:56:08.906048   12032 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1140.pem
	I0310 20:56:09.121097   12032 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1140.pem /etc/ssl/certs/51391683.0"
	I0310 20:56:09.332917   12032 kubeadm.go:385] StartCluster: {Name:embed-certs-20210310205017-6496 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:embed-certs-20210310205017-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.97 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 20:56:09.350593   12032 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0310 20:56:10.402787   12032 ssh_runner.go:189] Completed: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}: (1.0522039s)
	I0310 20:56:10.414368   12032 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0310 20:56:10.740161   12032 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0310 20:56:10.973832   12032 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	I0310 20:56:10.987737   12032 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0310 20:56:11.107634   12032 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0310 20:56:11.108027   12032 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0310 21:00:55.615068   12032 out.go:150]   - Generating certificates and keys ...
	I0310 21:00:55.621679   12032 out.go:150]   - Booting up control plane ...
	I0310 21:00:55.628017   12032 out.go:150]   - Configuring RBAC rules ...
	I0310 21:00:55.636829   12032 cni.go:74] Creating CNI manager for ""
	I0310 21:00:55.637909   12032 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	I0310 21:00:55.637909   12032 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0310 21:00:55.651013   12032 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0310 21:00:55.652432   12032 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.18.1 minikube.k8s.io/commit=4d52d607da107e3e4541e40b81376520ee87d4c2 minikube.k8s.io/name=embed-certs-20210310205017-6496 minikube.k8s.io/updated_at=2021_03_10T21_00_55_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0310 21:00:57.642783   12032 ssh_runner.go:189] Completed: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": (2.0048818s)
	I0310 21:00:57.643813   12032 ops.go:34] apiserver oom_adj: -16
	I0310 21:01:09.161344   12032 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.18.1 minikube.k8s.io/commit=4d52d607da107e3e4541e40b81376520ee87d4c2 minikube.k8s.io/name=embed-certs-20210310205017-6496 minikube.k8s.io/updated_at=2021_03_10T21_00_55_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (13.5089625s)
	I0310 21:01:09.161344   12032 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (13.510143s)
	I0310 21:01:09.176436   12032 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0310 21:01:13.343299   12032 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (4.1666405s)
	I0310 21:01:13.853113   12032 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0310 21:01:17.611552   12032 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (3.7572657s)
	I0310 21:01:17.861135   12032 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0310 21:01:21.951784   12032 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (4.0896113s)
	I0310 21:01:22.352495   12032 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0310 21:01:24.642593   12032 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (2.2901055s)
	I0310 21:01:24.858026   12032 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0310 21:01:28.960600   12032 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (4.1025878s)
	I0310 21:01:29.360190   12032 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0310 21:01:37.549973   12032 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (8.1898118s)
	I0310 21:01:37.855377   12032 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0310 21:01:48.700761   12032 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (10.8451434s)
	I0310 21:01:48.857570   12032 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0310 21:02:02.162100   12032 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (13.3039991s)
	I0310 21:02:02.356288   12032 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0310 21:02:17.844403   12032 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (15.4881644s)
	I0310 21:02:17.854740   12032 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0310 21:02:25.327313   12032 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (7.4725962s)
	I0310 21:02:25.328124   12032 kubeadm.go:995] duration metric: took 1m29.6901671s to wait for elevateKubeSystemPrivileges.
	E0310 21:02:25.328281   12032 kubeadm.go:270] unable to create cluster role binding, some addons might not work: ensure sa was created: timed out waiting for the condition
	I0310 21:02:25.328450   12032 kubeadm.go:387] StartCluster complete in 6m15.9977591s
	I0310 21:02:25.328629   12032 settings.go:142] acquiring lock: {Name:mk153ab5d002fd4991700e22f3eda9a43ee295f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:02:25.329521   12032 settings.go:150] Updating kubeconfig:  C:\Users\jenkins/.kube/config
	I0310 21:02:25.338633   12032 lock.go:36] WriteFile acquiring C:\Users\jenkins/.kube/config: {Name:mk815881434fcb75cb1a8bc7f2c24cf067660bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:02:26.247269   12032 kapi.go:233] deployment "coredns" in namespace "kube-system" and context "embed-certs-20210310205017-6496" rescaled to 1
	I0310 21:02:26.250402   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210105233232-2512 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512
	I0310 21:02:26.250402   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210213143925-7440 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440
	I0310 21:02:26.250633   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210220004129-7452 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452
	I0310 21:02:26.250633   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210301195830-5700 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700
	I0310 21:02:26.250633   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120231122-7024 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024
	I0310 21:02:26.251096   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210212145109-352 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352
	I0310 21:02:26.251096   12032 start.go:203] Will wait 6m0s for node up to 
	I0310 21:02:26.251733   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107190945-8748 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748
	I0310 21:02:26.251869   12032 addons.go:381] enableAddons start: toEnable=map[], additional=[]
	I0310 21:02:26.252857   12032 addons.go:58] Setting storage-provisioner=true in profile "embed-certs-20210310205017-6496"
	I0310 21:02:26.251869   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304184021-4052 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052
	I0310 21:02:26.252857   12032 addons.go:134] Setting addon storage-provisioner=true in "embed-certs-20210310205017-6496"
	W0310 21:02:26.252857   12032 addons.go:143] addon storage-provisioner should already be in state true
	I0310 21:02:26.251869   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219145454-9520 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520
	I0310 21:02:26.252857   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210225231842-5736 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736
	I0310 21:02:26.253396   12032 addons.go:58] Setting default-storageclass=true in profile "embed-certs-20210310205017-6496"
	I0310 21:02:26.253396   12032 host.go:66] Checking if "embed-certs-20210310205017-6496" exists ...
	I0310 21:02:26.253396   12032 addons.go:284] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20210310205017-6496"
	I0310 21:02:26.253396   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310083645-5040 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040
	I0310 21:02:26.254165   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120022529-1140 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140
	I0310 21:02:26.255311   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210128021318-232 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232
	I0310 21:02:26.255311   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210123004019-5372 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372
	I0310 21:02:26.257223   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210112045103-7160 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160
	I0310 21:02:26.257223   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219220622-3920 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920
	I0310 21:02:26.253396   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210309234032-4944 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944
	I0310 21:02:26.265095   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106215525-1984 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984
	I0310 21:02:26.265095   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210114204234-6692 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692
	I0310 21:02:26.251385   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120214442-10992 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992
	I0310 21:02:26.267507   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310191609-6496 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496
	I0310 21:02:26.267906   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115023213-8464 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464
	I0310 21:02:26.273349   12032 out.go:129] * Verifying Kubernetes components...
	I0310 21:02:26.253396   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210119220838-6552 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552
	I0310 21:02:26.269092   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106002159-6856 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856
	I0310 21:02:26.269092   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210306072141-12056 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056
	I0310 21:02:26.269092   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210308233820-5396 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396
	I0310 21:02:26.269092   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106011107-6492 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492
	I0310 21:02:26.269092   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210126212539-5172 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172
	I0310 21:02:26.269092   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210224014800-800 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800
	I0310 21:02:26.269092   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210303214129-4588 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588
	I0310 21:02:26.269092   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115191024-3516 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516
	I0310 21:02:26.269314   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107002220-9088 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088
	I0310 21:02:26.269314   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304002630-1156 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156
	I0310 21:02:26.253396   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120175851-7432 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432
	I0310 21:02:26.831957   12032 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0310 21:02:26.839822   12032 cli_runner.go:115] Run: docker container inspect embed-certs-20210310205017-6496 --format={{.State.Status}}
	I0310 21:02:26.842770   12032 cli_runner.go:115] Run: docker container inspect embed-certs-20210310205017-6496 --format={{.State.Status}}
	I0310 21:02:27.189377   12032 cache.go:93] acquiring lock: {Name:mka2d29141752ca0c15ce625b99d3e259a454634 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:27.190593   12032 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 exists
	I0310 21:02:27.191049   12032 cache.go:82] cache image "minikube-local-cache-test:functional-20210105233232-2512" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210105233232-2512" took 940.6498ms
	I0310 21:02:27.191049   12032 cache.go:66] save to tar file minikube-local-cache-test:functional-20210105233232-2512 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 succeeded
	I0310 21:02:27.357295   12032 cache.go:93] acquiring lock: {Name:mkab31196e3bf71b9c1e6a1e38e57ec6fb030bbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:27.357295   12032 cache.go:93] acquiring lock: {Name:mk634154e9c95d6e5b156154f097cbabdedf9f3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:27.359332   12032 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 exists
	I0310 21:02:27.359332   12032 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 exists
	I0310 21:02:27.360306   12032 cache.go:82] cache image "minikube-local-cache-test:functional-20210301195830-5700" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210301195830-5700" took 1.1096768s
	I0310 21:02:27.360306   12032 cache.go:66] save to tar file minikube-local-cache-test:functional-20210301195830-5700 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 succeeded
	I0310 21:02:27.360306   12032 cache.go:82] cache image "minikube-local-cache-test:functional-20210220004129-7452" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210220004129-7452" took 1.1096768s
	I0310 21:02:27.360306   12032 cache.go:66] save to tar file minikube-local-cache-test:functional-20210220004129-7452 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 succeeded
	I0310 21:02:27.363296   12032 cache.go:93] acquiring lock: {Name:mk5aaf725ee95074b60d5acdb56999da11d0d967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:27.364611   12032 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 exists
	I0310 21:02:27.368485   12032 cache.go:82] cache image "minikube-local-cache-test:functional-20210213143925-7440" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210213143925-7440" took 1.1180866s
	I0310 21:02:27.368485   12032 cache.go:66] save to tar file minikube-local-cache-test:functional-20210213143925-7440 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 succeeded
	I0310 21:02:27.444067   12032 cache.go:93] acquiring lock: {Name:mk1b277a131d0149dc1f34c6a5df09591c284c3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:27.445425   12032 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 exists
	I0310 21:02:27.445679   12032 cache.go:82] cache image "minikube-local-cache-test:functional-20210128021318-232" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210128021318-232" took 1.1903713s
	I0310 21:02:27.445679   12032 cache.go:66] save to tar file minikube-local-cache-test:functional-20210128021318-232 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 succeeded
	I0310 21:02:27.465285   12032 cache.go:93] acquiring lock: {Name:mk17b3617b8bc7c68f0fe3347037485ee44000e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:27.466208   12032 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 exists
	I0310 21:02:27.466476   12032 cache.go:93] acquiring lock: {Name:mk3f9eb5a6922e3da2b5e642fe1460b5c7a33453 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:27.466476   12032 cache.go:82] cache image "minikube-local-cache-test:functional-20210225231842-5736" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210225231842-5736" took 1.2130844s
	I0310 21:02:27.466476   12032 cache.go:66] save to tar file minikube-local-cache-test:functional-20210225231842-5736 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 succeeded
	I0310 21:02:27.467102   12032 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 exists
	I0310 21:02:27.467681   12032 cache.go:82] cache image "minikube-local-cache-test:functional-20210107190945-8748" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210107190945-8748" took 1.215816s
	I0310 21:02:27.468070   12032 cache.go:66] save to tar file minikube-local-cache-test:functional-20210107190945-8748 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 succeeded
	I0310 21:02:27.588139   12032 cache.go:93] acquiring lock: {Name:mkf6f90f079186654799fde8101b48612aa6f339 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:27.588484   12032 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 exists
	I0310 21:02:27.588484   12032 cache.go:82] cache image "minikube-local-cache-test:functional-20210212145109-352" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210212145109-352" took 1.3373921s
	I0310 21:02:27.589089   12032 cache.go:66] save to tar file minikube-local-cache-test:functional-20210212145109-352 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 succeeded
	I0310 21:02:27.594261   12032 cache.go:93] acquiring lock: {Name:mk0c64ba734a0cdbeae55b08bb0b1b6723a680c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:27.595266   12032 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 exists
	I0310 21:02:27.595266   12032 cache.go:93] acquiring lock: {Name:mk6cdb668632330066d74bea74662e26e6c7633f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:27.597067   12032 cache.go:82] cache image "minikube-local-cache-test:functional-20210310083645-5040" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210310083645-5040" took 1.3429128s
	I0310 21:02:27.597067   12032 cache.go:66] save to tar file minikube-local-cache-test:functional-20210310083645-5040 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 succeeded
	I0310 21:02:27.601439   12032 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 exists
	I0310 21:02:27.646896   12032 cache.go:82] cache image "minikube-local-cache-test:functional-20210106215525-1984" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106215525-1984" took 1.3818061s
	I0310 21:02:27.647261   12032 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106215525-1984 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 succeeded
	I0310 21:02:27.757022   12032 cache.go:93] acquiring lock: {Name:mkbc5485bf0e792523a58cf470a7622695547966 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:27.757432   12032 cache.go:93] acquiring lock: {Name:mkfe8ccab311cf6d2666a7508a8e979857b9770b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:27.757432   12032 cache.go:93] acquiring lock: {Name:mkd8dd26dee4471c50a16459e3e56a843fbe7183 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:27.758159   12032 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 exists
	I0310 21:02:27.758506   12032 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 exists
	I0310 21:02:27.758506   12032 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 exists
	I0310 21:02:27.758506   12032 cache.go:82] cache image "minikube-local-cache-test:functional-20210304184021-4052" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210304184021-4052" took 1.5056534s
	I0310 21:02:27.758506   12032 cache.go:66] save to tar file minikube-local-cache-test:functional-20210304184021-4052 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 succeeded
	I0310 21:02:27.759060   12032 cache.go:82] cache image "minikube-local-cache-test:functional-20210219145454-9520" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210219145454-9520" took 1.5062077s
	I0310 21:02:27.759060   12032 cache.go:66] save to tar file minikube-local-cache-test:functional-20210219145454-9520 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 succeeded
	I0310 21:02:27.759628   12032 cache.go:82] cache image "minikube-local-cache-test:functional-20210120231122-7024" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120231122-7024" took 1.509s
	I0310 21:02:27.759628   12032 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120231122-7024 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 succeeded
	I0310 21:02:27.798455   12032 cache.go:93] acquiring lock: {Name:mk5de4935501776b790bd29801e913c817cce9cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:27.799530   12032 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 exists
	I0310 21:02:27.803502   12032 cache.go:82] cache image "minikube-local-cache-test:functional-20210123004019-5372" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210123004019-5372" took 1.5481958s
	I0310 21:02:27.803502   12032 cache.go:66] save to tar file minikube-local-cache-test:functional-20210123004019-5372 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 succeeded
	I0310 21:02:27.805485   12032 cache.go:93] acquiring lock: {Name:mkfbc537176e4a7054a8ff78a35c4c45ad4889d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:27.808461   12032 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 exists
	I0310 21:02:27.815465   12032 cache.go:82] cache image "minikube-local-cache-test:functional-20210310191609-6496" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210310191609-6496" took 1.5479627s
	I0310 21:02:27.815465   12032 cache.go:66] save to tar file minikube-local-cache-test:functional-20210310191609-6496 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 succeeded
	I0310 21:02:27.817463   12032 cache.go:93] acquiring lock: {Name:mk9829358ec5b615719a34ef2b4c8c5314131bbf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:27.817463   12032 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 exists
	I0310 21:02:27.817463   12032 cache.go:82] cache image "minikube-local-cache-test:functional-20210309234032-4944" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210309234032-4944" took 1.5523737s
	I0310 21:02:27.817463   12032 cache.go:66] save to tar file minikube-local-cache-test:functional-20210309234032-4944 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 succeeded
	I0310 21:02:27.968154   12032 cache.go:93] acquiring lock: {Name:mkcc9db267470950a8bd1fd66660e4d7ce7fb11a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:27.969218   12032 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 exists
	I0310 21:02:27.969953   12032 cache.go:82] cache image "minikube-local-cache-test:functional-20210120175851-7432" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120175851-7432" took 1.6848908s
	I0310 21:02:27.969953   12032 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120175851-7432 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 succeeded
	I0310 21:02:27.986653   12032 cache.go:93] acquiring lock: {Name:mk3b31b5d9c66e58bae5a84d594af5a71c06fef6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:27.986879   12032 cache.go:93] acquiring lock: {Name:mk30e0addf8d941e729fce2e9e6e58f4831fa9bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:27.987305   12032 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 exists
	I0310 21:02:27.987736   12032 cache.go:82] cache image "minikube-local-cache-test:functional-20210115023213-8464" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210115023213-8464" took 1.7198347s
	I0310 21:02:27.987305   12032 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 exists
	I0310 21:02:27.987736   12032 cache.go:66] save to tar file minikube-local-cache-test:functional-20210115023213-8464 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 succeeded
	I0310 21:02:27.987736   12032 cache.go:82] cache image "minikube-local-cache-test:functional-20210114204234-6692" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210114204234-6692" took 1.7226466s
	I0310 21:02:27.987736   12032 cache.go:66] save to tar file minikube-local-cache-test:functional-20210114204234-6692 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 succeeded
	I0310 21:02:28.009591   12032 cache.go:93] acquiring lock: {Name:mk74beba772a17b6c0792b37e1f3c84b8ae19a48 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:28.009591   12032 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 exists
	I0310 21:02:28.010562   12032 cache.go:82] cache image "minikube-local-cache-test:functional-20210119220838-6552" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210119220838-6552" took 1.736473s
	I0310 21:02:28.010562   12032 cache.go:66] save to tar file minikube-local-cache-test:functional-20210119220838-6552 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 succeeded
	I0310 21:02:28.014939   12032 cache.go:93] acquiring lock: {Name:mk6a939d4adc5b1a82c643cd3a34748a52c3e47d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:28.015583   12032 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 exists
	I0310 21:02:28.015583   12032 cache.go:82] cache image "minikube-local-cache-test:functional-20210112045103-7160" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210112045103-7160" took 1.7583661s
	I0310 21:02:28.015583   12032 cache.go:66] save to tar file minikube-local-cache-test:functional-20210112045103-7160 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 succeeded
	I0310 21:02:28.025891   12032 cache.go:93] acquiring lock: {Name:mk5795abf13cc8b7192a417aee0e32dee2b0467c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:28.027874   12032 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 exists
	I0310 21:02:28.028503   12032 cache.go:82] cache image "minikube-local-cache-test:functional-20210126212539-5172" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210126212539-5172" took 1.7464054s
	I0310 21:02:28.028503   12032 cache.go:66] save to tar file minikube-local-cache-test:functional-20210126212539-5172 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 succeeded
	I0310 21:02:28.039889   12032 cache.go:93] acquiring lock: {Name:mk84b2a6095b735cf889c519b5874f080b2e195a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:28.040316   12032 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 exists
	I0310 21:02:28.040989   12032 cache.go:82] cache image "minikube-local-cache-test:functional-20210219220622-3920" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210219220622-3920" took 1.7837723s
	I0310 21:02:28.040989   12032 cache.go:66] save to tar file minikube-local-cache-test:functional-20210219220622-3920 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 succeeded
	I0310 21:02:28.045668   12032 cache.go:93] acquiring lock: {Name:mkad0f7b57f74c6c730129cb06800211b2e1dbab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:28.046583   12032 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 exists
	I0310 21:02:28.047043   12032 cache.go:82] cache image "minikube-local-cache-test:functional-20210120022529-1140" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120022529-1140" took 1.7928837s
	I0310 21:02:28.047266   12032 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120022529-1140 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 succeeded
	I0310 21:02:28.073461   12032 cache.go:93] acquiring lock: {Name:mk6e311fb193a5d30b249afa7255673dd7fc56b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:28.074019   12032 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 exists
	I0310 21:02:28.074830   12032 cache.go:82] cache image "minikube-local-cache-test:functional-20210107002220-9088" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210107002220-9088" took 1.7916001s
	I0310 21:02:28.075188   12032 cache.go:66] save to tar file minikube-local-cache-test:functional-20210107002220-9088 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 succeeded
	I0310 21:02:28.077561   12032 cache.go:93] acquiring lock: {Name:mkd8c6f272dd5cb91af2d272705820baa75c5410 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:28.077933   12032 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 exists
	I0310 21:02:28.077933   12032 cache.go:82] cache image "minikube-local-cache-test:functional-20210120214442-10992" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120214442-10992" took 1.8114561s
	I0310 21:02:28.077933   12032 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120214442-10992 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 succeeded
	I0310 21:02:28.081679   12032 cache.go:93] acquiring lock: {Name:mkb552f0ca2d9ea9965feba56885295e4020632a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:28.082622   12032 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 exists
	I0310 21:02:28.082898   12032 cache.go:93] acquiring lock: {Name:mkf96894dc732adcd1c856f98a56d65b2646f03e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:28.083212   12032 cache.go:82] cache image "minikube-local-cache-test:functional-20210106011107-6492" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106011107-6492" took 1.8012531s
	I0310 21:02:28.083212   12032 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106011107-6492 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 succeeded
	I0310 21:02:28.083680   12032 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 exists
	I0310 21:02:28.084973   12032 cache.go:82] cache image "minikube-local-cache-test:functional-20210115191024-3516" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210115191024-3516" took 1.8017432s
	I0310 21:02:28.084973   12032 cache.go:66] save to tar file minikube-local-cache-test:functional-20210115191024-3516 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 succeeded
	I0310 21:02:28.086481   12032 cache.go:93] acquiring lock: {Name:mk67b81c694fa10d152b7bddece57d430edf9ebf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:28.087061   12032 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 exists
	I0310 21:02:28.087678   12032 cache.go:82] cache image "minikube-local-cache-test:functional-20210308233820-5396" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210308233820-5396" took 1.8061782s
	I0310 21:02:28.087678   12032 cache.go:66] save to tar file minikube-local-cache-test:functional-20210308233820-5396 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 succeeded
	I0310 21:02:28.087822   12032 cache.go:93] acquiring lock: {Name:mkf74fc1bdd437dc31195924ffc024252ed6282c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:28.088202   12032 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 exists
	I0310 21:02:28.089343   12032 cache.go:82] cache image "minikube-local-cache-test:functional-20210304002630-1156" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210304002630-1156" took 1.8061129s
	I0310 21:02:28.089717   12032 cache.go:66] save to tar file minikube-local-cache-test:functional-20210304002630-1156 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 succeeded
	I0310 21:02:28.098545   12032 cache.go:93] acquiring lock: {Name:mk413751f23d1919a2f2162501025c6af3a2ad81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:28.099340   12032 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 exists
	I0310 21:02:28.100406   12032 cache.go:82] cache image "minikube-local-cache-test:functional-20210106002159-6856" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106002159-6856" took 1.8247846s
	I0310 21:02:28.100584   12032 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106002159-6856 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 succeeded
	I0310 21:02:28.100853   12032 cache.go:93] acquiring lock: {Name:mkc9a1c11079e53fedb3439203deb8305be63b2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:28.100853   12032 cache.go:93] acquiring lock: {Name:mk5d79a216b121a22277fa476959e69d0268a006 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:28.101191   12032 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 exists
	I0310 21:02:28.101473   12032 cache.go:82] cache image "minikube-local-cache-test:functional-20210303214129-4588" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210303214129-4588" took 1.8182429s
	I0310 21:02:28.101473   12032 cache.go:66] save to tar file minikube-local-cache-test:functional-20210303214129-4588 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 succeeded
	I0310 21:02:28.101473   12032 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 exists
	I0310 21:02:28.101810   12032 cache.go:82] cache image "minikube-local-cache-test:functional-20210224014800-800" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210224014800-800" took 1.819214s
	I0310 21:02:28.102141   12032 cache.go:66] save to tar file minikube-local-cache-test:functional-20210224014800-800 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 succeeded
	I0310 21:02:28.111052   12032 cache.go:93] acquiring lock: {Name:mkb0cb73f942a657cd3f168830d30cb3598567a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:28.111511   12032 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 exists
	I0310 21:02:28.112620   12032 cache.go:82] cache image "minikube-local-cache-test:functional-20210306072141-12056" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210306072141-12056" took 1.8359883s
	I0310 21:02:28.112620   12032 cache.go:66] save to tar file minikube-local-cache-test:functional-20210306072141-12056 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 succeeded
	I0310 21:02:28.118464   12032 cache.go:73] Successfully saved all images to host disk.
	I0310 21:02:28.137075   12032 cli_runner.go:115] Run: docker container inspect embed-certs-20210310205017-6496 --format={{.State.Status}}
	I0310 21:02:28.442286   12032 cli_runner.go:168] Completed: docker container inspect embed-certs-20210310205017-6496 --format={{.State.Status}}: (1.6020367s)
	I0310 21:02:28.460390   12032 cli_runner.go:168] Completed: docker container inspect embed-certs-20210310205017-6496 --format={{.State.Status}}: (1.6176248s)
	I0310 21:02:28.465043   12032 out.go:129]   - Using image gcr.io/k8s-minikube/storage-provisioner:v4
	I0310 21:02:28.465404   12032 addons.go:253] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0310 21:02:28.465576   12032 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0310 21:02:28.474656   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:02:28.790074   12032 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 21:02:28.798074   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:02:29.066709   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:02:29.270355   12032 ssh_runner.go:189] Completed: sudo systemctl is-active --quiet service kubelet: (2.4381677s)
	I0310 21:02:29.284536   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:02:29.486901   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:02:29.992302   12032 pod_ready.go:36] extra waiting for kube-system core pods [kube-dns etcd kube-apiserver kube-controller-manager kube-proxy kube-scheduler] to be Ready ...
	I0310 21:02:29.992540   12032 pod_ready.go:59] waiting 6m0s for pod with "kube-dns" label in "kube-system" namespace to be Ready ...
	I0310 21:02:30.861547   12032 addons.go:134] Setting addon default-storageclass=true in "embed-certs-20210310205017-6496"
	W0310 21:02:30.861547   12032 addons.go:143] addon default-storageclass should already be in state true
	I0310 21:02:30.861839   12032 host.go:66] Checking if "embed-certs-20210310205017-6496" exists ...
	I0310 21:02:30.876842   12032 cli_runner.go:115] Run: docker container inspect embed-certs-20210310205017-6496 --format={{.State.Status}}
	I0310 21:02:31.482053   12032 addons.go:253] installing /etc/kubernetes/addons/storageclass.yaml
	I0310 21:02:31.482305   12032 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0310 21:02:31.494382   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:02:32.065054   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:02:32.571097   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:02:34.277617   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:02:34.987462   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:02:35.491739   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:02:35.876563   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:02:36.309823   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:02:37.124338   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:02:38.800104   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:02:39.589742   12032 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0310 21:02:40.633735   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:02:42.955557   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:02:43.752521   12032 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0310 21:02:44.015004   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:02:44.698789   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:02:45.149569   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:02:45.782219   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:02:46.469361   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:02:47.190466   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:02:48.258222   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:02:49.209423   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:02:49.914135   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:02:50.388134   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:02:51.418784   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:02:52.064198   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:02:52.715704   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:02:55.907968   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:02:56.457812   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:02:56.901500   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:02:57.355209   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:02:57.999204   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:02:58.663569   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:02:59.359901   12032 ssh_runner.go:189] Completed: docker images --format {{.Repository}}:{{.Tag}}: (30.5697817s)
	I0310 21:02:59.360059   12032 docker.go:423] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.20.2
	k8s.gcr.io/kube-controller-manager:v1.20.2
	k8s.gcr.io/kube-apiserver:v1.20.2
	k8s.gcr.io/kube-scheduler:v1.20.2
	kubernetesui/dashboard:v2.1.0
	gcr.io/k8s-minikube/storage-provisioner:v4
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0310 21:02:59.360207   12032 docker.go:429] minikube-local-cache-test:functional-20210212145109-352 wasn't preloaded
	I0310 21:02:59.360207   12032 cache_images.go:76] LoadImages start: [minikube-local-cache-test:functional-20210212145109-352 minikube-local-cache-test:functional-20210219145454-9520 minikube-local-cache-test:functional-20210220004129-7452 minikube-local-cache-test:functional-20210225231842-5736 minikube-local-cache-test:functional-20210301195830-5700 minikube-local-cache-test:functional-20210309234032-4944 minikube-local-cache-test:functional-20210105233232-2512 minikube-local-cache-test:functional-20210120022529-1140 minikube-local-cache-test:functional-20210310191609-6496 minikube-local-cache-test:functional-20210107190945-8748 minikube-local-cache-test:functional-20210304184021-4052 minikube-local-cache-test:functional-20210120175851-7432 minikube-local-cache-test:functional-20210120214442-10992 minikube-local-cache-test:functional-20210310083645-5040 minikube-local-cache-test:functional-20210112045103-7160 minikube-local-cache-test:functional-20210119220838-6552 minikube-local-cache-test:functional-20210128021318-232 minikube-local-cache-test:functional-20210106215525-1984 minikube-local-cache-test:functional-20210114204234-6692 minikube-local-cache-test:functional-20210120231122-7024 minikube-local-cache-test:functional-20210123004019-5372 minikube-local-cache-test:functional-20210306072141-12056 minikube-local-cache-test:functional-20210308233820-5396 minikube-local-cache-test:functional-20210106002159-6856 minikube-local-cache-test:functional-20210106011107-6492 minikube-local-cache-test:functional-20210126212539-5172 minikube-local-cache-test:functional-20210219220622-3920 minikube-local-cache-test:functional-20210224014800-800 minikube-local-cache-test:functional-20210304002630-1156 minikube-local-cache-test:functional-20210115023213-8464 minikube-local-cache-test:functional-20210115191024-3516 minikube-local-cache-test:functional-20210303214129-4588 minikube-local-cache-test:functional-20210107002220-9088 minikube-local-cache-test:functional-20210213143925-7440]
	I0310 21:02:59.474451   12032 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210114204234-6692
	I0310 21:02:59.506624   12032 image.go:168] retrieving image: minikube-local-cache-test:functional-20210106215525-1984
	I0310 21:02:59.508299   12032 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210212145109-352
	I0310 21:02:59.520174   12032 image.go:168] retrieving image: minikube-local-cache-test:functional-20210112045103-7160
	I0310 21:02:59.535180   12032 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210126212539-5172
	I0310 21:02:59.536197   12032 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210310083645-5040
	I0310 21:02:59.546183   12032 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210115191024-3516
	I0310 21:02:59.557183   12032 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210115023213-8464
	I0310 21:02:59.558182   12032 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210304002630-1156
	I0310 21:02:59.563196   12032 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120022529-1140
	I0310 21:02:59.566184   12032 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210213143925-7440
	I0310 21:02:59.588307   12032 image.go:168] retrieving image: minikube-local-cache-test:functional-20210107002220-9088
	I0310 21:02:59.591308   12032 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210224014800-800
	I0310 21:02:59.598311   12032 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210310191609-6496
	I0310 21:02:59.605307   12032 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210303214129-4588
	I0310 21:02:59.614421   12032 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210112045103-7160: Error response from daemon: reference does not exist
	I0310 21:02:59.620396   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:02:59.649440   12032 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210225231842-5736
	I0310 21:02:59.650424   12032 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210119220838-6552
	I0310 21:02:59.675308   12032 image.go:168] retrieving image: minikube-local-cache-test:functional-20210105233232-2512
	I0310 21:02:59.683133   12032 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210107002220-9088: Error response from daemon: reference does not exist
	W0310 21:02:59.691414   12032 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:02:59.691414   12032 retry.go:31] will retry after 276.165072ms: ssh: rejected: connect failed (open failed)
	W0310 21:02:59.692430   12032 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:02:59.692430   12032 retry.go:31] will retry after 360.127272ms: ssh: rejected: connect failed (open failed)
	W0310 21:02:59.692430   12032 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:02:59.692430   12032 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:02:59.692430   12032 retry.go:31] will retry after 291.140013ms: ssh: rejected: connect failed (open failed)
	I0310 21:02:59.692430   12032 retry.go:31] will retry after 234.428547ms: ssh: rejected: connect failed (open failed)
	W0310 21:02:59.695618   12032 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:02:59.696453   12032 retry.go:31] will retry after 231.159374ms: ssh: rejected: connect failed (open failed)
	I0310 21:02:59.697840   12032 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210309234032-4944
	I0310 21:02:59.697840   12032 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210128021318-232
	I0310 21:02:59.723479   12032 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210220004129-7452
	I0310 21:02:59.723982   12032 image.go:168] retrieving image: minikube-local-cache-test:functional-20210106002159-6856
	I0310 21:02:59.725620   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:02:59.736612   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:02:59.739123   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:02:59.765625   12032 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210105233232-2512: Error response from daemon: reference does not exist
	I0310 21:02:59.768634   12032 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210301195830-5700
	I0310 21:02:59.779650   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	W0310 21:02:59.799606   12032 image.go:185] authn lookup for minikube-local-cache-test:functional-20210112045103-7160 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 21:02:59.840637   12032 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120231122-7024
	I0310 21:02:59.841701   12032 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210308233820-5396
	W0310 21:02:59.846749   12032 image.go:185] authn lookup for minikube-local-cache-test:functional-20210107002220-9088 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 21:02:59.852830   12032 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210106002159-6856: Error response from daemon: reference does not exist
	I0310 21:02:59.874722   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:02:59.876727   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:02:59.881724   12032 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210304184021-4052
	I0310 21:02:59.881724   12032 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210306072141-12056
	I0310 21:02:59.887725   12032 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210123004019-5372
	I0310 21:02:59.909001   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:02:59.911184   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:02:59.918731   12032 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210106215525-1984: Error response from daemon: reference does not exist
	I0310 21:02:59.921181   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:02:59.964807   12032 image.go:168] retrieving image: minikube-local-cache-test:functional-20210107190945-8748
	I0310 21:02:59.974241   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:02:59.978241   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:02:59.984277   12032 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210219145454-9520
	W0310 21:02:59.989244   12032 image.go:185] authn lookup for minikube-local-cache-test:functional-20210105233232-2512 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 21:02:59.991272   12032 image.go:168] retrieving image: minikube-local-cache-test:functional-20210106011107-6492
	I0310 21:03:00.002247   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:00.015248   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:00.017248   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:00.086144   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:00.088369   12032 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210219220622-3920
	I0310 21:03:00.089964   12032 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120175851-7432
	I0310 21:03:00.114338   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:00.115915   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:00.118331   12032 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120214442-10992
	I0310 21:03:00.131325   12032 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210107190945-8748: Error response from daemon: reference does not exist
	I0310 21:03:00.133337   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:00.150081   12032 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210112045103-7160 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210112045103-7160: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 21:03:00.150297   12032 cache_images.go:104] "minikube-local-cache-test:functional-20210112045103-7160" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210112045103-7160
	I0310 21:03:00.150297   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210112045103-7160 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160
	I0310 21:03:00.150297   12032 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160
	I0310 21:03:00.152814   12032 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210106011107-6492: Error response from daemon: reference does not exist
	W0310 21:03:00.154128   12032 image.go:185] authn lookup for minikube-local-cache-test:functional-20210106002159-6856 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 21:03:00.163463   12032 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210107002220-9088 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210107002220-9088: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 21:03:00.163463   12032 cache_images.go:104] "minikube-local-cache-test:functional-20210107002220-9088" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210107002220-9088
	I0310 21:03:00.163463   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107002220-9088 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088
	I0310 21:03:00.164108   12032 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088
	I0310 21:03:00.189466   12032 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210105233232-2512 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210105233232-2512: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 21:03:00.189466   12032 cache_images.go:104] "minikube-local-cache-test:functional-20210105233232-2512" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210105233232-2512
	I0310 21:03:00.189710   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210105233232-2512 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512
	I0310 21:03:00.189710   12032 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512
	W0310 21:03:00.240431   12032 image.go:185] authn lookup for minikube-local-cache-test:functional-20210106215525-1984 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 21:03:00.257692   12032 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210106002159-6856 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210106002159-6856: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 21:03:00.257692   12032 cache_images.go:104] "minikube-local-cache-test:functional-20210106002159-6856" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210106002159-6856
	I0310 21:03:00.257692   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106002159-6856 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856
	I0310 21:03:00.257692   12032 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856
	I0310 21:03:00.318802   12032 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160
	I0310 21:03:00.358281   12032 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210106215525-1984 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210106215525-1984: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 21:03:00.358281   12032 cache_images.go:104] "minikube-local-cache-test:functional-20210106215525-1984" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210106215525-1984
	I0310 21:03:00.358281   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106215525-1984 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984
	I0310 21:03:00.358281   12032 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984
	I0310 21:03:00.361252   12032 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088
	I0310 21:03:00.390240   12032 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512
	I0310 21:03:00.479315   12032 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856
	I0310 21:03:00.515306   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:00.541580   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:00.578162   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:00.586150   12032 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984
	I0310 21:03:00.587159   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:00.605502   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:00.667192   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	W0310 21:03:00.854680   12032 image.go:185] authn lookup for minikube-local-cache-test:functional-20210107190945-8748 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0310 21:03:00.908877   12032 image.go:185] authn lookup for minikube-local-cache-test:functional-20210106011107-6492 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 21:03:00.999969   12032 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210107190945-8748 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210107190945-8748: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 21:03:00.999969   12032 cache_images.go:104] "minikube-local-cache-test:functional-20210107190945-8748" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210107190945-8748
	I0310 21:03:00.999969   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107190945-8748 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748
	I0310 21:03:00.999969   12032 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748
	I0310 21:03:01.019488   12032 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748
	I0310 21:03:01.046330   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:01.065833   12032 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210106011107-6492 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210106011107-6492: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 21:03:01.065833   12032 cache_images.go:104] "minikube-local-cache-test:functional-20210106011107-6492" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210106011107-6492
	I0310 21:03:01.065833   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106011107-6492 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492
	I0310 21:03:01.065833   12032 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492
	I0310 21:03:01.097697   12032 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492
	I0310 21:03:01.132674   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:01.583566   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.841478s)
	I0310 21:03:01.583566   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:01.647475   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.7727579s)
	I0310 21:03:01.648188   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:01.780033   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (2.04329s)
	I0310 21:03:01.780033   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:01.918190   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.9029477s)
	I0310 21:03:01.918190   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:01.921713   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.9044703s)
	I0310 21:03:01.921997   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:01.969307   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.9910721s)
	I0310 21:03:01.969768   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:02.073671   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (2.2940282s)
	I0310 21:03:02.073919   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:02.110522   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.99619s)
	I0310 21:03:02.110817   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:02.145929   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (2.4203161s)
	I0310 21:03:02.146942   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:02.179378   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (2.3021002s)
	I0310 21:03:02.179558   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:02.183610   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.6683089s)
	I0310 21:03:02.183855   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:02.189695   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (2.2681233s)
	I0310 21:03:02.189915   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:02.207405   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (2.2040174s)
	I0310 21:03:02.207585   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:02.278104   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (2.162195s)
	I0310 21:03:02.278508   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:02.314113   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.7269586s)
	I0310 21:03:02.314611   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:02.320843   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.7426856s)
	I0310 21:03:02.321021   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:02.322371   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (2.4111933s)
	I0310 21:03:02.322582   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:02.327038   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.7849122s)
	I0310 21:03:02.327378   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:02.339267   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (2.2531295s)
	I0310 21:03:02.339267   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.7337702s)
	I0310 21:03:02.340385   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:02.340385   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:02.367611   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (2.3933768s)
	I0310 21:03:02.367611   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:02.381407   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (2.2477543s)
	I0310 21:03:02.381604   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:02.419311   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (2.5096441s)
	I0310 21:03:02.419311   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.3729848s)
	I0310 21:03:02.419311   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:02.419673   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:02.593385   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.4607154s)
	I0310 21:03:02.594334   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:02.771582   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:03:04.091832   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:03:04.783791   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:03:05.211027   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:03:06.771624   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:03:07.991316   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:03:08.955941   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:03:09.366591   12032 cache_images.go:104] "minikube-local-cache-test:functional-20210128021318-232" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:09.366591   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210128021318-232 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232
	I0310 21:03:09.366591   12032 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232
	I0310 21:03:09.366591   12032 cache_images.go:104] "minikube-local-cache-test:functional-20210212145109-352" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:09.366591   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210212145109-352 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352
	I0310 21:03:09.366591   12032 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352
	I0310 21:03:09.367661   12032 cache_images.go:104] "minikube-local-cache-test:functional-20210120231122-7024" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:09.367661   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120231122-7024 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024
	I0310 21:03:09.367661   12032 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024
	I0310 21:03:09.368150   12032 cache_images.go:104] "minikube-local-cache-test:functional-20210220004129-7452" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:09.368150   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210220004129-7452 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452
	I0310 21:03:09.368150   12032 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452
	I0310 21:03:09.368150   12032 cache_images.go:104] "minikube-local-cache-test:functional-20210219220622-3920" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:09.368150   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219220622-3920 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920
	I0310 21:03:09.368150   12032 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920
	I0310 21:03:09.368150   12032 cache_images.go:104] "minikube-local-cache-test:functional-20210308233820-5396" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:09.368150   12032 cache_images.go:104] "minikube-local-cache-test:functional-20210219145454-9520" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:09.368150   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210308233820-5396 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396
	I0310 21:03:09.368150   12032 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396
	I0310 21:03:09.368150   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219145454-9520 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520
	I0310 21:03:09.368150   12032 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520
	I0310 21:03:09.368150   12032 cache_images.go:104] "minikube-local-cache-test:functional-20210126212539-5172" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:09.369066   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210126212539-5172 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172
	I0310 21:03:09.369066   12032 cache_images.go:104] "minikube-local-cache-test:functional-20210304002630-1156" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:09.369273   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304002630-1156 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156
	I0310 21:03:09.369273   12032 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156
	I0310 21:03:09.369273   12032 cache_images.go:104] "minikube-local-cache-test:functional-20210224014800-800" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:09.370993   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210224014800-800 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800
	I0310 21:03:09.369273   12032 cache_images.go:104] "minikube-local-cache-test:functional-20210120214442-10992" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:09.369273   12032 cache_images.go:104] "minikube-local-cache-test:functional-20210114204234-6692" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:09.369273   12032 cache_images.go:104] "minikube-local-cache-test:functional-20210310191609-6496" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:09.369273   12032 cache_images.go:104] "minikube-local-cache-test:functional-20210115023213-8464" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:09.369273   12032 cache_images.go:104] "minikube-local-cache-test:functional-20210310083645-5040" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:09.369273   12032 cache_images.go:104] "minikube-local-cache-test:functional-20210306072141-12056" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:09.369273   12032 cache_images.go:104] "minikube-local-cache-test:functional-20210120022529-1140" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:09.369273   12032 cache_images.go:104] "minikube-local-cache-test:functional-20210120175851-7432" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:09.369273   12032 cache_images.go:104] "minikube-local-cache-test:functional-20210123004019-5372" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:09.369273   12032 cache_images.go:104] "minikube-local-cache-test:functional-20210301195830-5700" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:09.369273   12032 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172
	I0310 21:03:09.369273   12032 cache_images.go:104] "minikube-local-cache-test:functional-20210213143925-7440" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:09.369273   12032 cache_images.go:104] "minikube-local-cache-test:functional-20210304184021-4052" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:09.369273   12032 cache_images.go:104] "minikube-local-cache-test:functional-20210115191024-3516" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:09.369700   12032 cache_images.go:104] "minikube-local-cache-test:functional-20210119220838-6552" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:09.369700   12032 cache_images.go:104] "minikube-local-cache-test:functional-20210309234032-4944" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:09.369700   12032 cache_images.go:104] "minikube-local-cache-test:functional-20210303214129-4588" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:09.368150   12032 cache_images.go:104] "minikube-local-cache-test:functional-20210225231842-5736" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:09.371885   12032 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800
	I0310 21:03:09.371885   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120214442-10992 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992
	I0310 21:03:09.372298   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210114204234-6692 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692
	I0310 21:03:09.373027   12032 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692
	I0310 21:03:09.373027   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310191609-6496 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496
	I0310 21:03:09.374874   12032 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496
	I0310 21:03:09.376211   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115023213-8464 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464
	I0310 21:03:09.376211   12032 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464
	I0310 21:03:09.377539   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310083645-5040 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040
	I0310 21:03:09.377539   12032 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040
	I0310 21:03:09.378833   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210306072141-12056 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056
	I0310 21:03:09.378833   12032 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056
	I0310 21:03:09.380145   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120022529-1140 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140
	I0310 21:03:09.380145   12032 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140
	I0310 21:03:09.381428   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115191024-3516 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516
	I0310 21:03:09.381428   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210309234032-4944 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944
	I0310 21:03:09.381428   12032 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516
	I0310 21:03:09.381428   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210301195830-5700 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700
	I0310 21:03:09.381428   12032 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700
	I0310 21:03:09.381428   12032 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992
	I0310 21:03:09.381984   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120175851-7432 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432
	I0310 21:03:09.381984   12032 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432
	I0310 21:03:09.381428   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210303214129-4588 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588
	I0310 21:03:09.381428   12032 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944
	I0310 21:03:09.381984   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210225231842-5736 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736
	I0310 21:03:09.382443   12032 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736
	I0310 21:03:09.382876   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304184021-4052 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052
	I0310 21:03:09.383763   12032 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052
	I0310 21:03:09.382876   12032 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588
	I0310 21:03:09.381428   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210119220838-6552 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552
	I0310 21:03:09.390623   12032 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552
	I0310 21:03:09.381984   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210213143925-7440 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440
	I0310 21:03:09.390623   12032 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440
	I0310 21:03:09.381984   12032 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210123004019-5372 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372
	I0310 21:03:09.390623   12032 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372
	I0310 21:03:09.473314   12032 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232
	I0310 21:03:09.498659   12032 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352
	I0310 21:03:09.648933   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:03:09.761089   12032 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920
	I0310 21:03:09.872134   12032 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024
	I0310 21:03:09.903789   12032 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452
	I0310 21:03:09.904798   12032 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992
	I0310 21:03:09.904798   12032 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520
	I0310 21:03:09.913581   12032 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396
	I0310 21:03:09.965993   12032 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056
	I0310 21:03:09.966349   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:09.967228   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:09.968097   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:09.996034   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:10.008048   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:10.020184   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:10.021959   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:10.021959   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:10.033477   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:10.038481   12032 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516
	I0310 21:03:10.052826   12032 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944
	I0310 21:03:10.062544   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:10.073703   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:10.112194   12032 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588
	I0310 21:03:10.129452   12032 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800
	I0310 21:03:10.132851   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:10.132851   12032 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372
	I0310 21:03:10.151940   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:10.167688   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:10.191682   12032 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496
	I0310 21:03:10.195928   12032 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156
	I0310 21:03:10.232836   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:10.240825   12032 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700
	I0310 21:03:10.241835   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:10.241835   12032 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040
	I0310 21:03:10.242848   12032 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692
	I0310 21:03:10.242848   12032 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140
	I0310 21:03:10.242848   12032 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052
	I0310 21:03:10.242848   12032 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432
	I0310 21:03:10.242848   12032 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440
	I0310 21:03:10.242848   12032 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172
	I0310 21:03:10.242848   12032 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736
	I0310 21:03:10.254868   12032 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464
	I0310 21:03:10.262228   12032 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552
	I0310 21:03:10.391060   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:10.405627   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:10.411577   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:10.415632   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:10.421101   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:10.423349   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:10.434637   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:10.435093   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:10.439988   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:10.440356   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:10.453458   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:11.711685   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.744599s)
	I0310 21:03:11.711685   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:11.719358   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.7515648s)
	I0310 21:03:11.719719   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:11.806779   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.8386875s)
	I0310 21:03:11.806779   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:11.866611   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.8695802s)
	I0310 21:03:11.866611   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:11.896721   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.4552705s)
	I0310 21:03:11.901608   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:11.964417   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.8313698s)
	I0310 21:03:11.965036   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:12.001067   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.5799705s)
	I0310 21:03:12.001067   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:12.017347   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.5822592s)
	I0310 21:03:12.017793   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:12.027520   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (2.0055662s)
	I0310 21:03:12.027803   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:12.033734   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (2.0115165s)
	I0310 21:03:12.033897   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:12.063635   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (2.0555928s)
	I0310 21:03:12.063635   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:12.155033   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.9866415s)
	I0310 21:03:12.155714   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:12.164706   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.741362s)
	I0310 21:03:12.164947   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:12.191469   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.7563814s)
	I0310 21:03:12.192080   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:12.234530   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.7945476s)
	I0310 21:03:12.235460   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:12.259197   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (2.239019s)
	I0310 21:03:12.259582   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:12.295703   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.8889499s)
	I0310 21:03:12.296376   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:12.299704   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (2.057875s)
	I0310 21:03:12.300110   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:12.303720   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (2.1517868s)
	I0310 21:03:12.304021   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:12.317976   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (2.2842966s)
	I0310 21:03:12.318290   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:12.329988   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (2.2674501s)
	I0310 21:03:12.330422   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.9390544s)
	I0310 21:03:12.331599   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:12.332114   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:12.345846   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.9342748s)
	I0310 21:03:12.346509   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:12.409432   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.9559798s)
	I0310 21:03:12.410376   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:12.430659   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (2.3565365s)
	I0310 21:03:12.430974   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:12.444979   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (2.0293525s)
	I0310 21:03:12.445464   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:12.457241   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (2.224159s)
	I0310 21:03:12.457679   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:13.637328   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	W0310 21:03:14.430029   12032 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:03:14.430029   12032 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:03:14.430029   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160 (4096 bytes)
	W0310 21:03:14.430029   12032 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:03:14.459786   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:16.303474   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.8436927s)
	I0310 21:03:16.303936   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	W0310 21:03:16.892272   12032 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	W0310 21:03:16.892619   12032 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:03:16.892619   12032 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:03:16.893040   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512 (4096 bytes)
	I0310 21:03:16.901613   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:17.136403   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:03:17.516982   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	W0310 21:03:18.431924   12032 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	W0310 21:03:18.466904   12032 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	W0310 21:03:18.492152   12032 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:03:18.492152   12032 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:03:18.493201   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492 (4096 bytes)
	I0310 21:03:18.501887   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:19.132951   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:20.723486   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:03:21.995712   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:03:23.687494   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	W0310 21:03:23.892524   12032 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:03:23.892524   12032 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:03:23.892524   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588 (4096 bytes)
	W0310 21:03:23.995563   12032 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:03:23.995868   12032 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:03:23.995868   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040 (4096 bytes)
	I0310 21:03:24.104897   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:24.226405   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	W0310 21:03:24.745973   12032 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:03:24.745973   12032 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:03:24.745973   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396 (4096 bytes)
	I0310 21:03:24.799079   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:26.221935   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (2.1170444s)
	I0310 21:03:26.222905   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:26.385957   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (2.1592962s)
	I0310 21:03:26.386563   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:26.592405   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:03:26.753484   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.9544106s)
	I0310 21:03:26.754016   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:28.405374   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:03:29.621121   12032 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:03:31.909819   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	W0310 21:03:32.651336   12032 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:03:32.651336   12032 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:03:32.651336   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992 (4096 bytes)
	I0310 21:03:32.660138   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:33.295922   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:33.697037   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:35.427759   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	W0310 21:03:36.422717   12032 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:03:36.422956   12032 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:03:36.422956   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432 (4096 bytes)
	I0310 21:03:36.424668   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:36.791696   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:37.094551   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	W0310 21:03:37.393496   12032 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:03:37.393798   12032 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:03:37.393798   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520 (4096 bytes)
	I0310 21:03:37.414603   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	W0310 21:03:37.781748   12032 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:03:37.782186   12032 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:03:37.792582   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516 (4096 bytes)
	I0310 21:03:37.800898   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:38.040846   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:38.206743   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:38.470732   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:39.697923   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:42.106317   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	W0310 21:03:42.241529   12032 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:03:42.241529   12032 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:03:42.241529   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464 (4096 bytes)
	I0310 21:03:42.250056   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:03:42.849916   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:43.353604   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:45.586953   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:46.757714   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:48.312099   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:49.616060   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:50.644204   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:51.828915   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	W0310 21:03:52.333652   12032 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:03:52.334501   12032 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:03:52.335002   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496 (4096 bytes)
	I0310 21:03:52.343258   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	W0310 21:03:52.669140   12032 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	W0310 21:03:52.669355   12032 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:03:53.007319   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:03:53.792965   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:54.855368   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:56.263521   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:57.916681   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	W0310 21:03:58.337641   12032 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	W0310 21:03:58.352231   12032 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	W0310 21:03:58.366597   12032 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	W0310 21:03:58.608002   12032 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	W0310 21:03:58.712712   12032 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:03:59.281624   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:00.462737   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:02.011213   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:03.224568   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:04.673441   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:05.711965   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:07.146307   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:08.159125   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:09.901482   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:11.297649   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000cc4257}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:12.725373   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001d957b7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:14.199370   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00087f717}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:15.886346   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0006c6357}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:17.142255   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00107ee07}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:18.632693   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0006190c7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:19.655810   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0013ebef7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:21.162895   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000cd7907}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:23.828149   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0006a7f37}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:25.375614   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0005893a7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:26.412101   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001eb1107}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:27.694373   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0013f2c67}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:28.727151   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001cb6a87}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:29.761421   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00157f637}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:30.843858   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00167c2a7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:32.185316   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0013ebe27}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:33.217815   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001a03007}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:34.626249   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0018f6ef7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:35.698571   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001a1b917}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:37.327479   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000b4e9a7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:38.657310   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00035fcc7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:39.710246   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00160d327}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:40.799565   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001102b77}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:42.116085   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000cc5737}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:43.139837   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000d8bbc7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:44.734084   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001d7a997}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:46.135347   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001fcc8d7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:47.164061   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001c929b7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:48.183421   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001a1ba77}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:49.687964   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001d94377}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:51.152970   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00160cd57}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:52.166705   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000e66567}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:53.175086   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00087f447}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:54.621914   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000618aa7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:55.685576   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000d7ddd7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:57.192434   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001c916d7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:58.215086   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001f02667}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:59.566506   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001f6e8d7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:00.660082   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0018f6397}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:01.804768   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0015e6927}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:03.243175   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00035edc7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:04.714792   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001eb0b17}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:05.745767   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001236f07}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:06.772914   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001c12b77}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:07.887683   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00112f387}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:10.307723   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000d4c6d7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:11.717255   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000782d67}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:13.144646   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001d312b7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:14.174311   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001bda317}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:15.599964   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0006c7297}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:16.673525   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001c938b7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:17.708935   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000588ca7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:18.758156   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001405de7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:20.231765   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0017ad7f7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:21.298204   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001eb1657}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:22.689800   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001103627}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:23.702400   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001cb40f7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:25.156008   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00198b207}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:26.228479   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001d7afa7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:28.131994   12032 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992
	I0310 21:05:28.140629   12032 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992
	I0310 21:05:28.285053   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0013eabe7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:29.735547   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc002052427}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:31.104482   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc002060a17}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:32.565218   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001bf5c57}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:33.658143   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0005886c7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:35.158511   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000ecfba7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:36.359733   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0017ac257}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:37.376526   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001cb7f37}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:38.740266   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00087e157}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:40.213266   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00198bcf7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:41.666581   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000d8a5b7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:43.132985   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000cc54e7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:44.228071   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001e433e7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:45.676147   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0013ea9c7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:46.746860   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0006c72d7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:48.256725   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00035fb67}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:49.399104   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Wait
ing:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001fcdf07}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:50.651294   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001eb0557}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:51.729455   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00160dc17}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:53.716196   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00107e147}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:55.239233   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00167de47}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:57.071677   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001cdf867}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:58.312582   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001fc57b7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:59.810322   12032 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3m16.0551092s)
	I0310 21:05:59.811035   12032 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210126212539-5172: (3m0.2762886s)
	I0310 21:05:59.811660   12032 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210114204234-6692: (3m0.3376428s)
	I0310 21:05:59.811660   12032 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210212145109-352: (3m0.3035778s)
	I0310 21:05:59.811964   12032 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210115191024-3516: (3m0.2659106s)
	I0310 21:05:59.811964   12032 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210310083645-5040: (3m0.2762005s)
	I0310 21:05:59.811964   12032 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120022529-1140: (3m0.2492014s)
	I0310 21:05:59.812371   12032 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3m20.2178835s)
	I0310 21:05:59.812371   12032 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210304002630-1156: (3m0.2546224s)
	I0310 21:05:59.812371   12032 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210115023213-8464: (3m0.2556206s)
	I0310 21:05:59.812371   12032 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210213143925-7440: (3m0.2466205s)
	I0310 21:05:59.812820   12032 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210301195830-5700: (3m0.0446189s)
	I0310 21:05:59.813140   12032 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210310191609-6496: (3m0.215262s)
	I0310 21:05:59.813140   12032 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210219145454-9520: (2m59.829295s)
	I0310 21:05:59.813792   12032 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210303214129-4588: (3m0.2089181s)
	I0310 21:05:59.813140   12032 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088: (2m59.4523189s)
	I0310 21:05:59.818285   12032 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088': No such file or directory
	I0310 21:05:59.813792   12032 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210225231842-5736: (3m0.1647848s)
	I0310 21:05:59.813792   12032 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210224014800-800: (3m0.2229173s)
	I0310 21:05:59.818685   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088 (4096 bytes)
	I0310 21:05:59.814144   12032 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552: (2m49.5519672s)
	I0310 21:05:59.814144   12032 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210220004129-7452: (3m0.0910971s)
	I0310 21:05:59.814144   12032 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352: (2m50.3156085s)
	I0310 21:05:59.814144   12032 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920: (2m50.053459s)
	I0310 21:05:59.814539   12032 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210119220838-6552: (3m0.1645485s)
	I0310 21:05:59.814539   12032 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210219220622-3920: (2m59.7266021s)
	I0310 21:05:59.814539   12032 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232: (2m50.3414135s)
	I0310 21:05:59.814978   12032 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210304184021-4052: (2m59.933687s)
	I0310 21:05:59.814978   12032 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052: (2m49.5725339s)
	I0310 21:05:59.814978   12032 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452: (2m49.9115936s)
	I0310 21:05:59.815365   12032 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210309234032-4944: (3m0.1179576s)
	I0310 21:05:59.815365   12032 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120231122-7024: (2m59.97516s)
	I0310 21:05:59.815642   12032 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056: (2m49.8500531s)
	I0310 21:05:59.815642   12032 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736: (2m49.5731978s)
	I0310 21:05:59.815958   12032 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156: (2m49.6189834s)
	I0310 21:05:59.816211   12032 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372: (2m49.6837636s)
	I0310 21:05:59.816211   12032 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856: (2m59.3373266s)
	I0310 21:05:59.816461   12032 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984: (2m59.2307415s)
	I0310 21:05:59.816461   12032 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692: (2m49.5740163s)
	I0310 21:05:59.816461   12032 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748: (2m58.7974021s)
	I0310 21:05:59.816461   12032 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140: (2m49.5740163s)
	I0310 21:05:59.816461   12032 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944: (2m49.7640385s)
	I0310 21:05:59.816863   12032 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210128021318-232: (3m0.1194561s)
	I0310 21:05:59.816863   12032 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210308233820-5396: (2m59.9755948s)
	I0310 21:05:59.816863   12032 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800: (2m49.6878146s)
	I0310 21:05:59.816863   12032 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024: (2m49.9448879s)
	I0310 21:05:59.817258   12032 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172: (2m49.5748135s)
	I0310 21:05:59.817258   12032 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440: (2m49.5748135s)
	I0310 21:05:59.817258   12032 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700: (2m49.5768362s)
	I0310 21:05:59.819463   12032 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552': No such file or directory
	I0310 21:05:59.819463   12032 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352': No such file or directory
	I0310 21:05:59.819463   12032 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856': No such file or directory
	I0310 21:05:59.820051   12032 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736': No such file or directory
	I0310 21:05:59.820051   12032 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056': No such file or directory
	I0310 21:05:59.820051   12032 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232': No such file or directory
	I0310 21:05:59.820553   12032 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372': No such file or directory
	I0310 21:05:59.820553   12032 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140': No such file or directory
	I0310 21:05:59.820553   12032 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984': No such file or directory
	I0310 21:05:59.820553   12032 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944': No such file or directory
	I0310 21:05:59.820553   12032 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920': No such file or directory
	I0310 21:05:59.820553   12032 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692': No such file or directory
	I0310 21:05:59.820553   12032 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748': No such file or directory
	I0310 21:05:59.820553   12032 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452': No such file or directory
	I0310 21:05:59.820553   12032 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800': No such file or directory
	I0310 21:05:59.820553   12032 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156': No such file or directory
	I0310 21:05:59.820553   12032 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052': No such file or directory
	I0310 21:05:59.822478   12032 out.go:129] * Enabled addons: default-storageclass, storage-provisioner
	I0310 21:05:59.822777   12032 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440': No such file or directory
	I0310 21:05:59.822777   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552 (4096 bytes)
	I0310 21:05:59.823320   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800 (4096 bytes)
	I0310 21:05:59.822777   12032 addons.go:383] enableAddons completed in 3m33.5714395s
	I0310 21:05:59.822777   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856 (4096 bytes)
	I0310 21:05:59.822777   12032 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700': No such file or directory
	I0310 21:05:59.822777   12032 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172': No such file or directory
	I0310 21:05:59.822478   12032 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024': No such file or directory
	I0310 21:05:59.824064   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700 (4096 bytes)
	I0310 21:05:59.822777   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352 (4096 bytes)
	I0310 21:05:59.822777   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944 (4096 bytes)
	I0310 21:05:59.822777   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920 (4096 bytes)
	I0310 21:05:59.822777   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440 (4096 bytes)
	I0310 21:05:59.822777   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156 (4096 bytes)
	I0310 21:05:59.822777   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052 (4096 bytes)
	I0310 21:05:59.822777   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452 (4096 bytes)
	I0310 21:05:59.822777   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736 (4096 bytes)
	I0310 21:05:59.822777   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692 (4096 bytes)
	I0310 21:05:59.822777   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372 (4096 bytes)
	I0310 21:05:59.822777   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748 (4096 bytes)
	I0310 21:05:59.823320   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140 (4096 bytes)
	I0310 21:05:59.823320   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984 (4096 bytes)
	I0310 21:05:59.825425   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172 (4096 bytes)
	I0310 21:05:59.823320   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056 (4096 bytes)
	I0310 21:05:59.823320   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232 (4096 bytes)
	I0310 21:05:59.824215   12032 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024 (4096 bytes)
	W0310 21:05:59.946313   12032 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:05:59.946313   12032 retry.go:31] will retry after 296.705768ms: ssh: rejected: connect failed (open failed)
	W0310 21:05:59.946313   12032 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:05:59.946313   12032 retry.go:31] will retry after 141.409254ms: ssh: rejected: connect failed (open failed)
	W0310 21:05:59.946904   12032 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:05:59.946904   12032 retry.go:31] will retry after 164.129813ms: ssh: rejected: connect failed (open failed)
	W0310 21:05:59.946904   12032 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:05:59.946904   12032 retry.go:31] will retry after 149.242379ms: ssh: rejected: connect failed (open failed)
	W0310 21:05:59.946904   12032 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:05:59.946904   12032 retry.go:31] will retry after 200.227965ms: ssh: rejected: connect failed (open failed)
	W0310 21:05:59.946904   12032 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:05:59.946904   12032 retry.go:31] will retry after 253.803157ms: ssh: rejected: connect failed (open failed)
	W0310 21:05:59.947252   12032 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:05:59.947252   12032 retry.go:31] will retry after 328.409991ms: ssh: rejected: connect failed (open failed)
	W0310 21:05:59.946904   12032 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:05:59.947252   12032 retry.go:31] will retry after 178.565968ms: ssh: rejected: connect failed (open failed)
	W0310 21:05:59.947252   12032 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:05:59.947561   12032 retry.go:31] will retry after 220.164297ms: ssh: rejected: connect failed (open failed)
	W0310 21:05:59.947561   12032 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:05:59.947718   12032 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:05:59.947718   12032 retry.go:31] will retry after 242.222461ms: ssh: rejected: connect failed (open failed)
	I0310 21:05:59.947718   12032 retry.go:31] will retry after 204.514543ms: ssh: rejected: connect failed (open failed)
	W0310 21:05:59.947718   12032 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:05:59.947718   12032 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:05:59.947718   12032 retry.go:31] will retry after 198.275464ms: ssh: rejected: connect failed (open failed)
	I0310 21:05:59.947718   12032 retry.go:31] will retry after 195.758538ms: ssh: rejected: connect failed (open failed)
	I0310 21:06:00.024751   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0018d76f7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:06:00.110052   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:06:00.111105   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:06:00.119934   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:06:00.134754   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:06:00.169533   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:06:00.175685   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:06:00.216441   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:06:00.216441   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:06:00.217734   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:06:00.218900   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:06:00.235273   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:06:00.269371   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:06:00.308659   12032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:06:01.142330   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.0075784s)
	I0310 21:06:01.142330   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:06:01.163314   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.053264s)
	I0310 21:06:01.164221   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:06:01.191882   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.0807799s)
	I0310 21:06:01.192784   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:06:01.220374   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.0444497s)
	I0310 21:06:01.220675   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:06:01.222681   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.1027496s)
	I0310 21:06:01.222885   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:06:01.305601   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.0863207s)
	I0310 21:06:01.305730   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:06:01.323945   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.1074089s)
	I0310 21:06:01.324457   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:06:01.330712   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.0954415s)
	I0310 21:06:01.331222   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:06:01.367960   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.1515216s)
	I0310 21:06:01.368271   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:06:01.390500   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.2209693s)
	I0310 21:06:01.391478   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:06:01.394700   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001a1bc27}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:06:01.400531   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.1309s)
	I0310 21:06:01.400857   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:06:01.427550   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.2092463s)
	I0310 21:06:01.427550   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:06:01.453339   12032 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496: (1.1445574s)
	I0310 21:06:01.453799   12032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55148 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:06:02.702307   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0021858f7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:06:05.137862   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001fcd6c7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:06:06.367863   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0021368d7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	W0310 21:06:07.257674   12032 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:06:07.801729   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001faa9c7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:06:08.914566   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0019c89a7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:06:10.288131   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001a1bef7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:06:11.787019   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00077fd97}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:06:12.792167   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001eb0167}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:06:14.250057   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000589da7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:06:16.779461   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0019ba3a7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:06:18.276981   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00057f867}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:06:19.754408   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00107f4d7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:06:21.536063   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001ba46e7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:06:22.741011   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001a0f657}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:06:24.219641   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc002122a77}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:06:25.873474   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001cb75d7}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:06:32.857080   12032 pod_ready.go:102] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:09 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:31 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.97 PodIP: PodIPs:[] StartTime:2021-03-10 21:04:09 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0020b1347}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:06:33.354336   12032 pod_ready.go:62] duration metric: took 4m3.3623829s to run WaitForPodReadyByLabel for pod with "kube-dns" label in "kube-system" namespace ...
	I0310 21:06:33.354649   12032 pod_ready.go:59] waiting 6m0s for pod with "etcd" label in "kube-system" namespace to be Ready ...
	I0310 21:06:33.597838   12032 pod_ready.go:97] pod "etcd-embed-certs-20210310205017-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:32 +0000 GMT Reason: Message:}
	I0310 21:06:33.597838   12032 pod_ready.go:62] duration metric: took 243.1897ms to run WaitForPodReadyByLabel for pod with "etcd" label in "kube-system" namespace ...
	I0310 21:06:33.597838   12032 pod_ready.go:59] waiting 6m0s for pod with "kube-apiserver" label in "kube-system" namespace to be Ready ...
	I0310 21:06:34.097926   12032 pod_ready.go:97] pod "kube-apiserver-embed-certs-20210310205017-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:01:39 +0000 GMT Reason: Message:}
	I0310 21:06:34.097926   12032 pod_ready.go:62] duration metric: took 499.6147ms to run WaitForPodReadyByLabel for pod with "kube-apiserver" label in "kube-system" namespace ...
	I0310 21:06:34.097926   12032 pod_ready.go:59] waiting 6m0s for pod with "kube-controller-manager" label in "kube-system" namespace to be Ready ...
	I0310 21:06:34.270616   12032 pod_ready.go:97] pod "kube-controller-manager-embed-certs-20210310205017-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:21 +0000 GMT Reason: Message:}
	I0310 21:06:34.271612   12032 pod_ready.go:62] duration metric: took 173.6861ms to run WaitForPodReadyByLabel for pod with "kube-controller-manager" label in "kube-system" namespace ...
	I0310 21:06:34.271612   12032 pod_ready.go:59] waiting 6m0s for pod with "kube-proxy" label in "kube-system" namespace to be Ready ...
	I0310 21:06:34.429146   12032 pod_ready.go:97] pod "kube-proxy-p6jnj" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:05:51 +0000 GMT Reason: Message:}
	I0310 21:06:34.429146   12032 pod_ready.go:62] duration metric: took 157.5344ms to run WaitForPodReadyByLabel for pod with "kube-proxy" label in "kube-system" namespace ...
	I0310 21:06:34.429146   12032 pod_ready.go:59] waiting 6m0s for pod with "kube-scheduler" label in "kube-system" namespace to be Ready ...
	I0310 21:06:34.514436   12032 pod_ready.go:97] pod "kube-scheduler-embed-certs-20210310205017-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:01:35 +0000 GMT Reason: Message:}
	I0310 21:06:34.514436   12032 pod_ready.go:62] duration metric: took 85.2909ms to run WaitForPodReadyByLabel for pod with "kube-scheduler" label in "kube-system" namespace ...
	I0310 21:06:34.514436   12032 pod_ready.go:39] duration metric: took 4m4.522486s for extra waiting for kube-system core pods to be Ready ...
	I0310 21:06:34.518300   12032 out.go:129] 
	W0310 21:06:34.518535   12032 out.go:191] X Exiting due to GUEST_START: wait 6m0s for node: extra waiting: "kube-dns": "wait pod Ready: timed out waiting for the condition"
	X Exiting due to GUEST_START: wait 6m0s for node: extra waiting: "kube-dns": "wait pod Ready: timed out waiting for the condition"
	W0310 21:06:34.518535   12032 out.go:191] * 
	* 
	W0310 21:06:34.518535   12032 out.go:191] * If the above advice does not help, please let us know: 
	* If the above advice does not help, please let us know: 
	W0310 21:06:34.519025   12032 out.go:191]   - https://github.com/kubernetes/minikube/issues/new/choose
	  - https://github.com/kubernetes/minikube/issues/new/choose
	I0310 21:06:34.524383   12032 out.go:129] 

** /stderr **
start_stop_delete_test.go:157: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p embed-certs-20210310205017-6496 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.20.2": exit status 80
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:226: ======>  post-mortem[TestStartStop/group/embed-certs/serial/FirstStart]: docker inspect <======
helpers_test.go:227: (dbg) Run:  docker inspect embed-certs-20210310205017-6496
helpers_test.go:231: (dbg) docker inspect embed-certs-20210310205017-6496:

-- stdout --
	[
	    {
	        "Id": "34279bb86a24cccbd741b3dff476151db424d0a279c3695780536f691f5042eb",
	        "Created": "2021-03-10T20:50:38.3818436Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 228187,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-03-10T20:50:47.7608175Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a776c544501ab7f8d55c0f9d8df39bc284df5e744ef1ab4fa59bbd753c98d5f6",
	        "ResolvConfPath": "/var/lib/docker/containers/34279bb86a24cccbd741b3dff476151db424d0a279c3695780536f691f5042eb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/34279bb86a24cccbd741b3dff476151db424d0a279c3695780536f691f5042eb/hostname",
	        "HostsPath": "/var/lib/docker/containers/34279bb86a24cccbd741b3dff476151db424d0a279c3695780536f691f5042eb/hosts",
	        "LogPath": "/var/lib/docker/containers/34279bb86a24cccbd741b3dff476151db424d0a279c3695780536f691f5042eb/34279bb86a24cccbd741b3dff476151db424d0a279c3695780536f691f5042eb-json.log",
	        "Name": "/embed-certs-20210310205017-6496",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20210310205017-6496:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20210310205017-6496",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/dcfd102322cab94eedae4c6a78b3d5341d2b0fef2ffb51299de38c2755ea8f34-init/diff:/var/lib/docker/overlay2/e5312d15a8a915e93a04215cc746ab0f3ee777399eec34d39c62e1159ded6475/diff:/var/lib/docker/overlay2/7a0a882052451678339ecb9d7440ec6d5af2189761df37cbde268f01c77df460/diff:/var/lib/docker/overlay2/746bde091f748e661ba6c95c88447941058b3ed556ae8093cbbb4e946b8cc8fc/diff:/var/lib/docker/overlay2/fc816087ea38e2594891ac87d2dada91401ed5005ffde006568049c9be7a9cb3/diff:/var/lib/docker/overlay2/c938fe5d14accfe7fb1cfd038f5f0c36e7c4e36df3f270be6928a759dfc0318c/diff:/var/lib/docker/overlay2/79f1d61f3af3e525b3f9a25c7bdaddec8275b7d0abab23c045e084119ae755dc/diff:/var/lib/docker/overlay2/90cf1530750ebebdcb04a136dc1eff0ae9c5f7d0f826d2942e3bc67cc0d42206/diff:/var/lib/docker/overlay2/c4edb0a62bb52174a4a17530aa58139aaca1b2f67bef36b9ac59af93c93e2c94/diff:/var/lib/docker/overlay2/684c18bb05910b09352e8708238b2fa6512de91e323e59bd78eacb12954baeb1/diff:/var/lib/docker/overlay2/2e3b944d36fea0e106f5f8885b981d040914c224a656b6480e4ccf00c624527c/diff:/var/lib/docker/overlay2/87489103e2d5dffa2fc32cae802418fe57adcbe87d86e2828e51be8620ef72d6/diff:/var/lib/docker/overlay2/1ceb557df677600f55a42e739f85c2aaa6ce2a1311fb93d5cc655d3e5c25ad62/diff:/var/lib/docker/overlay2/a49e38adf04466e822fa1b2fef7a5de41bb43b706f64d6d440f7e9f95f86634d/diff:/var/lib/docker/overlay2/9dc51530b3628075c33d53a96d8db831d206f65190ca62a4730d525c9d2433bc/diff:/var/lib/docker/overlay2/53f43d443cc953bcbb3b9fe305be45d40de2233dd6a1e94a6e0120c143df01b9/diff:/var/lib/docker/overlay2/5449e801ebacbe613538db4de658f2419f742a0327f7c0f5079ee525f64c2f45/diff:/var/lib/docker/overlay2/ea06a62429770eade2e9df8b04495201586fe6a867b2d3f827db06031f237171/diff:/var/lib/docker/overlay2/d6c5e6e0bb04112b08a45e5a42f364eaa6a1d4d0384b8833c6f268a32b51f9fd/diff:/var/lib/docker/overlay2/6ccb699e5aa7954f17536e35273361a64a06f22e3951b5adad1acd3645302d27/diff:/var/lib/docker/overlay2/004fc7885cd3ddf4f532a6047a393b87947996f738f36ee3e048130b82676264/diff:/var/lib/docker/overlay2/28ff0102771d8fb3c2957c482cc5dde1a6d041144f1be78d385fd4a305b89048/diff:/var/lib/docker/overlay2/7cc8b00a8717390d3cf217bc23ebf0a54386c4dda751f8efe67a113c5f724000/diff:/var/lib/docker/overlay2/4f20f5888dc26fc4b177fad4ffbb2c9e5094bbe36dffc15530b13ee98b5ee0de/diff:/var/lib/docker/overlay2/5070ba0e8a3df0c0cb4c34bb528c4e1aa4cf13d689f2bbbb0a4033d021c3be55/diff:/var/lib/docker/overlay2/85fb29f9d3603bf7f2eb144a096a9fa432e57ecf575b0c26cc8e06a79e791202/diff:/var/lib/docker/overlay2/27786674974d44dc21a18719c8c334da44a78f825923f5e86233f5acb0fb9a6b/diff:/var/lib/docker/overlay2/6161fd89f27732c46f547c0e38ff9f16ab0dded81cef66938df3935f21afad40/diff:/var/lib/docker/overlay2/222f85b8439a41d62b9939a4060303543930a89e81bc2fbb3f5211b3bcc73501/diff:/var/lib/docker/overlay2/0ab769dc143c949b9367c48721351c8571d87199847b23acedce0134a8cf4d0d/diff:/var/lib/docker/overlay2/a3d1b5f3c1ccf55415e08e306d69739f2d6a4346a4b20ff9273ac1417c146a72/diff:/var/lib/docker/overlay2/a6111408a4deb25c6259bdabf49b0b77a4ae38a1c9c76c9dd238ab00e4377edd/diff:/var/lib/docker/overlay2/beb22abaee6a1cc2ce6935aa31a8ea6aa14ba428d35b5c2423d1d9e4b5b1e88e/diff:/var/lib/docker/overlay2/0c228b1d0cd4cc1d3df10a7397af2a91846bfbad745d623822dc200ff7ddd4f7/diff:/var/lib/docker/overlay2/3c7f265116e8b14ac4c171583da8e68ad42c63e9faa735c10d5b01c2889c03d8/diff:/var/lib/docker/overlay2/85c9b77072d5575bcf4bc334d0e85cccec247e2ff21bc8a181ee7c1411481a92/diff:/var/lib/docker/overlay2/1b4797f63f1bf0acad8cd18715c737239cbce844810baf1378b65224c01957ff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dcfd102322cab94eedae4c6a78b3d5341d2b0fef2ffb51299de38c2755ea8f34/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dcfd102322cab94eedae4c6a78b3d5341d2b0fef2ffb51299de38c2755ea8f34/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dcfd102322cab94eedae4c6a78b3d5341d2b0fef2ffb51299de38c2755ea8f34/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20210310205017-6496",
	                "Source": "/var/lib/docker/volumes/embed-certs-20210310205017-6496/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20210310205017-6496",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20210310205017-6496",
	                "name.minikube.sigs.k8s.io": "embed-certs-20210310205017-6496",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bf53fe65905934b243584b629c4cf7dd403cab00d217108ad97f972d6b14c8a9",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55147"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55144"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55146"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55145"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/bf53fe659059",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20210310205017-6496": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.97"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "34279bb86a24",
	                        "embed-certs-20210310205017-6496"
	                    ],
	                    "NetworkID": "beda76989846335b2108542b8dbd47960c451d6da6c3d41d5b12bd1840f4b292",
	                    "EndpointID": "082e9aad9110ea9568946da7c53938f3dac2fdb1eecaf8f39981d3f46a5b0b4c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.97",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:61",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20210310205017-6496 -n embed-certs-20210310205017-6496
helpers_test.go:235: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20210310205017-6496 -n embed-certs-20210310205017-6496: (28.1115905s)
helpers_test.go:240: <<< TestStartStop/group/embed-certs/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:241: ======>  post-mortem[TestStartStop/group/embed-certs/serial/FirstStart]: minikube logs <======
helpers_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe -p embed-certs-20210310205017-6496 logs -n 25

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
helpers_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe -p embed-certs-20210310205017-6496 logs -n 25: (1m28.3030344s)
helpers_test.go:248: TestStartStop/group/embed-certs/serial/FirstStart logs: 
-- stdout --
	* ==> Docker <==
	* -- Logs begin at Wed 2021-03-10 20:50:54 UTC, end at Wed 2021-03-10 21:07:40 UTC. --
	* Mar 10 20:55:08 embed-certs-20210310205017-6496 dockerd[467]: time="2021-03-10T20:55:08.697702200Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	* Mar 10 20:55:08 embed-certs-20210310205017-6496 dockerd[467]: time="2021-03-10T20:55:08.710218800Z" level=info msg="Daemon shutdown complete"
	* Mar 10 20:55:09 embed-certs-20210310205017-6496 systemd[1]: docker.service: Succeeded.
	* Mar 10 20:55:09 embed-certs-20210310205017-6496 systemd[1]: Stopped Docker Application Container Engine.
	* Mar 10 20:55:09 embed-certs-20210310205017-6496 systemd[1]: Starting Docker Application Container Engine...
	* Mar 10 20:55:10 embed-certs-20210310205017-6496 dockerd[744]: time="2021-03-10T20:55:10.530246100Z" level=info msg="Starting up"
	* Mar 10 20:55:10 embed-certs-20210310205017-6496 dockerd[744]: time="2021-03-10T20:55:10.746410700Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Mar 10 20:55:10 embed-certs-20210310205017-6496 dockerd[744]: time="2021-03-10T20:55:10.746523900Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Mar 10 20:55:10 embed-certs-20210310205017-6496 dockerd[744]: time="2021-03-10T20:55:10.746599600Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	* Mar 10 20:55:10 embed-certs-20210310205017-6496 dockerd[744]: time="2021-03-10T20:55:10.746642000Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Mar 10 20:55:10 embed-certs-20210310205017-6496 dockerd[744]: time="2021-03-10T20:55:10.847531500Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Mar 10 20:55:10 embed-certs-20210310205017-6496 dockerd[744]: time="2021-03-10T20:55:10.849265000Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Mar 10 20:55:10 embed-certs-20210310205017-6496 dockerd[744]: time="2021-03-10T20:55:10.849488200Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	* Mar 10 20:55:10 embed-certs-20210310205017-6496 dockerd[744]: time="2021-03-10T20:55:10.849544600Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Mar 10 20:55:11 embed-certs-20210310205017-6496 dockerd[744]: time="2021-03-10T20:55:11.698729300Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	* Mar 10 20:55:11 embed-certs-20210310205017-6496 dockerd[744]: time="2021-03-10T20:55:11.935610200Z" level=info msg="Loading containers: start."
	* Mar 10 20:55:16 embed-certs-20210310205017-6496 dockerd[744]: time="2021-03-10T20:55:16.444542000Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	* Mar 10 20:55:17 embed-certs-20210310205017-6496 dockerd[744]: time="2021-03-10T20:55:17.640412000Z" level=info msg="Loading containers: done."
	* Mar 10 20:55:18 embed-certs-20210310205017-6496 dockerd[744]: time="2021-03-10T20:55:18.831001400Z" level=info msg="Docker daemon" commit=46229ca graphdriver(s)=overlay2 version=20.10.3
	* Mar 10 20:55:18 embed-certs-20210310205017-6496 dockerd[744]: time="2021-03-10T20:55:18.833110800Z" level=info msg="Daemon has completed initialization"
	* Mar 10 20:55:19 embed-certs-20210310205017-6496 systemd[1]: Started Docker Application Container Engine.
	* Mar 10 20:55:19 embed-certs-20210310205017-6496 dockerd[744]: time="2021-03-10T20:55:19.934587400Z" level=info msg="API listen on [::]:2376"
	* Mar 10 20:55:20 embed-certs-20210310205017-6496 dockerd[744]: time="2021-03-10T20:55:20.118475600Z" level=info msg="API listen on /var/run/docker.sock"
	* Mar 10 20:58:54 embed-certs-20210310205017-6496 dockerd[744]: time="2021-03-10T20:58:54.831623000Z" level=info msg="ignoring event" container=82f094950f4078c11083c0c3804cac9794e32abd453cd603744b6a0f02975e7c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:00:22 embed-certs-20210310205017-6496 dockerd[744]: time="2021-03-10T21:00:22.714894700Z" level=info msg="ignoring event" container=a293b95c5f11887372a1f1a4c693e0ad7e9a4f7e29dbb1ca0f1f189880ff8bff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	* 765eeaf3ce811       85069258b98ac       26 seconds ago      Running             storage-provisioner       0                   13e03f4b17759
	* 6402c6e4e6d47       bfe3a36ebd252       2 minutes ago       Running             coredns                   0                   233e14c5554ff
	* 4913edb022394       43154ddb57a83       2 minutes ago       Running             kube-proxy                0                   996876ed91c14
	* aae206460c764       a27166429d98e       6 minutes ago       Running             kube-controller-manager   3                   62844ce92fdb2
	* 78c1a80b774ce       a27166429d98e       6 minutes ago       Created             kube-controller-manager   2                   62844ce92fdb2
	* 55e5c1ff0487b       a8c2fdb8bf76e       9 minutes ago       Running             kube-apiserver            0                   6579ac6125a27
	* 4aeafe69b0263       ed2c44fbdd78b       9 minutes ago       Running             kube-scheduler            0                   208e864728a39
	* efd3086c1be70       0369cf4303ffd       10 minutes ago      Running             etcd                      0                   2f3e9943b2674
	* 
	* ==> coredns [6402c6e4e6d4] <==
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	* CoreDNS-1.7.0
	* linux/amd64, go1.14.4, f59c03d
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20210310205017-6496
	* Roles:              control-plane,master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=embed-certs-20210310205017-6496
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=4d52d607da107e3e4541e40b81376520ee87d4c2
	*                     minikube.k8s.io/name=embed-certs-20210310205017-6496
	*                     minikube.k8s.io/updated_at=2021_03_10T21_00_55_0700
	*                     minikube.k8s.io/version=v1.18.1
	*                     node-role.kubernetes.io/control-plane=
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Wed, 10 Mar 2021 20:59:25 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  embed-certs-20210310205017-6496
	*   AcquireTime:     <unset>
	*   RenewTime:       Wed, 10 Mar 2021 21:07:52 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Wed, 10 Mar 2021 21:07:16 +0000   Wed, 10 Mar 2021 20:59:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Wed, 10 Mar 2021 21:07:16 +0000   Wed, 10 Mar 2021 20:59:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Wed, 10 Mar 2021 21:07:16 +0000   Wed, 10 Mar 2021 20:59:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Wed, 10 Mar 2021 21:07:16 +0000   Wed, 10 Mar 2021 21:01:43 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  192.168.49.97
	*   Hostname:    embed-certs-20210310205017-6496
	* Capacity:
	*   cpu:                4
	*   ephemeral-storage:  65792556Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             20481980Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                4
	*   ephemeral-storage:  65792556Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             20481980Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 84fb46bd39d2483a97ab4430ee4a5e3a
	*   System UUID:                788297b8-7aee-4d9f-9286-5da206103441
	*   Boot ID:                    1e43cb90-c73a-415b-9855-33dabbdc5a83
	*   Kernel Version:             4.19.121-linuxkit
	*   OS Image:                   Ubuntu 20.04.1 LTS
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://20.10.3
	*   Kubelet Version:            v1.20.2
	*   Kube-Proxy Version:         v1.20.2
	* PodCIDR:                      10.244.0.0/24
	* PodCIDRs:                     10.244.0.0/24
	* Non-terminated Pods:          (7 in total)
	*   Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	*   kube-system                 coredns-74ff55c5b-4w6mn                                    100m (2%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m27s
	*   kube-system                 etcd-embed-certs-20210310205017-6496                       100m (2%)     0 (0%)      100Mi (0%)       0 (0%)         8m3s
	*   kube-system                 kube-apiserver-embed-certs-20210310205017-6496             250m (6%)     0 (0%)      0 (0%)           0 (0%)         7m18s
	*   kube-system                 kube-controller-manager-embed-certs-20210310205017-6496    200m (5%)     0 (0%)      0 (0%)           0 (0%)         8m17s
	*   kube-system                 kube-proxy-p6jnj                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	*   kube-system                 kube-scheduler-embed-certs-20210310205017-6496             100m (2%)     0 (0%)      0 (0%)           0 (0%)         7m18s
	*   kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests    Limits
	*   --------           --------    ------
	*   cpu                750m (18%)  0 (0%)
	*   memory             170Mi (0%)  170Mi (0%)
	*   ephemeral-storage  100Mi (0%)  0 (0%)
	*   hugepages-1Gi      0 (0%)      0 (0%)
	*   hugepages-2Mi      0 (0%)      0 (0%)
	* Events:
	*   Type    Reason                   Age    From        Message
	*   ----    ------                   ----   ----        -------
	*   Normal  Starting                 6m50s  kubelet     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  6m45s  kubelet     Node embed-certs-20210310205017-6496 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    6m45s  kubelet     Node embed-certs-20210310205017-6496 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     6m45s  kubelet     Node embed-certs-20210310205017-6496 status is now: NodeHasSufficientPID
	*   Normal  NodeNotReady             6m42s  kubelet     Node embed-certs-20210310205017-6496 status is now: NodeNotReady
	*   Normal  NodeAllocatableEnforced  6m27s  kubelet     Updated Node Allocatable limit across pods
	*   Normal  NodeReady                6m13s  kubelet     Node embed-certs-20210310205017-6496 status is now: NodeReady
	*   Normal  Starting                 2m11s  kube-proxy  Starting kube-proxy.
	* 
	* ==> dmesg <==
	* [  +0.000006]  __hrtimer_run_queues+0x117/0x1c4
	* [  +0.000004]  ? ktime_get_update_offsets_now+0x36/0x95
	* [  +0.000002]  hrtimer_interrupt+0x92/0x165
	* [  +0.000004]  hv_stimer0_isr+0x20/0x2d
	* [  +0.000008]  hv_stimer0_vector_handler+0x3b/0x57
	* [  +0.000010]  hv_stimer0_callback_vector+0xf/0x20
	* [  +0.000001]  </IRQ>
	* [  +0.000002] RIP: 0010:native_safe_halt+0x7/0x8
	* [  +0.000002] Code: 60 02 df f0 83 44 24 fc 00 48 8b 00 a8 08 74 0b 65 81 25 dd ce 6f 71 ff ff ff 7f c3 e8 ce e6 72 ff f4 c3 e8 c7 e6 72 ff fb f4 <c3> 0f 1f 44 00 00 53 e8 69 0e 82 ff 65 8b 35 83 64 6f 71 31 ff e8
	* [  +0.000001] RSP: 0018:ffffffff8f203eb0 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff12
	* [  +0.000002] RAX: ffffffff8e918b30 RBX: 0000000000000000 RCX: ffffffff8f253150
	* [  +0.000001] RDX: 000000000012167e RSI: 0000000000000000 RDI: 0000000000000001
	* [  +0.000001] RBP: 0000000000000000 R08: 00000066a1710248 R09: 0000006be2541d3e
	* [  +0.000001] R10: ffff9130ad802288 R11: 0000000000000000 R12: 0000000000000000
	* [  +0.000001] R13: ffffffff8f215780 R14: 00000000f6d76244 R15: 0000000000000000
	* [  +0.000002]  ? __sched_text_end+0x1/0x1
	* [  +0.000011]  default_idle+0x1b/0x2c
	* [  +0.000001]  do_idle+0xe5/0x216
	* [  +0.000003]  cpu_startup_entry+0x6f/0x71
	* [  +0.000003]  start_kernel+0x4f6/0x514
	* [  +0.000006]  secondary_startup_64+0xa4/0xb0
	* [  +0.000006] ---[ end trace 8aa9ce4b885e8e86 ]---
	* [ +25.977799] hrtimer: interrupt took 3356400 ns
	* [Mar10 19:08] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	* [Mar10 19:49] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	* 
	* ==> etcd [efd3086c1be7] <==
	* 2021-03-10 21:06:27.450681 W | etcdserver: read-only range request "key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" count_only:true " with result "range_response_count:0 size:5" took too long (178.9762ms) to execute
	* 2021-03-10 21:06:32.355989 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:644" took too long (109.9333ms) to execute
	* 2021-03-10 21:06:33.002937 W | etcdserver: read-only range request "key:\"/registry/masterleases/192.168.49.97\" " with result "range_response_count:0 size:5" took too long (111.1885ms) to execute
	* 2021-03-10 21:06:38.119290 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:421" took too long (262.5099ms) to execute
	* 2021-03-10 21:06:46.505962 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:06:48.110661 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:06:57.981112 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:06:59.782992 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" count_only:true " with result "range_response_count:0 size:7" took too long (364.6674ms) to execute
	* 2021-03-10 21:06:59.797066 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (267.9003ms) to execute
	* 2021-03-10 21:07:04.092068 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (218.955ms) to execute
	* 2021-03-10 21:07:06.340193 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:07:08.013959 W | etcdserver: request "header:<ID:10490704451955657012 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.49.97\" mod_revision:557 > success:<request_put:<key:\"/registry/masterleases/192.168.49.97\" value_size:68 lease:1267332415100881202 >> failure:<request_range:<key:\"/registry/masterleases/192.168.49.97\" > >>" with result "size:16" took too long (118.4491ms) to execute
	* 2021-03-10 21:07:08.020136 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (644.1885ms) to execute
	* 2021-03-10 21:07:08.028440 W | etcdserver: read-only range request "key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true " with result "range_response_count:0 size:7" took too long (717.4026ms) to execute
	* 2021-03-10 21:07:08.029166 W | etcdserver: read-only range request "key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" count_only:true " with result "range_response_count:0 size:5" took too long (133.1036ms) to execute
	* 2021-03-10 21:07:08.035828 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true " with result "range_response_count:0 size:5" took too long (321.1194ms) to execute
	* 2021-03-10 21:07:16.736075 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:07:17.375070 W | etcdserver: read-only range request "key:\"/registry/endpointslices/default/kubernetes\" " with result "range_response_count:1 size:485" took too long (143.6291ms) to execute
	* 2021-03-10 21:07:26.831403 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:07:36.491415 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:07:39.895298 W | etcdserver: request "header:<ID:10490704451955657135 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:544 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:3463 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >>" with result "size:16" took too long (121.5005ms) to execute
	* 2021-03-10 21:07:47.230908 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1129" took too long (127.7915ms) to execute
	* 2021-03-10 21:07:47.235663 W | etcdserver: request "header:<ID:10490704451955657165 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.49.97\" mod_revision:577 > success:<request_put:<key:\"/registry/masterleases/192.168.49.97\" value_size:68 lease:1267332415100881354 >> failure:<request_range:<key:\"/registry/masterleases/192.168.49.97\" > >>" with result "size:16" took too long (131.674ms) to execute
	* 2021-03-10 21:07:47.442036 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:07:56.509073 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 
	* ==> kernel <==
	*  21:08:02 up  2:08,  0 users,  load average: 143.95, 154.87, 142.67
	* Linux embed-certs-20210310205017-6496 4.19.121-linuxkit #1 SMP Tue Dec 1 17:50:32 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [55e5c1ff0487] <==
	* Trace[1556407861]: [514.6012ms] [514.6012ms] END
	* I0310 21:06:37.792898       1 trace.go:205] Trace[2130730473]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (10-Mar-2021 21:06:36.826) (total time: 965ms):
	* Trace[2130730473]: ---"Transaction prepared" 444ms (21:06:00.275)
	* Trace[2130730473]: ---"Transaction committed" 516ms (21:06:00.792)
	* Trace[2130730473]: [965.9674ms] [965.9674ms] END
	* I0310 21:06:43.127914       1 client.go:360] parsed scheme: "passthrough"
	* I0310 21:06:43.184613       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 21:06:43.184709       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* I0310 21:06:51.169126       1 trace.go:205] Trace[338138786]: "Patch" url:/api/v1/nodes/embed-certs-20210310205017-6496/status,user-agent:kubelet/v1.20.2 (linux/amd64) kubernetes/faecb19,client:192.168.49.97 (10-Mar-2021 21:06:50.011) (total time: 1114ms):
	* Trace[338138786]: ---"Recorded the audit event" 800ms (21:06:00.811)
	* Trace[338138786]: ---"Object stored in database" 194ms (21:06:00.057)
	* Trace[338138786]: [1.1147546s] [1.1147546s] END
	* I0310 21:07:08.064794       1 trace.go:205] Trace[43972570]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (10-Mar-2021 21:07:07.032) (total time: 1031ms):
	* Trace[43972570]: ---"Transaction prepared" 139ms (21:07:00.262)
	* Trace[43972570]: ---"Transaction committed" 802ms (21:07:00.064)
	* Trace[43972570]: [1.0319477s] [1.0319477s] END
	* I0310 21:07:19.694781       1 client.go:360] parsed scheme: "passthrough"
	* I0310 21:07:19.694860       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 21:07:19.694874       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* I0310 21:07:47.488514       1 trace.go:205] Trace[473092524]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (10-Mar-2021 21:07:46.903) (total time: 585ms):
	* Trace[473092524]: ---"Transaction committed" 537ms (21:07:00.488)
	* Trace[473092524]: [585.0063ms] [585.0063ms] END
	* I0310 21:08:02.108536       1 client.go:360] parsed scheme: "passthrough"
	* I0310 21:08:02.108679       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 21:08:02.108717       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* 
	* ==> kube-controller-manager [78c1a80b774c] <==
	* 
	* ==> kube-controller-manager [aae206460c76] <==
	* I0310 21:03:13.191325       1 shared_informer.go:247] Caches are synced for daemon sets 
	* I0310 21:03:13.223755       1 shared_informer.go:247] Caches are synced for GC 
	* I0310 21:03:13.255610       1 shared_informer.go:247] Caches are synced for disruption 
	* I0310 21:03:13.255654       1 disruption.go:339] Sending events to api server.
	* I0310 21:03:13.256004       1 shared_informer.go:247] Caches are synced for PVC protection 
	* I0310 21:03:13.259276       1 shared_informer.go:247] Caches are synced for attach detach 
	* I0310 21:03:13.259641       1 shared_informer.go:247] Caches are synced for job 
	* I0310 21:03:13.272132       1 shared_informer.go:247] Caches are synced for HPA 
	* I0310 21:03:13.272184       1 shared_informer.go:247] Caches are synced for persistent volume 
	* I0310 21:03:13.285233       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	* I0310 21:03:13.336506       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	* W0310 21:03:13.336689       1 node_lifecycle_controller.go:1044] Missing timestamp for Node embed-certs-20210310205017-6496. Assuming now as a timestamp.
	* I0310 21:03:13.368368       1 event.go:291] "Event occurred" object="embed-certs-20210310205017-6496" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node embed-certs-20210310205017-6496 event: Registered Node embed-certs-20210310205017-6496 in Controller"
	* I0310 21:03:13.369537       1 node_lifecycle_controller.go:1245] Controller detected that zone  is now in state Normal.
	* I0310 21:03:17.889485       1 range_allocator.go:373] Set node embed-certs-20210310205017-6496 PodCIDR to [10.244.0.0/24]
	* I0310 21:03:21.492995       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	* I0310 21:03:22.396282       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I0310 21:03:22.437917       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I0310 21:03:22.437962       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* I0310 21:03:23.086492       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 1"
	* E0310 21:03:23.788510       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	* E0310 21:03:23.884364       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	* I0310 21:03:27.652739       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-p6jnj"
	* I0310 21:03:29.949239       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-4w6mn"
	* E0310 21:03:31.610487       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	* 
	* ==> kube-proxy [4913edb02239] <==
	* I0310 21:05:44.319340       1 node.go:172] Successfully retrieved node IP: 192.168.49.97
	* I0310 21:05:44.322280       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.49.97), assume IPv4 operation
	* W0310 21:05:45.448045       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	* I0310 21:05:45.448225       1 server_others.go:185] Using iptables Proxier.
	* I0310 21:05:45.577717       1 server.go:650] Version: v1.20.2
	* I0310 21:05:45.586431       1 conntrack.go:52] Setting nf_conntrack_max to 131072
	* I0310 21:05:45.627151       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I0310 21:05:45.627674       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I0310 21:05:45.689461       1 config.go:315] Starting service config controller
	* I0310 21:05:45.689665       1 shared_informer.go:240] Waiting for caches to sync for service config
	* I0310 21:05:45.633250       1 config.go:224] Starting endpoint slice config controller
	* I0310 21:05:45.711942       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	* I0310 21:05:45.821612       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	* I0310 21:05:45.905248       1 shared_informer.go:247] Caches are synced for service config 
	* I0310 21:06:02.480102       1 trace.go:205] Trace[991622159]: "iptables restore" (10-Mar-2021 21:06:00.067) (total time: 2412ms):
	* Trace[991622159]: [2.4121989s] [2.4121989s] END
	* 
	* ==> kube-scheduler [4aeafe69b026] <==
	* E0310 20:59:31.891969       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E0310 20:59:32.343998       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E0310 20:59:33.053826       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E0310 20:59:33.165031       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E0310 20:59:33.436233       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E0310 20:59:33.454530       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E0310 20:59:37.786044       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E0310 20:59:37.901901       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E0310 20:59:38.028234       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E0310 20:59:40.373965       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E0310 20:59:40.780847       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E0310 20:59:43.649222       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E0310 20:59:43.649942       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E0310 20:59:43.969395       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E0310 20:59:44.195476       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E0310 20:59:46.649500       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E0310 20:59:46.650935       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E0310 20:59:46.651990       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E0310 20:59:54.200631       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E0310 20:59:57.968392       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E0310 20:59:57.977171       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E0310 20:59:59.305205       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E0310 21:00:00.118932       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E0310 21:00:12.549855       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* I0310 21:00:39.525230       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-03-10 20:50:54 UTC, end at Wed 2021-03-10 21:08:25 UTC. --
	* Mar 10 21:03:21 embed-certs-20210310205017-6496 kubelet[3341]: Trace[1154826450]: [7.1723431s] [7.1723431s] END
	* Mar 10 21:03:22 embed-certs-20210310205017-6496 kubelet[3341]: I0310 21:03:22.399302    3341 docker_service.go:353] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}
	* Mar 10 21:03:24 embed-certs-20210310205017-6496 kubelet[3341]: I0310 21:03:24.513632    3341 kubelet_network.go:77] Setting Pod CIDR:  -> 10.244.0.0/24
	* Mar 10 21:03:31 embed-certs-20210310205017-6496 kubelet[3341]: I0310 21:03:31.654421    3341 topology_manager.go:187] [topologymanager] Topology Admit Handler
	* Mar 10 21:03:31 embed-certs-20210310205017-6496 kubelet[3341]: I0310 21:03:31.808712    3341 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/b4673698-b2df-494d-8de6-1008fa8348af-xtables-lock") pod "kube-proxy-p6jnj" (UID: "b4673698-b2df-494d-8de6-1008fa8348af")
	* Mar 10 21:03:31 embed-certs-20210310205017-6496 kubelet[3341]: I0310 21:03:31.844314    3341 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/b4673698-b2df-494d-8de6-1008fa8348af-kube-proxy") pod "kube-proxy-p6jnj" (UID: "b4673698-b2df-494d-8de6-1008fa8348af")
	* Mar 10 21:03:31 embed-certs-20210310205017-6496 kubelet[3341]: I0310 21:03:31.844399    3341 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/b4673698-b2df-494d-8de6-1008fa8348af-lib-modules") pod "kube-proxy-p6jnj" (UID: "b4673698-b2df-494d-8de6-1008fa8348af")
	* Mar 10 21:03:31 embed-certs-20210310205017-6496 kubelet[3341]: I0310 21:03:31.844451    3341 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-nvcjt" (UniqueName: "kubernetes.io/secret/b4673698-b2df-494d-8de6-1008fa8348af-kube-proxy-token-nvcjt") pod "kube-proxy-p6jnj" (UID: "b4673698-b2df-494d-8de6-1008fa8348af")
	* Mar 10 21:03:46 embed-certs-20210310205017-6496 kubelet[3341]: I0310 21:03:46.563223    3341 topology_manager.go:187] [topologymanager] Topology Admit Handler
	* Mar 10 21:03:47 embed-certs-20210310205017-6496 kubelet[3341]: I0310 21:03:47.389960    3341 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-r2228" (UniqueName: "kubernetes.io/secret/0b339996-09da-4e8b-82cb-967e22a2b12a-coredns-token-r2228") pod "coredns-74ff55c5b-4w6mn" (UID: "0b339996-09da-4e8b-82cb-967e22a2b12a")
	* Mar 10 21:03:47 embed-certs-20210310205017-6496 kubelet[3341]: I0310 21:03:47.391118    3341 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b339996-09da-4e8b-82cb-967e22a2b12a-config-volume") pod "coredns-74ff55c5b-4w6mn" (UID: "0b339996-09da-4e8b-82cb-967e22a2b12a")
	* Mar 10 21:04:17 embed-certs-20210310205017-6496 kubelet[3341]: I0310 21:04:17.436989    3341 trace.go:205] Trace[28570223]: "iptables Monitor CANARY check" (10-Mar-2021 21:04:10.504) (total time: 6932ms):
	* Mar 10 21:04:17 embed-certs-20210310205017-6496 kubelet[3341]: Trace[28570223]: [6.9323031s] [6.9323031s] END
	* Mar 10 21:05:03 embed-certs-20210310205017-6496 kubelet[3341]: W0310 21:05:03.696763    3341 pod_container_deletor.go:79] Container "996876ed91c140a9893652dce742e0862e082a88c5a0d696482a1acf58f5757f" not found in pod's containers
	* Mar 10 21:05:34 embed-certs-20210310205017-6496 kubelet[3341]: W0310 21:05:34.364971    3341 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-74ff55c5b-4w6mn through plugin: invalid network status for
	* Mar 10 21:05:37 embed-certs-20210310205017-6496 kubelet[3341]: W0310 21:05:37.774405    3341 pod_container_deletor.go:79] Container "233e14c5554ff268eb7ca0d7566127c65dab8f0579a758a320ac811ec576c8e1" not found in pod's containers
	* Mar 10 21:05:41 embed-certs-20210310205017-6496 kubelet[3341]: W0310 21:05:41.315212    3341 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-74ff55c5b-4w6mn through plugin: invalid network status for
	* Mar 10 21:05:51 embed-certs-20210310205017-6496 kubelet[3341]: W0310 21:05:51.325259    3341 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-74ff55c5b-4w6mn through plugin: invalid network status for
	* Mar 10 21:06:03 embed-certs-20210310205017-6496 kubelet[3341]: I0310 21:06:03.259753    3341 topology_manager.go:187] [topologymanager] Topology Admit Handler
	* Mar 10 21:06:03 embed-certs-20210310205017-6496 kubelet[3341]: I0310 21:06:03.423421    3341 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-vq94d" (UniqueName: "kubernetes.io/secret/2659761d-6d3f-43ea-b1d9-04ec50811e6f-storage-provisioner-token-vq94d") pod "storage-provisioner" (UID: "2659761d-6d3f-43ea-b1d9-04ec50811e6f")
	* Mar 10 21:06:03 embed-certs-20210310205017-6496 kubelet[3341]: I0310 21:06:03.705453    3341 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/2659761d-6d3f-43ea-b1d9-04ec50811e6f-tmp") pod "storage-provisioner" (UID: "2659761d-6d3f-43ea-b1d9-04ec50811e6f")
	* Mar 10 21:06:26 embed-certs-20210310205017-6496 kubelet[3341]: W0310 21:06:26.945856    3341 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-74ff55c5b-4w6mn through plugin: invalid network status for
	* Mar 10 21:06:49 embed-certs-20210310205017-6496 kubelet[3341]: W0310 21:06:49.080809    3341 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	* Mar 10 21:06:49 embed-certs-20210310205017-6496 kubelet[3341]: W0310 21:06:49.081158    3341 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
	* Mar 10 21:07:11 embed-certs-20210310205017-6496 kubelet[3341]: W0310 21:07:11.522227    3341 pod_container_deletor.go:79] Container "13e03f4b17759b7140d0f257f69fdaaa96e6d755a2381d915f56a68481084f75" not found in pod's containers
	* 
	* ==> storage-provisioner [765eeaf3ce81] <==
	* I0310 21:07:35.160768       1 storage_provisioner.go:115] Initializing the minikube storage provisioner...
	* I0310 21:07:37.346480       1 storage_provisioner.go:140] Storage provisioner initialized, now starting service!
	* I0310 21:07:37.591131       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
	* I0310 21:07:38.971054       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	* I0310 21:07:38.974891       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20210310205017-6496_a341de06-e553-4b94-8067-b98e4114ac4d!
	* I0310 21:07:38.975098       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"31bb2ba0-7622-4a2f-8771-6a04779c1650", APIVersion:"v1", ResourceVersion:"578", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20210310205017-6496_a341de06-e553-4b94-8067-b98e4114ac4d became leader
	* I0310 21:07:40.075875       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20210310205017-6496_a341de06-e553-4b94-8067-b98e4114ac4d!
	* 
	* ==> Audit <==
	* |---------|-------------------------------------------|-------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| Command |                   Args                    |                  Profile                  |          User           | Version |          Start Time           |           End Time            |
	|---------|-------------------------------------------|-------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| stop    | -p                                        | scheduled-stop-20210310200905-6496        | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:11:52 GMT | Wed, 10 Mar 2021 20:11:54 GMT |
	|         | scheduled-stop-20210310200905-6496        |                                           |                         |         |                               |                               |
	|         | --schedule 5m                             |                                           |                         |         |                               |                               |
	| ssh     | -p                                        | scheduled-stop-20210310200905-6496        | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:11:57 GMT | Wed, 10 Mar 2021 20:11:59 GMT |
	|         | scheduled-stop-20210310200905-6496        |                                           |                         |         |                               |                               |
	|         | -- sudo systemctl show                    |                                           |                         |         |                               |                               |
	|         | minikube-scheduled-stop --no-page         |                                           |                         |         |                               |                               |
	| stop    | -p                                        | scheduled-stop-20210310200905-6496        | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:12:00 GMT | Wed, 10 Mar 2021 20:12:02 GMT |
	|         | scheduled-stop-20210310200905-6496        |                                           |                         |         |                               |                               |
	|         | --schedule 5s                             |                                           |                         |         |                               |                               |
	| delete  | -p                                        | scheduled-stop-20210310200905-6496        | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:12:26 GMT | Wed, 10 Mar 2021 20:12:35 GMT |
	|         | scheduled-stop-20210310200905-6496        |                                           |                         |         |                               |                               |
	| start   | -p                                        | skaffold-20210310201235-6496              | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:12:37 GMT | Wed, 10 Mar 2021 20:15:24 GMT |
	|         | skaffold-20210310201235-6496              |                                           |                         |         |                               |                               |
	|         | --memory=2600 --driver=docker             |                                           |                         |         |                               |                               |
	| -p      | skaffold-20210310201235-6496              | skaffold-20210310201235-6496              | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:15:28 GMT | Wed, 10 Mar 2021 20:15:41 GMT |
	|         | logs -n 25                                |                                           |                         |         |                               |                               |
	| delete  | -p                                        | skaffold-20210310201235-6496              | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:15:46 GMT | Wed, 10 Mar 2021 20:15:57 GMT |
	|         | skaffold-20210310201235-6496              |                                           |                         |         |                               |                               |
	| delete  | -p                                        | insufficient-storage-20210310201557-6496  | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:16:29 GMT | Wed, 10 Mar 2021 20:16:37 GMT |
	|         | insufficient-storage-20210310201557-6496  |                                           |                         |         |                               |                               |
	| delete  | -p pause-20210310201637-6496              | pause-20210310201637-6496                 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:32:24 GMT | Wed, 10 Mar 2021 20:32:49 GMT |
	| -p      | offline-docker-20210310201637-6496        | offline-docker-20210310201637-6496        | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:32:04 GMT | Wed, 10 Mar 2021 20:33:57 GMT |
	|         | logs -n 25                                |                                           |                         |         |                               |                               |
	| delete  | -p                                        | offline-docker-20210310201637-6496        | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:34:20 GMT | Wed, 10 Mar 2021 20:34:47 GMT |
	|         | offline-docker-20210310201637-6496        |                                           |                         |         |                               |                               |
	| stop    | -p                                        | kubernetes-upgrade-20210310201637-6496    | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:39:52 GMT | Wed, 10 Mar 2021 20:40:10 GMT |
	|         | kubernetes-upgrade-20210310201637-6496    |                                           |                         |         |                               |                               |
	| start   | -p nospam-20210310201637-6496             | nospam-20210310201637-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:16:38 GMT | Wed, 10 Mar 2021 20:40:39 GMT |
	|         | -n=1 --memory=2250                        |                                           |                         |         |                               |                               |
	|         | --wait=false --driver=docker              |                                           |                         |         |                               |                               |
	| -p      | nospam-20210310201637-6496                | nospam-20210310201637-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:41:42 GMT | Wed, 10 Mar 2021 20:44:25 GMT |
	|         | logs -n 25                                |                                           |                         |         |                               |                               |
	| delete  | -p nospam-20210310201637-6496             | nospam-20210310201637-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:44:37 GMT | Wed, 10 Mar 2021 20:44:59 GMT |
	| -p      | docker-flags-20210310201637-6496          | docker-flags-20210310201637-6496          | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:47:18 GMT | Wed, 10 Mar 2021 20:49:03 GMT |
	|         | logs -n 25                                |                                           |                         |         |                               |                               |
	| delete  | -p                                        | docker-flags-20210310201637-6496          | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:49:21 GMT | Wed, 10 Mar 2021 20:49:47 GMT |
	|         | docker-flags-20210310201637-6496          |                                           |                         |         |                               |                               |
	| delete  | -p                                        | force-systemd-env-20210310201637-6496     | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:49:41 GMT | Wed, 10 Mar 2021 20:50:17 GMT |
	|         | force-systemd-env-20210310201637-6496     |                                           |                         |         |                               |                               |
	| -p      | cert-options-20210310203249-6496          | cert-options-20210310203249-6496          | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:50:36 GMT | Wed, 10 Mar 2021 20:50:43 GMT |
	|         | ssh openssl x509 -text -noout -in         |                                           |                         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt     |                                           |                         |         |                               |                               |
	| delete  | -p                                        | cert-options-20210310203249-6496          | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:51:10 GMT | Wed, 10 Mar 2021 20:51:56 GMT |
	|         | cert-options-20210310203249-6496          |                                           |                         |         |                               |                               |
	| delete  | -p                                        | disable-driver-mounts-20210310205156-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:51:57 GMT | Wed, 10 Mar 2021 20:52:02 GMT |
	|         | disable-driver-mounts-20210310205156-6496 |                                           |                         |         |                               |                               |
	| -p      | force-systemd-flag-20210310203447-6496    | force-systemd-flag-20210310203447-6496    | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:53:03 GMT | Wed, 10 Mar 2021 20:53:44 GMT |
	|         | ssh docker info --format                  |                                           |                         |         |                               |                               |
	|         |                          |                                           |                         |         |                               |                               |
	| delete  | -p                                        | force-systemd-flag-20210310203447-6496    | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:54:07 GMT | Wed, 10 Mar 2021 20:54:36 GMT |
	|         | force-systemd-flag-20210310203447-6496    |                                           |                         |         |                               |                               |
	| stop    | -p                                        | old-k8s-version-20210310204459-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:02:19 GMT | Wed, 10 Mar 2021 21:02:40 GMT |
	|         | old-k8s-version-20210310204459-6496       |                                           |                         |         |                               |                               |
	|         | --alsologtostderr -v=3                    |                                           |                         |         |                               |                               |
	| addons  | enable dashboard -p                       | old-k8s-version-20210310204459-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:02:42 GMT | Wed, 10 Mar 2021 21:02:42 GMT |
	|         | old-k8s-version-20210310204459-6496       |                                           |                         |         |                               |                               |
	|---------|-------------------------------------------|-------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/03/10 21:07:48
	* Running on machine: windows-server-1
	* Binary: Built with gc go1.16 for windows/amd64
	* Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	* I0310 21:07:48.002632    2000 out.go:239] Setting OutFile to fd 1652 ...
	* I0310 21:07:48.003642    2000 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 21:07:48.003642    2000 out.go:252] Setting ErrFile to fd 2836...
	* I0310 21:07:48.003642    2000 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 21:07:48.045069    2000 out.go:246] Setting JSON to false
	* I0310 21:07:48.054293    2000 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":35933,"bootTime":1615374535,"procs":117,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	* W0310 21:07:48.055299    2000 start.go:116] gopshost.Virtualization returned error: not implemented yet
	* I0310 21:07:48.060322    2000 out.go:129] * [running-upgrade-20210310201637-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	* I0310 21:07:48.069818    2000 out.go:129]   - MINIKUBE_LOCATION=10722
	* I0310 21:07:48.074735    2000 start_flags.go:453] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
	* I0310 21:07:48.078293    2000 out.go:129] * Kubernetes 1.20.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.20.2
	* I0310 21:07:48.078293    2000 driver.go:323] Setting default libvirt URI to qemu:///system
	* I0310 21:07:48.643202    2000 docker.go:119] docker version: linux-20.10.2
	* I0310 21:07:48.656453    2000 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 21:07:49.793426    2000 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.1362263s)
	* I0310 21:07:49.795031    2000 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:9 ContainersRunning:9 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:136 OomKillDisable:true NGoroutines:83 SystemTime:2021-03-10 21:07:49.3268937 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 21:07:49.803161    2000 out.go:129] * Using the docker driver based on existing profile
	* I0310 21:07:49.803306    2000 start.go:276] selected driver: docker
	* I0310 21:07:49.803306    2000 start.go:718] validating driver "docker" against &{Name:running-upgrade-20210310201637-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-20210310201637-6496 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:0 NodeName:} Nodes:[{Name:m01 IP:172.17.0.8 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 21:07:49.803612    2000 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	* I0310 21:07:51.828619    2000 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 21:07:52.849057    2000 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0204401s)
	* I0310 21:07:52.850280    2000 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:9 ContainersRunning:9 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:137 OomKillDisable:true NGoroutines:83 SystemTime:2021-03-10 21:07:52.377902 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 21:07:52.851219    2000 start_flags.go:398] config:
	* {Name:running-upgrade-20210310201637-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-20210310201637-6496 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.8 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 21:07:52.855810    2000 out.go:129] * Starting control plane node running-upgrade-20210310201637-6496 in cluster running-upgrade-20210310201637-6496
	* I0310 21:07:53.515872    2000 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	* I0310 21:07:53.516210    2000 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	* I0310 21:07:53.516363    2000 preload.go:97] Checking if preload exists for k8s version v1.18.0 and runtime docker
	* W0310 21:07:53.631003    2000 preload.go:118] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v9-v1.18.0-docker-overlay2-amd64.tar.lz4 status code: 404
	* I0310 21:07:53.631466    2000 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\running-upgrade-20210310201637-6496\config.json ...
	* I0310 21:07:53.631770    2000 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper:v1.0.4 -> C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.4
	* I0310 21:07:53.631770    2000 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-proxy:v1.18.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.18.0
	* I0310 21:07:53.632230    2000 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-controller-manager:v1.18.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.18.0
	* I0310 21:07:53.632230    2000 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\pause:3.2 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\pause_3.2
	* I0310 21:07:53.632230    2000 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\etcd:3.4.3-0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\etcd_3.4.3-0
	* I0310 21:07:53.632230    2000 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\dashboard:v2.1.0 -> C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.1.0
	* I0310 21:07:53.632230    2000 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-apiserver:v1.18.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.18.0
	* I0310 21:07:53.632385    2000 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-scheduler:v1.18.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.18.0
	* I0310 21:07:53.631770    2000 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\coredns:1.6.7 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\coredns_1.6.7
	* I0310 21:07:53.632533    2000 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner:v4 -> C:\Users\jenkins\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v4
	* I0310 21:07:53.663550    2000 cache.go:185] Successfully downloaded all kic artifacts
	* I0310 21:07:53.665008    2000 start.go:313] acquiring machines lock for running-upgrade-20210310201637-6496: {Name:mkadafd569b31b7088ef8c9d5ae99a588890ad17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:07:53.666952    2000 start.go:317] acquired machines lock for "running-upgrade-20210310201637-6496" in 940.7µs
	* I0310 21:07:53.667190    2000 start.go:93] Skipping create...Using existing machine configuration
	* I0310 21:07:53.667427    2000 fix.go:55] fixHost starting: m01
	* I0310 21:07:53.790937    2000 cli_runner.go:115] Run: docker container inspect running-upgrade-20210310201637-6496 --format=
	* I0310 21:07:53.938838    2000 cache.go:93] acquiring lock: {Name:mkd1a3345075d89bd4426b52f164cc77480ec169 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:07:53.939172    2000 cache.go:93] acquiring lock: {Name:mkeb51a7f4d902422b144c2acaf6602ffeeda50b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:07:53.939395    2000 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.18.0 exists
	* I0310 21:07:53.940764    2000 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.18.0 exists
	* I0310 21:07:53.940764    2000 cache.go:82] cache image "k8s.gcr.io/kube-controller-manager:v1.18.0" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\k8s.gcr.io\\kube-controller-manager_v1.18.0" took 308.3796ms
	* I0310 21:07:53.940764    2000 cache.go:66] save to tar file k8s.gcr.io/kube-controller-manager:v1.18.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.18.0 succeeded
	* I0310 21:07:53.940764    2000 cache.go:82] cache image "k8s.gcr.io/kube-proxy:v1.18.0" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\k8s.gcr.io\\kube-proxy_v1.18.0" took 308.8341ms
	* I0310 21:07:53.940764    2000 cache.go:66] save to tar file k8s.gcr.io/kube-proxy:v1.18.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.18.0 succeeded
	* I0310 21:07:53.944738    2000 cache.go:93] acquiring lock: {Name:mk95277aa1d8baa6ce693324ce93a259561b3b0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:07:53.945796    2000 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.4 exists
	* I0310 21:07:53.946483    2000 cache.go:82] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\docker.io\\kubernetesui\\metrics-scraper_v1.0.4" took 314.7137ms
	* I0310 21:07:53.946483    2000 cache.go:66] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.4 succeeded
	* I0310 21:07:53.964845    2000 cache.go:93] acquiring lock: {Name:mkaf6817d1570cac8e9e1902b52a9b2c5b9dc038 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:07:53.965473    2000 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\k8s.gcr.io\etcd_3.4.3-0 exists
	* I0310 21:07:53.965723    2000 cache.go:82] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\k8s.gcr.io\\etcd_3.4.3-0" took 322.6969ms
	* I0310 21:07:53.965723    2000 cache.go:66] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\etcd_3.4.3-0 succeeded
	* I0310 21:07:53.967608    2000 cache.go:93] acquiring lock: {Name:mk9c8fd7ef36525ddfab354a7672d9c092c5ea53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:07:53.968909    2000 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.18.0 exists
	* I0310 21:07:53.969529    2000 cache.go:82] cache image "k8s.gcr.io/kube-scheduler:v1.18.0" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\k8s.gcr.io\\kube-scheduler_v1.18.0" took 325.3186ms
	* I0310 21:07:53.969768    2000 cache.go:66] save to tar file k8s.gcr.io/kube-scheduler:v1.18.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.18.0 succeeded
	* I0310 21:07:53.975185    2000 cache.go:93] acquiring lock: {Name:mkf95068147fb9802daffb44f03793cdfc94af80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:07:53.975929    2000 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v4 exists
	* I0310 21:07:53.977097    2000 cache.go:82] cache image "gcr.io/k8s-minikube/storage-provisioner:v4" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\gcr.io\\k8s-minikube\\storage-provisioner_v4" took 330.4643ms
	* I0310 21:07:53.977097    2000 cache.go:66] save to tar file gcr.io/k8s-minikube/storage-provisioner:v4 -> C:\Users\jenkins\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v4 succeeded
	* I0310 21:07:53.996725    2000 cache.go:93] acquiring lock: {Name:mk33908c5692f6fbcea93524c073786bb1491be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:07:53.997836    2000 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.1.0 exists
	* I0310 21:07:53.998135    2000 cache.go:82] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\docker.io\\kubernetesui\\dashboard_v2.1.0" took 354.8666ms
	* I0310 21:07:53.998135    2000 cache.go:66] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.1.0 succeeded
	* I0310 21:07:54.012434    2000 cache.go:93] acquiring lock: {Name:mk1bbd52b1d425b987a80d1b42ea65a1daa62351 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:07:54.012434    2000 cache.go:93] acquiring lock: {Name:mk962fa425f0feaabe16844bc3ad9ac4bf160641 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:07:54.013384    2000 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\k8s.gcr.io\pause_3.2 exists
	* I0310 21:07:54.013384    2000 cache.go:93] acquiring lock: {Name:mkbaeca4a6ec180fb6b1238846e64ebdef3e8b1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:07:54.013384    2000 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\k8s.gcr.io\coredns_1.6.7 exists
	* I0310 21:07:54.013905    2000 cache.go:82] cache image "k8s.gcr.io/pause:3.2" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\k8s.gcr.io\\pause_3.2" took 369.962ms
	* I0310 21:07:54.013905    2000 cache.go:66] save to tar file k8s.gcr.io/pause:3.2 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\pause_3.2 succeeded
	* I0310 21:07:54.014130    2000 cache.go:82] cache image "k8s.gcr.io/coredns:1.6.7" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\k8s.gcr.io\\coredns_1.6.7" took 368.4164ms
	* I0310 21:07:54.014130    2000 cache.go:66] save to tar file k8s.gcr.io/coredns:1.6.7 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\coredns_1.6.7 succeeded
	* I0310 21:07:54.014347    2000 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.18.0 exists
	* I0310 21:07:54.015143    2000 cache.go:82] cache image "k8s.gcr.io/kube-apiserver:v1.18.0" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\k8s.gcr.io\\kube-apiserver_v1.18.0" took 361.6762ms
	* I0310 21:07:54.015369    2000 cache.go:66] save to tar file k8s.gcr.io/kube-apiserver:v1.18.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.18.0 succeeded
	* I0310 21:07:54.015369    2000 cache.go:73] Successfully saved all images to host disk.
	* I0310 21:07:54.464290    2000 fix.go:108] recreateIfNeeded on running-upgrade-20210310201637-6496: state=Running err=<nil>
	* W0310 21:07:54.464290    2000 fix.go:134] unexpected machine state, will restart: <nil>
	* I0310 21:07:51.327046    9300 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440: (9.6255163s)
	* I0310 21:07:51.327046    9300 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 from cache
	* I0310 21:07:51.327330    9300 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372
	* I0310 21:07:51.335845    9300 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372
	* I0310 21:07:55.573811    9740 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 7fe915da4c34": (9.615186s)
	* I0310 21:07:55.593106    9740 logs.go:122] Gathering logs for container status ...
	* I0310 21:07:55.593106    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	* I0310 21:07:54.469315    2000 out.go:129] * Updating the running docker "running-upgrade-20210310201637-6496" container ...
	* I0310 21:07:54.469595    2000 machine.go:88] provisioning docker machine ...
	* I0310 21:07:54.469734    2000 ubuntu.go:169] provisioning hostname "running-upgrade-20210310201637-6496"
	* I0310 21:07:54.477388    2000 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210310201637-6496
	* I0310 21:07:55.117273    2000 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:07:55.117966    2000 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55115 <nil> <nil>}
	* I0310 21:07:55.118241    2000 main.go:121] libmachine: About to run SSH command:
	* sudo hostname running-upgrade-20210310201637-6496 && echo "running-upgrade-20210310201637-6496" | sudo tee /etc/hostname
	* I0310 21:07:54.585587   19212 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-scheduler --format=: (6.9221834s)
	* I0310 21:07:54.586154   19212 logs.go:255] 1 containers: [cc170dc9a3a5]
	* I0310 21:07:54.600786   19212 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format=
	* I0310 21:07:59.949909    9300 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372: (8.6135695s)
	* I0310 21:07:59.949909    9300 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 from cache
	* I0310 21:07:59.949909    9300 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748
	* I0310 21:07:59.963465    9300 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748
	* I0310 21:07:59.199059    9740 ssh_runner.go:189] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (3.6052851s)
	* I0310 21:07:59.199059    9740 logs.go:122] Gathering logs for kubelet ...
	* I0310 21:07:59.199059    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	* I0310 21:08:00.953349    9740 ssh_runner.go:189] Completed: /bin/bash -c "sudo journalctl -u kubelet -n 400": (1.7542929s)
	* I0310 21:08:01.025695    9740 logs.go:122] Gathering logs for describe nodes ...
	* I0310 21:08:01.025695    9740 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	* I0310 21:07:58.349788    2000 main.go:121] libmachine: SSH cmd err, output: <nil>: running-upgrade-20210310201637-6496
	* 
	* I0310 21:07:58.358521    2000 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210310201637-6496
	* I0310 21:07:58.959935    2000 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:07:58.960714    2000 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55115 <nil> <nil>}
	* I0310 21:07:58.960932    2000 main.go:121] libmachine: About to run SSH command:
	* 
	* 		if ! grep -xq '.*\srunning-upgrade-20210310201637-6496' /etc/hosts; then
	* 			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	* 				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-20210310201637-6496/g' /etc/hosts;
	* 			else 
	* 				echo '127.0.1.1 running-upgrade-20210310201637-6496' | sudo tee -a /etc/hosts; 
	* 			fi
	* 		fi
	* I0310 21:08:01.261113    2000 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	* I0310 21:08:01.261113    2000 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	* I0310 21:08:01.261113    2000 ubuntu.go:177] setting up certificates
	* I0310 21:08:01.261113    2000 provision.go:83] configureAuth start
	* I0310 21:08:01.269686    2000 cli_runner.go:115] Run: docker container inspect -f "" running-upgrade-20210310201637-6496
	* I0310 21:08:01.937346    2000 provision.go:137] copyHostCerts
	* I0310 21:08:01.937903    2000 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	* I0310 21:08:01.937903    2000 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	* I0310 21:08:01.938284    2000 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	* I0310 21:08:01.942055    2000 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	* I0310 21:08:01.942055    2000 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	* I0310 21:08:01.942055    2000 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	* I0310 21:08:01.945055    2000 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	* I0310 21:08:01.945055    2000 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	* I0310 21:08:01.945055    2000 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	* I0310 21:08:01.948045    2000 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.running-upgrade-20210310201637-6496 san=[172.17.0.8 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-20210310201637-6496]
	* I0310 21:08:02.564805    2000 provision.go:165] copyRemoteCerts
	* I0310 21:08:02.576618    2000 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	* I0310 21:08:02.586856    2000 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210310201637-6496
	* I0310 21:07:59.376991   19212 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-proxy --format=: (4.7759747s)
	* I0310 21:07:59.376991   19212 logs.go:255] 1 containers: [6cc1ac0f0822]
	* I0310 21:07:59.383899   19212 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format=
	* I0310 21:07:59.389297   19212 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452: (11.7010907s)
	* I0310 21:07:59.389297   19212 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 from cache
	* I0310 21:07:59.389619   19212 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432
	* I0310 21:07:59.399349   19212 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432
	* I0310 21:08:05.167179    9300 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748: (5.2037235s)
	* I0310 21:08:05.167512    9300 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 from cache
	* I0310 21:08:05.167512    9300 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024
	* I0310 21:08:05.176190    9300 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024
	* I0310 21:08:03.197831    2000 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55115 SSHKeyPath:C:\Users\jenkins\.minikube\machines\running-upgrade-20210310201637-6496\id_rsa Username:docker}
	* I0310 21:08:04.133763    2000 ssh_runner.go:189] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.556872s)
	* I0310 21:08:04.134601    2000 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	* I0310 21:08:05.987049    2000 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	* I0310 21:08:07.273753    2000 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1277 bytes)
	* I0310 21:08:03.648102   11452 kubeadm.go:704] kubelet initialised
	* I0310 21:08:03.648102   11452 kubeadm.go:705] duration metric: took 1m14.8465949s waiting for restarted kubelet to initialise ...
	* I0310 21:08:03.648102   11452 pod_ready.go:36] extra waiting for kube-system core pods [kube-dns etcd kube-apiserver kube-controller-manager kube-proxy kube-scheduler] to be Ready ...
	* I0310 21:08:03.648315   11452 pod_ready.go:59] waiting 4m0s for pod with "kube-dns" label in "kube-system" namespace to be Ready ...
	* I0310 21:08:10.136291    9300 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024: (4.9601105s)
	* I0310 21:08:10.136548    9300 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 from cache
	* I0310 21:08:10.136548    9300 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464
	* I0310 21:08:10.136548    9300 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464
	* I0310 21:08:08.958484    2000 provision.go:86] duration metric: configureAuth took 7.6965458s
	* I0310 21:08:08.958484    2000 ubuntu.go:193] setting minikube options for container-runtime
	* I0310 21:08:08.965919    2000 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210310201637-6496
	* I0310 21:08:09.559131    2000 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:08:09.559679    2000 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55115 <nil> <nil>}
	* I0310 21:08:09.559807    2000 main.go:121] libmachine: About to run SSH command:
	* df --output=fstype / | tail -n 1
	* I0310 21:08:11.414005    2000 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	* 
	* I0310 21:08:11.414302    2000 ubuntu.go:71] root file system type: overlay
	* I0310 21:08:11.414863    2000 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	* I0310 21:08:11.425151    2000 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210310201637-6496
	* I0310 21:08:12.066026    2000 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:08:12.066026    2000 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55115 <nil> <nil>}
	* I0310 21:08:12.066026    2000 main.go:121] libmachine: About to run SSH command:
	* sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP \$MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* " | sudo tee /lib/systemd/system/docker.service.new
	* I0310 21:08:11.061210   19212 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432: (11.6616371s)
	* I0310 21:08:11.061593   19212 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kubernetes-dashboard --format=: (11.6777146s)
	* I0310 21:08:11.061593   19212 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 from cache
	* I0310 21:08:11.061593   19212 logs.go:255] 0 containers: []
	* W0310 21:08:11.061593   19212 logs.go:257] No container was found matching "kubernetes-dashboard"
	* I0310 21:08:11.061913   19212 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984
	* I0310 21:08:11.070743   19212 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format=
	* I0310 21:08:11.075568   19212 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984
	* I0310 21:08:11.895581   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:08:12.406760   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:08:17.087865    2000 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP $MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* 
	* I0310 21:08:17.096629    2000 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210310201637-6496
	* I0310 21:08:12.907281   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:08:13.412480   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:08:13.904011   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:08:14.404483   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:08:14.903398   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:08:15.406317   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:08:15.909015   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:08:16.412236   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:08:16.921185   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:08:17.409497   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:08:15.825332    9300 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464: (5.6887936s)
	* I0310 21:08:15.825332    9300 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 from cache
	* I0310 21:08:15.825332    9300 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432
	* I0310 21:08:15.841768    9300 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432
	* I0310 21:08:17.710132    2000 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:08:17.710510    2000 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55115 <nil> <nil>}
	* I0310 21:08:17.710510    2000 main.go:121] libmachine: About to run SSH command:
	* sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	* I0310 21:08:17.732612   19212 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_storage-provisioner --format=: (6.6618809s)
	* I0310 21:08:17.732612   19212 logs.go:255] 1 containers: [af28c0367661]
	* I0310 21:08:17.742082   19212 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format=
	* I0310 21:08:17.909917   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:08:18.410303   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:08:18.915535   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:08:19.403771   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:08:19.904436   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:08:20.401814   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:08:20.902610   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:08:21.403686   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:08:21.905329   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:08:22.403463   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:08:22.665856    9300 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432: (6.8240997s)
	* I0310 21:08:22.665856    9300 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 from cache
	* I0310 21:08:22.666695    9300 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396
	* I0310 21:08:22.677150    9300 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396
	* I0310 21:08:22.903190   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:08:23.404222   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:08:23.906721   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:08:24.408282   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:08:24.906204   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:08:25.403002   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:08:25.904533   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:08:26.401140   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:08:26.913497   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:08:27.401895   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:08:30.002104    9740 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (28.9764581s)
	* I0310 21:08:30.004771    9740 logs.go:122] Gathering logs for kube-apiserver [93ec1b7fa7df] ...
	* I0310 21:08:30.004979    9740 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 93ec1b7fa7df"
	* I0310 21:08:28.504088   19212 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984: (17.4282608s)
	* I0310 21:08:28.504088   19212 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 from cache
	* I0310 21:08:28.504088   19212 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800
	* I0310 21:08:28.504088   19212 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-controller-manager --format=: (10.7620249s)
	* I0310 21:08:28.504547   19212 logs.go:255] 2 containers: [bf37cfa32c85 92f2244695b6]
	* I0310 21:08:28.504547   19212 logs.go:122] Gathering logs for dmesg ...
	* I0310 21:08:28.504547   19212 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	* I0310 21:08:28.512444   19212 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800
	* I0310 21:08:31.234883   19212 ssh_runner.go:189] Completed: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400": (2.7303403s)
	* I0310 21:08:31.238386   19212 logs.go:122] Gathering logs for describe nodes ...
	* I0310 21:08:31.238386   19212 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	* I0310 21:08:27.911818   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:08:28.407413   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:08:28.902942   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:08:29.403210   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF

                                                
                                                
-- /stdout --
** stderr ** 
	E0310 21:08:01.849429   18088 out.go:340] unable to execute * 2021-03-10 21:07:08.013959 W | etcdserver: request "header:<ID:10490704451955657012 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.49.97\" mod_revision:557 > success:<request_put:<key:\"/registry/masterleases/192.168.49.97\" value_size:68 lease:1267332415100881202 >> failure:<request_range:<key:\"/registry/masterleases/192.168.49.97\" > >>" with result "size:16" took too long (118.4491ms) to execute
	: html/template:* 2021-03-10 21:07:08.013959 W | etcdserver: request "header:<ID:10490704451955657012 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.49.97\" mod_revision:557 > success:<request_put:<key:\"/registry/masterleases/192.168.49.97\" value_size:68 lease:1267332415100881202 >> failure:<request_range:<key:\"/registry/masterleases/192.168.49.97\" > >>" with result "size:16" took too long (118.4491ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 21:08:01.889361   18088 out.go:340] unable to execute * 2021-03-10 21:07:39.895298 W | etcdserver: request "header:<ID:10490704451955657135 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:544 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:3463 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >>" with result "size:16" took too long (121.5005ms) to execute
	: html/template:* 2021-03-10 21:07:39.895298 W | etcdserver: request "header:<ID:10490704451955657135 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:544 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:3463 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >>" with result "size:16" took too long (121.5005ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 21:08:01.911845   18088 out.go:340] unable to execute * 2021-03-10 21:07:47.235663 W | etcdserver: request "header:<ID:10490704451955657165 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.49.97\" mod_revision:577 > success:<request_put:<key:\"/registry/masterleases/192.168.49.97\" value_size:68 lease:1267332415100881354 >> failure:<request_range:<key:\"/registry/masterleases/192.168.49.97\" > >>" with result "size:16" took too long (131.674ms) to execute
	: html/template:* 2021-03-10 21:07:47.235663 W | etcdserver: request "header:<ID:10490704451955657165 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.49.97\" mod_revision:577 > success:<request_put:<key:\"/registry/masterleases/192.168.49.97\" value_size:68 lease:1267332415100881354 >> failure:<request_range:<key:\"/registry/masterleases/192.168.49.97\" > >>" with result "size:16" took too long (131.674ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 21:08:31.516133   18088 out.go:335] unable to parse "* I0310 21:07:48.656453    2000 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 21:07:48.656453    2000 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 21:08:31.525137   18088 out.go:335] unable to parse "* I0310 21:07:49.793426    2000 cli_runner.go:168] Completed: docker system info --format \"{{json .}}\": (1.1362263s)\n": template: * I0310 21:07:49.793426    2000 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.1362263s)
	:1: function "json" not defined - returning raw string.
	E0310 21:08:31.557214   18088 out.go:335] unable to parse "* I0310 21:07:51.828619    2000 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 21:07:51.828619    2000 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 21:08:31.564320   18088 out.go:335] unable to parse "* I0310 21:07:52.849057    2000 cli_runner.go:168] Completed: docker system info --format \"{{json .}}\": (1.0204401s)\n": template: * I0310 21:07:52.849057    2000 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0204401s)
	:1: function "json" not defined - returning raw string.
	E0310 21:08:31.891475   18088 out.go:340] unable to execute * I0310 21:07:54.477388    2000 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210310201637-6496
	: template: * I0310 21:07:54.477388    2000 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210310201637-6496
	:1:96: executing "* I0310 21:07:54.477388    2000 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" running-upgrade-20210310201637-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:08:31.901774   18088 out.go:335] unable to parse "* I0310 21:07:55.117966    2000 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55115 <nil> <nil>}\n": template: * I0310 21:07:55.117966    2000 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55115 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:08:31.998109   18088 out.go:340] unable to execute * I0310 21:07:58.358521    2000 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210310201637-6496
	: template: * I0310 21:07:58.358521    2000 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210310201637-6496
	:1:96: executing "* I0310 21:07:58.358521    2000 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" running-upgrade-20210310201637-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:08:32.015976   18088 out.go:335] unable to parse "* I0310 21:07:58.960714    2000 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55115 <nil> <nil>}\n": template: * I0310 21:07:58.960714    2000 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55115 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:08:32.115195   18088 out.go:340] unable to execute * I0310 21:08:02.586856    2000 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210310201637-6496
	: template: * I0310 21:08:02.586856    2000 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210310201637-6496
	:1:96: executing "* I0310 21:08:02.586856    2000 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" running-upgrade-20210310201637-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:08:32.229945   18088 out.go:340] unable to execute * I0310 21:08:08.965919    2000 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210310201637-6496
	: template: * I0310 21:08:08.965919    2000 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210310201637-6496
	:1:96: executing "* I0310 21:08:08.965919    2000 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" running-upgrade-20210310201637-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:08:32.240545   18088 out.go:335] unable to parse "* I0310 21:08:09.559679    2000 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55115 <nil> <nil>}\n": template: * I0310 21:08:09.559679    2000 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55115 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:08:32.272194   18088 out.go:340] unable to execute * I0310 21:08:11.425151    2000 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210310201637-6496
	: template: * I0310 21:08:11.425151    2000 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210310201637-6496
	:1:96: executing "* I0310 21:08:11.425151    2000 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" running-upgrade-20210310201637-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:08:32.283567   18088 out.go:335] unable to parse "* I0310 21:08:12.066026    2000 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55115 <nil> <nil>}\n": template: * I0310 21:08:12.066026    2000 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55115 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:08:32.764227   18088 out.go:340] unable to execute * I0310 21:08:17.096629    2000 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210310201637-6496
	: template: * I0310 21:08:17.096629    2000 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210310201637-6496
	:1:96: executing "* I0310 21:08:17.096629    2000 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" running-upgrade-20210310201637-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:08:32.837254   18088 out.go:335] unable to parse "* I0310 21:08:17.710510    2000 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55115 <nil> <nil>}\n": template: * I0310 21:08:17.710510    2000 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55115 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.

                                                
                                                
** /stderr **
helpers_test.go:250: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20210310205017-6496 -n embed-certs-20210310205017-6496
helpers_test.go:250: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20210310205017-6496 -n embed-certs-20210310205017-6496: (9.3511817s)
helpers_test.go:257: (dbg) Run:  kubectl --context embed-certs-20210310205017-6496 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:263: non-running pods: 
helpers_test.go:265: ======> post-mortem[TestStartStop/group/embed-certs/serial/FirstStart]: describe non-running pods <======
helpers_test.go:268: (dbg) Run:  kubectl --context embed-certs-20210310205017-6496 describe pod 
helpers_test.go:268: (dbg) Non-zero exit: kubectl --context embed-certs-20210310205017-6496 describe pod : exit status 1 (243.9612ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:270: kubectl --context embed-certs-20210310205017-6496 describe pod : exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (1106.33s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/FirstStart (1473.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-different-port-20210310205202-6496 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.20.2

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:155: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p default-k8s-different-port-20210310205202-6496 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.20.2: exit status 80 (19m29.6685518s)

                                                
                                                
-- stdout --
	* [default-k8s-different-port-20210310205202-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - MINIKUBE_LOCATION=10722
	* Using the docker driver based on user configuration
	* Starting control plane node default-k8s-different-port-20210310205202-6496 in cluster default-k8s-different-port-20210310205202-6496
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v4
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0310 20:52:03.114358   19212 out.go:239] Setting OutFile to fd 2844 ...
	I0310 20:52:03.115293   19212 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 20:52:03.115293   19212 out.go:252] Setting ErrFile to fd 2992...
	I0310 20:52:03.115293   19212 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 20:52:03.143673   19212 out.go:246] Setting JSON to false
	I0310 20:52:03.154490   19212 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":34989,"bootTime":1615374534,"procs":224,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0310 20:52:03.154490   19212 start.go:116] gopshost.Virtualization returned error: not implemented yet
	I0310 20:52:03.159281   19212 out.go:129] * [default-k8s-different-port-20210310205202-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0310 20:52:03.163277   19212 out.go:129]   - MINIKUBE_LOCATION=10722
	I0310 20:52:03.167349   19212 driver.go:323] Setting default libvirt URI to qemu:///system
	I0310 20:52:03.787105   19212 docker.go:119] docker version: linux-20.10.2
	I0310 20:52:03.796869   19212 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 20:52:05.046753   19212 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.2495954s)
	I0310 20:52:05.048359   19212 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:265 OomKillDisable:true NGoroutines:471 SystemTime:2021-03-10 20:52:04.446071 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[
] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 20:52:05.051687   19212 out.go:129] * Using the docker driver based on user configuration
	I0310 20:52:05.052186   19212 start.go:276] selected driver: docker
	I0310 20:52:05.052186   19212 start.go:718] validating driver "docker" against <nil>
	I0310 20:52:05.052186   19212 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0310 20:52:07.217221   19212 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 20:52:08.529139   19212 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.3119201s)
	I0310 20:52:08.529666   19212 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:259 OomKillDisable:true NGoroutines:391 SystemTime:2021-03-10 20:52:07.8356701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://i
ndex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:
[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 20:52:08.531019   19212 start_flags.go:253] no existing cluster config was found, will generate one from the flags 
	I0310 20:52:08.532139   19212 start_flags.go:717] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0310 20:52:08.532139   19212 cni.go:74] Creating CNI manager for ""
	I0310 20:52:08.532139   19212 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	I0310 20:52:08.532498   19212 start_flags.go:398] config:
	{Name:default-k8s-different-port-20210310205202-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:default-k8s-different-port-20210310205202-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 20:52:08.542321   19212 out.go:129] * Starting control plane node default-k8s-different-port-20210310205202-6496 in cluster default-k8s-different-port-20210310205202-6496
	I0310 20:52:09.261728   19212 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	I0310 20:52:09.262376   19212 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	I0310 20:52:09.262376   19212 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	I0310 20:52:09.262835   19212 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	I0310 20:52:09.263052   19212 cache.go:54] Caching tarball of preloaded images
	I0310 20:52:09.263052   19212 preload.go:131] Found C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0310 20:52:09.263052   19212 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on docker
	I0310 20:52:09.263424   19212 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\default-k8s-different-port-20210310205202-6496\config.json ...
	I0310 20:52:09.263890   19212 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\default-k8s-different-port-20210310205202-6496\config.json: {Name:mkdabf3c0e69a842fbccaff4358668e323ccc734 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:52:09.285719   19212 cache.go:185] Successfully downloaded all kic artifacts
	I0310 20:52:09.287609   19212 start.go:313] acquiring machines lock for default-k8s-different-port-20210310205202-6496: {Name:mk1e80f182340fc0fb72eecdee47861ca36452cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:52:09.288259   19212 start.go:317] acquired machines lock for "default-k8s-different-port-20210310205202-6496" in 650.3µs
	I0310 20:52:09.288259   19212 start.go:89] Provisioning new machine with config: &{Name:default-k8s-different-port-20210310205202-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:default-k8s-different-port-20210310205202-6496 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8444 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
	I0310 20:52:09.288528   19212 start.go:126] createHost starting for "" (driver="docker")
	I0310 20:52:09.292510   19212 out.go:150] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0310 20:52:09.293506   19212 start.go:160] libmachine.API.Create for "default-k8s-different-port-20210310205202-6496" (driver="docker")
	I0310 20:52:09.293779   19212 client.go:168] LocalClient.Create starting
	I0310 20:52:09.294754   19212 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\ca.pem
	I0310 20:52:09.294754   19212 main.go:121] libmachine: Decoding PEM data...
	I0310 20:52:09.295116   19212 main.go:121] libmachine: Parsing certificate...
	I0310 20:52:09.295489   19212 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\cert.pem
	I0310 20:52:09.295918   19212 main.go:121] libmachine: Decoding PEM data...
	I0310 20:52:09.295918   19212 main.go:121] libmachine: Parsing certificate...
	I0310 20:52:09.320768   19212 cli_runner.go:115] Run: docker network inspect default-k8s-different-port-20210310205202-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0310 20:52:09.972991   19212 cli_runner.go:162] docker network inspect default-k8s-different-port-20210310205202-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0310 20:52:09.982573   19212 network_create.go:240] running [docker network inspect default-k8s-different-port-20210310205202-6496] to gather additional debugging logs...
	I0310 20:52:09.982958   19212 cli_runner.go:115] Run: docker network inspect default-k8s-different-port-20210310205202-6496
	W0310 20:52:10.641827   19212 cli_runner.go:162] docker network inspect default-k8s-different-port-20210310205202-6496 returned with exit code 1
	I0310 20:52:10.641985   19212 network_create.go:243] error running [docker network inspect default-k8s-different-port-20210310205202-6496]: docker network inspect default-k8s-different-port-20210310205202-6496: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20210310205202-6496
	I0310 20:52:10.641985   19212 network_create.go:245] output of [docker network inspect default-k8s-different-port-20210310205202-6496]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20210310205202-6496
	
	** /stderr **
	I0310 20:52:10.652447   19212 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0310 20:52:11.422441   19212 network.go:193] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0310 20:52:11.423151   19212 network_create.go:91] attempt to create network 192.168.49.0/24 with subnet: default-k8s-different-port-20210310205202-6496 and gateway 192.168.49.1 and MTU of 1500 ...
	I0310 20:52:11.431663   19212 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20210310205202-6496
	W0310 20:52:12.075414   19212 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20210310205202-6496 returned with exit code 1
	W0310 20:52:12.076628   19212 out.go:191] ! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts
	! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts
	I0310 20:52:12.092679   19212 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0310 20:52:12.787473   19212 cli_runner.go:115] Run: docker volume create default-k8s-different-port-20210310205202-6496 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20210310205202-6496 --label created_by.minikube.sigs.k8s.io=true
	I0310 20:52:13.428463   19212 oci.go:102] Successfully created a docker volume default-k8s-different-port-20210310205202-6496
	I0310 20:52:13.435392   19212 cli_runner.go:115] Run: docker run --rm --name default-k8s-different-port-20210310205202-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20210310205202-6496 --entrypoint /usr/bin/test -v default-k8s-different-port-20210310205202-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib
	I0310 20:52:29.572332   19212 cli_runner.go:168] Completed: docker run --rm --name default-k8s-different-port-20210310205202-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20210310205202-6496 --entrypoint /usr/bin/test -v default-k8s-different-port-20210310205202-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib: (16.1368128s)
	I0310 20:52:29.572332   19212 oci.go:106] Successfully prepared a docker volume default-k8s-different-port-20210310205202-6496
	I0310 20:52:29.572894   19212 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	I0310 20:52:29.573487   19212 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	I0310 20:52:29.573638   19212 kic.go:175] Starting extracting preloaded images to volume ...
	I0310 20:52:29.590100   19212 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 20:52:29.592600   19212 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20210310205202-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir
	W0310 20:52:30.649805   19212 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20210310205202-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I0310 20:52:30.649805   19212 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20210310205202-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir: (1.0572061s)
	I0310 20:52:30.649805   19212 kic.go:182] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20210310205202-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: System.Exception
	
	The notification platform is unavailable.
	[binary-serialized exception payload stripped]
	
	���?   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)
	   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.WPF\PromptShareDirectory.cs:line 53
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__8.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 95
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 55
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.HttpApi\Controllers\FilesharingController.cs:line 21
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()
	CreateToastNotifier
	Windows.UI, Version=255.255.255.255, Culture=neutral, PublicKeyToken=null, ContentType=WindowsRuntime
	Windows.UI.Notifications.ToastNotificationManager
	Windows.UI.Notifications.ToastNotifier CreateToastNotifier(System.String)
	RestrictedDescription: The notification platform is unavailable.
	[remainder of binary-serialized exception payload stripped]
	See 'docker run --help'.
	I0310 20:52:31.071805   19212 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.4817065s)
	I0310 20:52:31.072234   19212 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:7 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:164 OomKillDisable:true NGoroutines:162 SystemTime:2021-03-10 20:52:30.542769 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[
] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 20:52:31.082651   19212 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0310 20:52:32.131745   19212 cli_runner.go:168] Completed: docker info --format "'{{json .SecurityOptions}}'": (1.0490951s)
	I0310 20:52:32.142167   19212 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-different-port-20210310205202-6496 --name default-k8s-different-port-20210310205202-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20210310205202-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-different-port-20210310205202-6496 --volume default-k8s-different-port-20210310205202-6496:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
	I0310 20:52:38.976160   19212 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-different-port-20210310205202-6496 --name default-k8s-different-port-20210310205202-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20210310205202-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-different-port-20210310205202-6496 --volume default-k8s-different-port-20210310205202-6496:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e: (6.8336977s)
	I0310 20:52:38.988665   19212 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210310205202-6496 --format={{.State.Running}}
	I0310 20:52:39.615413   19212 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210310205202-6496 --format={{.State.Status}}
	I0310 20:52:40.246169   19212 cli_runner.go:115] Run: docker exec default-k8s-different-port-20210310205202-6496 stat /var/lib/dpkg/alternatives/iptables
	I0310 20:52:41.893829   19212 cli_runner.go:168] Completed: docker exec default-k8s-different-port-20210310205202-6496 stat /var/lib/dpkg/alternatives/iptables: (1.6473428s)
	I0310 20:52:41.893829   19212 oci.go:278] the created container "default-k8s-different-port-20210310205202-6496" has a running status.
	I0310 20:52:41.894039   19212 kic.go:206] Creating ssh key for kic: C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa...
	I0310 20:52:42.090365   19212 kic_runner.go:188] docker (temp): C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0310 20:52:44.753528   19212 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210310205202-6496 --format={{.State.Status}}
	I0310 20:52:45.403282   19212 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0310 20:52:45.403282   19212 kic_runner.go:115] Args: [docker exec --privileged default-k8s-different-port-20210310205202-6496 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0310 20:52:47.063665   19212 kic_runner.go:124] Done: [docker exec --privileged default-k8s-different-port-20210310205202-6496 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.6603849s)
	I0310 20:52:47.068596   19212 kic.go:240] ensuring only current user has permissions to key file located at : C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa...
	I0310 20:52:47.950023   19212 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210310205202-6496 --format={{.State.Status}}
	I0310 20:52:48.597419   19212 machine.go:88] provisioning docker machine ...
	I0310 20:52:48.597419   19212 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20210310205202-6496"
	I0310 20:52:48.606383   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 20:52:49.208145   19212 main.go:121] libmachine: Using SSH client type: native
	I0310 20:52:49.208592   19212 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55156 <nil> <nil>}
	I0310 20:52:49.208925   19212 main.go:121] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20210310205202-6496 && echo "default-k8s-different-port-20210310205202-6496" | sudo tee /etc/hostname
	I0310 20:52:49.225048   19212 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0310 20:52:54.226989   19212 main.go:121] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20210310205202-6496
	
	I0310 20:52:54.235156   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 20:52:54.985133   19212 main.go:121] libmachine: Using SSH client type: native
	I0310 20:52:54.993193   19212 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55156 <nil> <nil>}
	I0310 20:52:54.993193   19212 main.go:121] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20210310205202-6496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20210310205202-6496/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20210310205202-6496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0310 20:52:56.631270   19212 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	I0310 20:52:56.631683   19212 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	I0310 20:52:56.631683   19212 ubuntu.go:177] setting up certificates
	I0310 20:52:56.631683   19212 provision.go:83] configureAuth start
	I0310 20:52:56.638949   19212 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20210310205202-6496
	I0310 20:52:57.269792   19212 provision.go:137] copyHostCerts
	I0310 20:52:57.270405   19212 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	I0310 20:52:57.270405   19212 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	I0310 20:52:57.270885   19212 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	I0310 20:52:57.275769   19212 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	I0310 20:52:57.275915   19212 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	I0310 20:52:57.280406   19212 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	I0310 20:52:57.280406   19212 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	I0310 20:52:57.280406   19212 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	I0310 20:52:57.280406   19212 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	I0310 20:52:57.280406   19212 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.default-k8s-different-port-20210310205202-6496 san=[172.17.0.9 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20210310205202-6496]
	I0310 20:52:57.545773   19212 provision.go:165] copyRemoteCerts
	I0310 20:52:57.560714   19212 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0310 20:52:57.568827   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 20:52:58.202620   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 20:52:58.983789   19212 ssh_runner.go:189] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.4230771s)
	I0310 20:52:58.984989   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0310 20:52:59.795250   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1306 bytes)
	I0310 20:53:00.611552   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0310 20:53:01.602679   19212 provision.go:86] duration metric: configureAuth took 4.9710032s
	I0310 20:53:01.602943   19212 ubuntu.go:193] setting minikube options for container-runtime
	I0310 20:53:01.621736   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 20:53:02.319621   19212 main.go:121] libmachine: Using SSH client type: native
	I0310 20:53:02.320904   19212 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55156 <nil> <nil>}
	I0310 20:53:02.321103   19212 main.go:121] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0310 20:53:04.170400   19212 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0310 20:53:04.170510   19212 ubuntu.go:71] root file system type: overlay
	I0310 20:53:04.171277   19212 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	I0310 20:53:04.178742   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 20:53:04.859279   19212 main.go:121] libmachine: Using SSH client type: native
	I0310 20:53:04.863201   19212 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55156 <nil> <nil>}
	I0310 20:53:04.863201   19212 main.go:121] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0310 20:53:06.575689   19212 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0310 20:53:06.584018   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 20:53:07.232077   19212 main.go:121] libmachine: Using SSH client type: native
	I0310 20:53:07.232739   19212 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55156 <nil> <nil>}
	I0310 20:53:07.232739   19212 main.go:121] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0310 20:53:28.636828   19212 main.go:121] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-01-29 14:31:32.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-03-10 20:53:06.534922000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0310 20:53:28.636828   19212 machine.go:91] provisioned docker machine in 40.0394612s
	I0310 20:53:28.636828   19212 client.go:171] LocalClient.Create took 1m19.3431522s
	I0310 20:53:28.636828   19212 start.go:168] duration metric: libmachine.API.Create for "default-k8s-different-port-20210310205202-6496" took 1m19.3434255s
	I0310 20:53:28.636828   19212 start.go:267] post-start starting for "default-k8s-different-port-20210310205202-6496" (driver="docker")
	I0310 20:53:28.636828   19212 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0310 20:53:28.648348   19212 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0310 20:53:28.659160   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 20:53:29.268024   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 20:53:29.685692   19212 ssh_runner.go:189] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.0368464s)
	I0310 20:53:29.695299   19212 ssh_runner.go:149] Run: cat /etc/os-release
	I0310 20:53:29.741570   19212 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0310 20:53:29.741836   19212 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0310 20:53:29.741836   19212 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0310 20:53:29.741836   19212 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0310 20:53:29.741982   19212 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	I0310 20:53:29.742635   19212 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	I0310 20:53:29.753227   19212 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	I0310 20:53:29.754059   19212 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	I0310 20:53:29.765542   19212 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	I0310 20:53:29.890142   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	I0310 20:53:30.288551   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	I0310 20:53:30.988390   19212 start.go:270] post-start completed in 2.3515645s
	I0310 20:53:31.039289   19212 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20210310205202-6496
	I0310 20:53:31.716453   19212 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\default-k8s-different-port-20210310205202-6496\config.json ...
	I0310 20:53:31.741528   19212 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0310 20:53:31.754000   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 20:53:32.377958   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 20:53:33.044141   19212 ssh_runner.go:189] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.3026151s)
	I0310 20:53:33.045131   19212 start.go:129] duration metric: createHost completed in 1m23.7567119s
	I0310 20:53:33.045131   19212 start.go:80] releasing machines lock for "default-k8s-different-port-20210310205202-6496", held for 1m23.7569804s
	I0310 20:53:33.063726   19212 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20210310205202-6496
	I0310 20:53:33.667760   19212 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0310 20:53:33.677976   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 20:53:33.678867   19212 ssh_runner.go:149] Run: systemctl --version
	I0310 20:53:33.686516   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 20:53:34.332852   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 20:53:34.373238   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 20:53:34.913158   19212 ssh_runner.go:189] Completed: systemctl --version: (1.233922s)
	I0310 20:53:34.922900   19212 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0310 20:53:35.533817   19212 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.8659383s)
	I0310 20:53:35.544172   19212 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 20:53:35.735007   19212 cruntime.go:206] skipping containerd shutdown because we are bound to it
	I0310 20:53:35.748974   19212 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0310 20:53:36.015487   19212 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0310 20:53:36.516813   19212 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 20:53:36.658323   19212 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0310 20:53:38.185343   19212 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.5270216s)
	I0310 20:53:38.211464   19212 ssh_runner.go:149] Run: sudo systemctl start docker
	I0310 20:53:38.411077   19212 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0310 20:53:39.629034   19212 ssh_runner.go:189] Completed: docker version --format {{.Server.Version}}: (1.217958s)
	I0310 20:53:39.632789   19212 out.go:150] * Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	I0310 20:53:39.636279   19212 cli_runner.go:115] Run: docker exec -t default-k8s-different-port-20210310205202-6496 dig +short host.docker.internal
	I0310 20:53:40.908756   19212 cli_runner.go:168] Completed: docker exec -t default-k8s-different-port-20210310205202-6496 dig +short host.docker.internal: (1.2724786s)
	I0310 20:53:40.908969   19212 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	I0310 20:53:40.921965   19212 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0310 20:53:40.946693   19212 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0310 20:53:41.154900   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 20:53:41.739951   19212 localpath.go:92] copying C:\Users\jenkins\.minikube\client.crt -> C:\Users\jenkins\.minikube\profiles\default-k8s-different-port-20210310205202-6496\client.crt
	I0310 20:53:41.749190   19212 localpath.go:117] copying C:\Users\jenkins\.minikube\client.key -> C:\Users\jenkins\.minikube\profiles\default-k8s-different-port-20210310205202-6496\client.key
	I0310 20:53:41.752472   19212 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	I0310 20:53:41.752472   19212 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	I0310 20:53:41.760411   19212 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 20:53:42.301615   19212 docker.go:423] Got preloaded images: 
	I0310 20:53:42.301615   19212 docker.go:429] k8s.gcr.io/kube-proxy:v1.20.2 wasn't preloaded
	I0310 20:53:42.311827   19212 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0310 20:53:42.386991   19212 ssh_runner.go:149] Run: which lz4
	I0310 20:53:42.456095   19212 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0310 20:53:42.512507   19212 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0310 20:53:42.513882   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (515083977 bytes)
	I0310 20:55:07.443676   19212 docker.go:388] Took 84.997318 seconds to copy over tarball
	I0310 20:55:07.447748   19212 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0310 20:56:20.886818   19212 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1m13.4397987s)
	I0310 20:56:20.887044   19212 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0310 20:56:24.263300   19212 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0310 20:56:24.404566   19212 ssh_runner.go:316] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3125 bytes)
	I0310 20:56:24.798855   19212 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0310 20:56:26.569462   19212 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.7706224s)
	I0310 20:56:26.586319   19212 ssh_runner.go:149] Run: sudo systemctl restart docker
	I0310 20:56:35.393558   19212 ssh_runner.go:189] Completed: sudo systemctl restart docker: (8.8071561s)
	I0310 20:56:35.407421   19212 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 20:56:37.168698   19212 ssh_runner.go:189] Completed: docker images --format {{.Repository}}:{{.Tag}}: (1.7612921s)
	I0310 20:56:37.168698   19212 docker.go:423] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.20.2
	k8s.gcr.io/kube-apiserver:v1.20.2
	k8s.gcr.io/kube-controller-manager:v1.20.2
	k8s.gcr.io/kube-scheduler:v1.20.2
	kubernetesui/dashboard:v2.1.0
	gcr.io/k8s-minikube/storage-provisioner:v4
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0310 20:56:37.168698   19212 cache_images.go:73] Images are preloaded, skipping loading
	I0310 20:56:37.180757   19212 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0310 20:56:41.703204   19212 ssh_runner.go:189] Completed: docker info --format {{.CgroupDriver}}: (4.5224841s)
	I0310 20:56:41.704076   19212 cni.go:74] Creating CNI manager for ""
	I0310 20:56:41.704076   19212 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	I0310 20:56:41.704076   19212 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0310 20:56:41.704076   19212 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.9 APIServerPort:8444 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20210310205202-6496 NodeName:default-k8s-different-port-20210310205202-6496 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.9"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.9 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0310 20:56:41.704887   19212 kubeadm.go:154] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.0.9
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "default-k8s-different-port-20210310205202-6496"
	  kubeletExtraArgs:
	    node-ip: 172.17.0.9
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.0.9"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	
	I0310 20:56:41.705285   19212 kubeadm.go:919] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=default-k8s-different-port-20210310205202-6496 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.9
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.2 ClusterName:default-k8s-different-port-20210310205202-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0310 20:56:41.710239   19212 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
	I0310 20:56:41.843968   19212 binaries.go:44] Found k8s binaries, skipping transfer
	I0310 20:56:41.854898   19212 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0310 20:56:41.993267   19212 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (370 bytes)
	I0310 20:56:42.237577   19212 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0310 20:56:42.491756   19212 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1872 bytes)
	I0310 20:56:42.977624   19212 ssh_runner.go:149] Run: grep 172.17.0.9	control-plane.minikube.internal$ /etc/hosts
	I0310 20:56:43.087645   19212 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "172.17.0.9	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0310 20:56:43.399931   19212 certs.go:52] Setting up C:\Users\jenkins\.minikube\profiles\default-k8s-different-port-20210310205202-6496 for IP: 172.17.0.9
	I0310 20:56:43.400242   19212 certs.go:171] skipping minikubeCA CA generation: C:\Users\jenkins\.minikube\ca.key
	I0310 20:56:43.400644   19212 certs.go:171] skipping proxyClientCA CA generation: C:\Users\jenkins\.minikube\proxy-client-ca.key
	I0310 20:56:43.401489   19212 certs.go:275] skipping minikube-user signed cert generation: C:\Users\jenkins\.minikube\profiles\default-k8s-different-port-20210310205202-6496\client.key
	I0310 20:56:43.401489   19212 certs.go:279] generating minikube signed cert: C:\Users\jenkins\.minikube\profiles\default-k8s-different-port-20210310205202-6496\apiserver.key.fccbd15f
	I0310 20:56:43.401922   19212 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\default-k8s-different-port-20210310205202-6496\apiserver.crt.fccbd15f with IP's: [172.17.0.9 10.96.0.1 127.0.0.1 10.0.0.1]
	I0310 20:56:43.681240   19212 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\default-k8s-different-port-20210310205202-6496\apiserver.crt.fccbd15f ...
	I0310 20:56:43.681240   19212 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\default-k8s-different-port-20210310205202-6496\apiserver.crt.fccbd15f: {Name:mkb782d456751c9b4f5ce0ad7859c66d7d788e59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:56:43.702669   19212 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\default-k8s-different-port-20210310205202-6496\apiserver.key.fccbd15f ...
	I0310 20:56:43.702669   19212 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\default-k8s-different-port-20210310205202-6496\apiserver.key.fccbd15f: {Name:mkf7f221ce2b4efb48f797afae9ccc6c91d2e1f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:56:43.724218   19212 certs.go:290] copying C:\Users\jenkins\.minikube\profiles\default-k8s-different-port-20210310205202-6496\apiserver.crt.fccbd15f -> C:\Users\jenkins\.minikube\profiles\default-k8s-different-port-20210310205202-6496\apiserver.crt
	I0310 20:56:43.727740   19212 certs.go:294] copying C:\Users\jenkins\.minikube\profiles\default-k8s-different-port-20210310205202-6496\apiserver.key.fccbd15f -> C:\Users\jenkins\.minikube\profiles\default-k8s-different-port-20210310205202-6496\apiserver.key
	I0310 20:56:43.733570   19212 certs.go:279] generating aggregator signed cert: C:\Users\jenkins\.minikube\profiles\default-k8s-different-port-20210310205202-6496\proxy-client.key
	I0310 20:56:43.733570   19212 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\default-k8s-different-port-20210310205202-6496\proxy-client.crt with IP's: []
	I0310 20:56:44.006903   19212 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\default-k8s-different-port-20210310205202-6496\proxy-client.crt ...
	I0310 20:56:44.006903   19212 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\default-k8s-different-port-20210310205202-6496\proxy-client.crt: {Name:mk54a1bdcd2353cfefcb664d4912169c85f768da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:56:44.026872   19212 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\default-k8s-different-port-20210310205202-6496\proxy-client.key ...
	I0310 20:56:44.026872   19212 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\default-k8s-different-port-20210310205202-6496\proxy-client.key: {Name:mk6894795961811a075dbc97a68c0e9c703a8850 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:56:44.056595   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992.pem (1338 bytes)
	W0310 20:56:44.057375   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.057375   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140.pem (1338 bytes)
	W0310 20:56:44.057946   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.058666   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156.pem (1338 bytes)
	W0310 20:56:44.062517   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.062517   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056.pem (1338 bytes)
	W0310 20:56:44.063176   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.063585   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476.pem (1338 bytes)
	W0310 20:56:44.064152   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.064152   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728.pem (1338 bytes)
	W0310 20:56:44.064896   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.065159   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984.pem (1338 bytes)
	W0310 20:56:44.065555   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.065555   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232.pem (1338 bytes)
	W0310 20:56:44.065555   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.065555   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512.pem (1338 bytes)
	W0310 20:56:44.066290   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.066590   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056.pem (1338 bytes)
	W0310 20:56:44.067093   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.067434   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516.pem (1338 bytes)
	W0310 20:56:44.067753   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.067753   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352.pem (1338 bytes)
	W0310 20:56:44.068255   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.068255   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920.pem (1338 bytes)
	W0310 20:56:44.069371   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.069675   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052.pem (1338 bytes)
	W0310 20:56:44.070059   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.070059   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452.pem (1338 bytes)
	W0310 20:56:44.071069   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.071300   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588.pem (1338 bytes)
	W0310 20:56:44.071784   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.071784   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944.pem (1338 bytes)
	W0310 20:56:44.071784   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.072788   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040.pem (1338 bytes)
	W0310 20:56:44.073245   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.073519   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172.pem (1338 bytes)
	W0310 20:56:44.073519   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.073519   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372.pem (1338 bytes)
	W0310 20:56:44.074162   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.074162   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396.pem (1338 bytes)
	W0310 20:56:44.074162   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.075162   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700.pem (1338 bytes)
	W0310 20:56:44.075162   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.075162   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736.pem (1338 bytes)
	W0310 20:56:44.075162   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.075162   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368.pem (1338 bytes)
	W0310 20:56:44.076158   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.076158   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492.pem (1338 bytes)
	W0310 20:56:44.076158   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.076158   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496.pem (1338 bytes)
	W0310 20:56:44.077172   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.077172   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552.pem (1338 bytes)
	W0310 20:56:44.077172   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.077172   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692.pem (1338 bytes)
	W0310 20:56:44.077172   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.077172   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856.pem (1338 bytes)
	W0310 20:56:44.077172   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.077172   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024.pem (1338 bytes)
	W0310 20:56:44.077172   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.077172   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160.pem (1338 bytes)
	W0310 20:56:44.077172   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.077172   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432.pem (1338 bytes)
	W0310 20:56:44.077172   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.077172   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440.pem (1338 bytes)
	W0310 20:56:44.077172   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.077172   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452.pem (1338 bytes)
	W0310 20:56:44.077172   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.077172   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800.pem (1338 bytes)
	W0310 20:56:44.077172   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.077172   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464.pem (1338 bytes)
	W0310 20:56:44.077172   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.077172   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748.pem (1338 bytes)
	W0310 20:56:44.077172   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.077172   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088.pem (1338 bytes)
	W0310 20:56:44.077172   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.077172   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520.pem (1338 bytes)
	W0310 20:56:44.077172   19212 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520_empty.pem, impossibly tiny 0 bytes
	I0310 20:56:44.077172   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca-key.pem (1679 bytes)
	I0310 20:56:44.077172   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca.pem (1078 bytes)
	I0310 20:56:44.083232   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\cert.pem (1123 bytes)
	I0310 20:56:44.083232   19212 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\key.pem (1679 bytes)
	I0310 20:56:44.092156   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\default-k8s-different-port-20210310205202-6496\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0310 20:56:44.614291   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\default-k8s-different-port-20210310205202-6496\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0310 20:56:45.085490   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\default-k8s-different-port-20210310205202-6496\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0310 20:56:45.475411   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\default-k8s-different-port-20210310205202-6496\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0310 20:56:45.893072   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0310 20:56:46.156983   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0310 20:56:46.678908   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0310 20:56:47.231398   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0310 20:56:47.722604   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7440.pem --> /usr/share/ca-certificates/7440.pem (1338 bytes)
	I0310 20:56:48.034003   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0310 20:56:48.259055   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\352.pem --> /usr/share/ca-certificates/352.pem (1338 bytes)
	I0310 20:56:48.534768   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6368.pem --> /usr/share/ca-certificates/6368.pem (1338 bytes)
	I0310 20:56:48.972400   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\800.pem --> /usr/share/ca-certificates/800.pem (1338 bytes)
	I0310 20:56:49.442113   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4452.pem --> /usr/share/ca-certificates/4452.pem (1338 bytes)
	I0310 20:56:50.054515   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1140.pem --> /usr/share/ca-certificates/1140.pem (1338 bytes)
	I0310 20:56:50.591383   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1476.pem --> /usr/share/ca-certificates/1476.pem (1338 bytes)
	I0310 20:56:51.227631   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5700.pem --> /usr/share/ca-certificates/5700.pem (1338 bytes)
	I0310 20:56:51.627904   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5372.pem --> /usr/share/ca-certificates/5372.pem (1338 bytes)
	I0310 20:56:52.191367   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3920.pem --> /usr/share/ca-certificates/3920.pem (1338 bytes)
	I0310 20:56:52.864044   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5396.pem --> /usr/share/ca-certificates/5396.pem (1338 bytes)
	I0310 20:56:53.145430   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6856.pem --> /usr/share/ca-certificates/6856.pem (1338 bytes)
	I0310 20:56:53.662100   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6552.pem --> /usr/share/ca-certificates/6552.pem (1338 bytes)
	I0310 20:56:54.434726   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8464.pem --> /usr/share/ca-certificates/8464.pem (1338 bytes)
	I0310 20:56:55.197568   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7432.pem --> /usr/share/ca-certificates/7432.pem (1338 bytes)
	I0310 20:56:55.613328   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6496.pem --> /usr/share/ca-certificates/6496.pem (1338 bytes)
	I0310 20:56:56.218117   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3516.pem --> /usr/share/ca-certificates/3516.pem (1338 bytes)
	I0310 20:56:56.718299   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7024.pem --> /usr/share/ca-certificates/7024.pem (1338 bytes)
	I0310 20:56:56.946255   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1728.pem --> /usr/share/ca-certificates/1728.pem (1338 bytes)
	I0310 20:56:57.185162   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	I0310 20:56:57.733362   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4588.pem --> /usr/share/ca-certificates/4588.pem (1338 bytes)
	I0310 20:56:58.255426   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7160.pem --> /usr/share/ca-certificates/7160.pem (1338 bytes)
	I0310 20:56:59.037927   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\2512.pem --> /usr/share/ca-certificates/2512.pem (1338 bytes)
	I0310 20:56:59.745935   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\232.pem --> /usr/share/ca-certificates/232.pem (1338 bytes)
	I0310 20:57:00.212382   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5736.pem --> /usr/share/ca-certificates/5736.pem (1338 bytes)
	I0310 20:57:00.862497   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6692.pem --> /usr/share/ca-certificates/6692.pem (1338 bytes)
	I0310 20:57:01.365761   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4052.pem --> /usr/share/ca-certificates/4052.pem (1338 bytes)
	I0310 20:57:01.783912   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9088.pem --> /usr/share/ca-certificates/9088.pem (1338 bytes)
	I0310 20:57:02.208131   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5040.pem --> /usr/share/ca-certificates/5040.pem (1338 bytes)
	I0310 20:57:02.951770   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9520.pem --> /usr/share/ca-certificates/9520.pem (1338 bytes)
	I0310 20:57:03.303291   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8748.pem --> /usr/share/ca-certificates/8748.pem (1338 bytes)
	I0310 20:57:03.896632   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\10992.pem --> /usr/share/ca-certificates/10992.pem (1338 bytes)
	I0310 20:57:04.334616   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5172.pem --> /usr/share/ca-certificates/5172.pem (1338 bytes)
	I0310 20:57:04.818722   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1984.pem --> /usr/share/ca-certificates/1984.pem (1338 bytes)
	I0310 20:57:05.347909   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4944.pem --> /usr/share/ca-certificates/4944.pem (1338 bytes)
	I0310 20:57:05.979549   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\12056.pem --> /usr/share/ca-certificates/12056.pem (1338 bytes)
	I0310 20:57:06.561593   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6492.pem --> /usr/share/ca-certificates/6492.pem (1338 bytes)
	I0310 20:57:07.217600   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3056.pem --> /usr/share/ca-certificates/3056.pem (1338 bytes)
	I0310 20:57:07.677967   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1156.pem --> /usr/share/ca-certificates/1156.pem (1338 bytes)
	I0310 20:57:08.213393   19212 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0310 20:57:08.447726   19212 ssh_runner.go:149] Run: openssl version
	I0310 20:57:08.508691   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9520.pem && ln -fs /usr/share/ca-certificates/9520.pem /etc/ssl/certs/9520.pem"
	I0310 20:57:08.615514   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9520.pem
	I0310 20:57:08.652883   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 14:54 /usr/share/ca-certificates/9520.pem
	I0310 20:57:08.664595   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9520.pem
	I0310 20:57:08.775982   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9520.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:08.893941   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8748.pem && ln -fs /usr/share/ca-certificates/8748.pem /etc/ssl/certs/8748.pem"
	I0310 20:57:08.997555   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8748.pem
	I0310 20:57:09.045623   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 19:09 /usr/share/ca-certificates/8748.pem
	I0310 20:57:09.060763   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8748.pem
	I0310 20:57:09.099336   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8748.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:09.180391   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10992.pem && ln -fs /usr/share/ca-certificates/10992.pem /etc/ssl/certs/10992.pem"
	I0310 20:57:09.235438   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/10992.pem
	I0310 20:57:09.258282   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 21:44 /usr/share/ca-certificates/10992.pem
	I0310 20:57:09.269990   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10992.pem
	I0310 20:57:09.313156   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10992.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:09.407280   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5172.pem && ln -fs /usr/share/ca-certificates/5172.pem /etc/ssl/certs/5172.pem"
	I0310 20:57:09.468484   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5172.pem
	I0310 20:57:09.489516   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 26 21:25 /usr/share/ca-certificates/5172.pem
	I0310 20:57:09.501734   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5172.pem
	I0310 20:57:09.562595   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5172.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:09.620484   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6692.pem && ln -fs /usr/share/ca-certificates/6692.pem /etc/ssl/certs/6692.pem"
	I0310 20:57:09.700613   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6692.pem
	I0310 20:57:09.750614   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 20:42 /usr/share/ca-certificates/6692.pem
	I0310 20:57:09.757656   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6692.pem
	I0310 20:57:09.822280   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6692.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:09.889941   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4052.pem && ln -fs /usr/share/ca-certificates/4052.pem /etc/ssl/certs/4052.pem"
	I0310 20:57:09.945569   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4052.pem
	I0310 20:57:09.972156   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 18:40 /usr/share/ca-certificates/4052.pem
	I0310 20:57:09.991409   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4052.pem
	I0310 20:57:10.042725   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4052.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:10.136075   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9088.pem && ln -fs /usr/share/ca-certificates/9088.pem /etc/ssl/certs/9088.pem"
	I0310 20:57:10.221543   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9088.pem
	I0310 20:57:10.261174   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 00:22 /usr/share/ca-certificates/9088.pem
	I0310 20:57:10.272105   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9088.pem
	I0310 20:57:10.365197   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9088.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:10.419092   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5040.pem && ln -fs /usr/share/ca-certificates/5040.pem /etc/ssl/certs/5040.pem"
	I0310 20:57:10.505975   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5040.pem
	I0310 20:57:10.532133   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 08:36 /usr/share/ca-certificates/5040.pem
	I0310 20:57:10.551574   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5040.pem
	I0310 20:57:10.655506   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5040.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:10.713285   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1984.pem && ln -fs /usr/share/ca-certificates/1984.pem /etc/ssl/certs/1984.pem"
	I0310 20:57:10.780756   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1984.pem
	I0310 20:57:10.817741   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 21:55 /usr/share/ca-certificates/1984.pem
	I0310 20:57:10.836148   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1984.pem
	I0310 20:57:10.903762   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1984.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:10.997036   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6492.pem && ln -fs /usr/share/ca-certificates/6492.pem /etc/ssl/certs/6492.pem"
	I0310 20:57:11.123138   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6492.pem
	I0310 20:57:11.154582   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 01:11 /usr/share/ca-certificates/6492.pem
	I0310 20:57:11.166106   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6492.pem
	I0310 20:57:11.247333   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6492.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:11.320419   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3056.pem && ln -fs /usr/share/ca-certificates/3056.pem /etc/ssl/certs/3056.pem"
	I0310 20:57:11.604475   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3056.pem
	I0310 20:57:11.650406   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:11 /usr/share/ca-certificates/3056.pem
	I0310 20:57:11.661493   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3056.pem
	I0310 20:57:11.733559   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3056.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:11.806547   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1156.pem && ln -fs /usr/share/ca-certificates/1156.pem /etc/ssl/certs/1156.pem"
	I0310 20:57:11.872475   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1156.pem
	I0310 20:57:11.901208   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 00:26 /usr/share/ca-certificates/1156.pem
	I0310 20:57:11.915014   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1156.pem
	I0310 20:57:11.992007   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1156.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:12.124679   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4944.pem && ln -fs /usr/share/ca-certificates/4944.pem /etc/ssl/certs/4944.pem"
	I0310 20:57:12.195097   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4944.pem
	I0310 20:57:12.231664   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  9 23:40 /usr/share/ca-certificates/4944.pem
	I0310 20:57:12.243195   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4944.pem
	I0310 20:57:12.289556   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4944.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:12.369218   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12056.pem && ln -fs /usr/share/ca-certificates/12056.pem /etc/ssl/certs/12056.pem"
	I0310 20:57:12.459474   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/12056.pem
	I0310 20:57:12.499718   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  6 07:21 /usr/share/ca-certificates/12056.pem
	I0310 20:57:12.511801   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12056.pem
	I0310 20:57:12.552864   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12056.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:12.611474   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7440.pem && ln -fs /usr/share/ca-certificates/7440.pem /etc/ssl/certs/7440.pem"
	I0310 20:57:12.681513   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7440.pem
	I0310 20:57:12.708759   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 13 14:39 /usr/share/ca-certificates/7440.pem
	I0310 20:57:12.719694   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7440.pem
	I0310 20:57:12.765077   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7440.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:12.863206   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0310 20:57:12.956663   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0310 20:57:12.984265   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Jan  5 23:33 /usr/share/ca-certificates/minikubeCA.pem
	I0310 20:57:12.997190   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0310 20:57:13.037401   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0310 20:57:13.147507   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/800.pem && ln -fs /usr/share/ca-certificates/800.pem /etc/ssl/certs/800.pem"
	I0310 20:57:13.233958   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/800.pem
	I0310 20:57:13.284645   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 24 01:48 /usr/share/ca-certificates/800.pem
	I0310 20:57:13.295311   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/800.pem
	I0310 20:57:13.343766   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/800.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:13.408668   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4452.pem && ln -fs /usr/share/ca-certificates/4452.pem /etc/ssl/certs/4452.pem"
	I0310 20:57:13.555385   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4452.pem
	I0310 20:57:13.589099   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 22 23:48 /usr/share/ca-certificates/4452.pem
	I0310 20:57:13.609291   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4452.pem
	I0310 20:57:13.658993   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4452.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:13.722323   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1140.pem && ln -fs /usr/share/ca-certificates/1140.pem /etc/ssl/certs/1140.pem"
	I0310 20:57:13.836011   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1140.pem
	I0310 20:57:13.861909   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 02:25 /usr/share/ca-certificates/1140.pem
	I0310 20:57:13.874814   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1140.pem
	I0310 20:57:13.919737   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1140.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:14.006178   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/352.pem && ln -fs /usr/share/ca-certificates/352.pem /etc/ssl/certs/352.pem"
	I0310 20:57:14.094299   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/352.pem
	I0310 20:57:14.127232   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 12 14:51 /usr/share/ca-certificates/352.pem
	I0310 20:57:14.137545   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/352.pem
	I0310 20:57:14.192266   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/352.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:14.383259   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6368.pem && ln -fs /usr/share/ca-certificates/6368.pem /etc/ssl/certs/6368.pem"
	I0310 20:57:14.487132   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6368.pem
	I0310 20:57:14.537325   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 19:11 /usr/share/ca-certificates/6368.pem
	I0310 20:57:14.556280   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6368.pem
	I0310 20:57:14.610892   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6368.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:14.683470   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5700.pem && ln -fs /usr/share/ca-certificates/5700.pem /etc/ssl/certs/5700.pem"
	I0310 20:57:14.863013   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5700.pem
	I0310 20:57:14.910258   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  1 19:58 /usr/share/ca-certificates/5700.pem
	I0310 20:57:14.919520   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5700.pem
	I0310 20:57:15.056260   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5700.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:15.154541   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5372.pem && ln -fs /usr/share/ca-certificates/5372.pem /etc/ssl/certs/5372.pem"
	I0310 20:57:15.243407   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5372.pem
	I0310 20:57:15.326922   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 23 00:40 /usr/share/ca-certificates/5372.pem
	I0310 20:57:15.331904   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5372.pem
	I0310 20:57:15.425730   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5372.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:15.496229   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3920.pem && ln -fs /usr/share/ca-certificates/3920.pem /etc/ssl/certs/3920.pem"
	I0310 20:57:15.563592   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3920.pem
	I0310 20:57:15.590574   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 22:06 /usr/share/ca-certificates/3920.pem
	I0310 20:57:15.606747   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3920.pem
	I0310 20:57:15.728205   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3920.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:15.848672   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1476.pem && ln -fs /usr/share/ca-certificates/1476.pem /etc/ssl/certs/1476.pem"
	I0310 20:57:15.949593   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1476.pem
	I0310 20:57:15.983651   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:50 /usr/share/ca-certificates/1476.pem
	I0310 20:57:15.995739   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1476.pem
	I0310 20:57:16.104244   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1476.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:16.183126   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7432.pem && ln -fs /usr/share/ca-certificates/7432.pem /etc/ssl/certs/7432.pem"
	I0310 20:57:16.278046   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7432.pem
	I0310 20:57:16.319491   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 17:58 /usr/share/ca-certificates/7432.pem
	I0310 20:57:16.330947   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7432.pem
	I0310 20:57:16.382509   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7432.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:16.440971   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6496.pem && ln -fs /usr/share/ca-certificates/6496.pem /etc/ssl/certs/6496.pem"
	I0310 20:57:16.526508   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6496.pem
	I0310 20:57:16.570001   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 19:16 /usr/share/ca-certificates/6496.pem
	I0310 20:57:16.590350   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6496.pem
	I0310 20:57:16.682979   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6496.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:16.741253   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3516.pem && ln -fs /usr/share/ca-certificates/3516.pem /etc/ssl/certs/3516.pem"
	I0310 20:57:16.841134   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3516.pem
	I0310 20:57:16.880577   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 19:10 /usr/share/ca-certificates/3516.pem
	I0310 20:57:16.892169   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3516.pem
	I0310 20:57:16.976353   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3516.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:17.119246   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7024.pem && ln -fs /usr/share/ca-certificates/7024.pem /etc/ssl/certs/7024.pem"
	I0310 20:57:17.216188   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7024.pem
	I0310 20:57:17.265837   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 23:11 /usr/share/ca-certificates/7024.pem
	I0310 20:57:17.275349   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7024.pem
	I0310 20:57:17.346158   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7024.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:17.431541   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5396.pem && ln -fs /usr/share/ca-certificates/5396.pem /etc/ssl/certs/5396.pem"
	I0310 20:57:17.534447   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5396.pem
	I0310 20:57:17.568410   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  8 23:38 /usr/share/ca-certificates/5396.pem
	I0310 20:57:17.572136   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5396.pem
	I0310 20:57:17.636921   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5396.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:17.726377   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6856.pem && ln -fs /usr/share/ca-certificates/6856.pem /etc/ssl/certs/6856.pem"
	I0310 20:57:17.785987   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6856.pem
	I0310 20:57:17.811799   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 00:21 /usr/share/ca-certificates/6856.pem
	I0310 20:57:17.827919   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6856.pem
	I0310 20:57:17.939545   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6856.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:18.004088   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6552.pem && ln -fs /usr/share/ca-certificates/6552.pem /etc/ssl/certs/6552.pem"
	I0310 20:57:18.085282   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6552.pem
	I0310 20:57:18.123594   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 19 22:08 /usr/share/ca-certificates/6552.pem
	I0310 20:57:18.136908   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6552.pem
	I0310 20:57:18.203889   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6552.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:18.319034   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8464.pem && ln -fs /usr/share/ca-certificates/8464.pem /etc/ssl/certs/8464.pem"
	I0310 20:57:18.409315   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8464.pem
	I0310 20:57:18.501535   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:32 /usr/share/ca-certificates/8464.pem
	I0310 20:57:18.520355   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8464.pem
	I0310 20:57:18.597903   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8464.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:18.727987   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7452.pem && ln -fs /usr/share/ca-certificates/7452.pem /etc/ssl/certs/7452.pem"
	I0310 20:57:18.888745   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7452.pem
	I0310 20:57:18.940936   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 20 00:41 /usr/share/ca-certificates/7452.pem
	I0310 20:57:18.949013   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7452.pem
	I0310 20:57:19.035075   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7452.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:19.143269   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4588.pem && ln -fs /usr/share/ca-certificates/4588.pem /etc/ssl/certs/4588.pem"
	I0310 20:57:19.231573   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4588.pem
	I0310 20:57:19.258933   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  3 21:41 /usr/share/ca-certificates/4588.pem
	I0310 20:57:19.265304   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4588.pem
	I0310 20:57:19.340973   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4588.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:19.435952   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7160.pem && ln -fs /usr/share/ca-certificates/7160.pem /etc/ssl/certs/7160.pem"
	I0310 20:57:19.515915   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7160.pem
	I0310 20:57:19.545667   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 12 04:51 /usr/share/ca-certificates/7160.pem
	I0310 20:57:19.564561   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7160.pem
	I0310 20:57:19.629238   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7160.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:19.690762   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1728.pem && ln -fs /usr/share/ca-certificates/1728.pem /etc/ssl/certs/1728.pem"
	I0310 20:57:19.773279   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1728.pem
	I0310 20:57:19.796775   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 20:57 /usr/share/ca-certificates/1728.pem
	I0310 20:57:19.813294   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1728.pem
	I0310 20:57:19.874200   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:19.958288   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2512.pem && ln -fs /usr/share/ca-certificates/2512.pem /etc/ssl/certs/2512.pem"
	I0310 20:57:20.045423   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2512.pem
	I0310 20:57:20.081675   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:32 /usr/share/ca-certificates/2512.pem
	I0310 20:57:20.094241   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2512.pem
	I0310 20:57:20.133672   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2512.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:20.200875   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/232.pem && ln -fs /usr/share/ca-certificates/232.pem /etc/ssl/certs/232.pem"
	I0310 20:57:20.280799   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/232.pem
	I0310 20:57:20.332789   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 28 02:13 /usr/share/ca-certificates/232.pem
	I0310 20:57:20.338119   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/232.pem
	I0310 20:57:20.409227   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/232.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:20.505704   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5736.pem && ln -fs /usr/share/ca-certificates/5736.pem /etc/ssl/certs/5736.pem"
	I0310 20:57:20.583029   19212 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5736.pem
	I0310 20:57:20.627201   19212 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 25 23:18 /usr/share/ca-certificates/5736.pem
	I0310 20:57:20.637910   19212 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5736.pem
	I0310 20:57:20.686519   19212 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5736.pem /etc/ssl/certs/51391683.0"
	I0310 20:57:20.745628   19212 kubeadm.go:385] StartCluster: {Name:default-k8s-different-port-20210310205202-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:default-k8s-different-port-20210310205202-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:172.17.0.9 Port:8444 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 20:57:20.754004   19212 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0310 20:57:21.253152   19212 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0310 20:57:21.311754   19212 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0310 20:57:21.388587   19212 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	I0310 20:57:21.396302   19212 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0310 20:57:21.493039   19212 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0310 20:57:21.493336   19212 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0310 21:01:52.086989   19212 out.go:150]   - Generating certificates and keys ...
	I0310 21:01:52.092089   19212 out.go:150]   - Booting up control plane ...
	I0310 21:01:52.096564   19212 out.go:150]   - Configuring RBAC rules ...
	I0310 21:01:52.102614   19212 cni.go:74] Creating CNI manager for ""
	I0310 21:01:52.102931   19212 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	I0310 21:01:52.102931   19212 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0310 21:01:52.114182   19212 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0310 21:01:52.118185   19212 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.18.1 minikube.k8s.io/commit=4d52d607da107e3e4541e40b81376520ee87d4c2 minikube.k8s.io/name=default-k8s-different-port-20210310205202-6496 minikube.k8s.io/updated_at=2021_03_10T21_01_52_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0310 21:01:53.382231   19212 ssh_runner.go:189] Completed: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": (1.2793047s)
	I0310 21:01:53.382843   19212 ops.go:34] apiserver oom_adj: -16
	I0310 21:02:04.571434   19212 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (12.4572927s)
	I0310 21:02:04.589918   19212 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0310 21:02:22.391396   19212 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.18.1 minikube.k8s.io/commit=4d52d607da107e3e4541e40b81376520ee87d4c2 minikube.k8s.io/name=default-k8s-different-port-20210310205202-6496 minikube.k8s.io/updated_at=2021_03_10T21_01_52_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (30.2725385s)
	I0310 21:02:33.663314   19212 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (29.0734861s)
	I0310 21:02:34.176057   19212 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0310 21:02:47.625195   19212 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (13.4488926s)
	I0310 21:02:47.625195   19212 kubeadm.go:995] duration metric: took 55.5224371s to wait for elevateKubeSystemPrivileges.
	I0310 21:02:47.625459   19212 kubeadm.go:387] StartCluster complete in 5m26.8813357s
	I0310 21:02:47.625641   19212 settings.go:142] acquiring lock: {Name:mk153ab5d002fd4991700e22f3eda9a43ee295f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:02:47.626339   19212 settings.go:150] Updating kubeconfig:  C:\Users\jenkins/.kube/config
	I0310 21:02:47.628927   19212 lock.go:36] WriteFile acquiring C:\Users\jenkins/.kube/config: {Name:mk815881434fcb75cb1a8bc7f2c24cf067660bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:02:47.910358   19212 kapi.go:233] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20210310205202-6496" rescaled to 1
	I0310 21:02:47.910725   19212 start.go:203] Will wait 6m0s for node up to 
	I0310 21:02:47.914314   19212 out.go:129] * Verifying Kubernetes components...
	I0310 21:02:47.914314   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210308233820-5396 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396
	I0310 21:02:47.914722   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120231122-7024 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024
	I0310 21:02:47.914722   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210309234032-4944 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944
	I0310 21:02:47.914722   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120022529-1140 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140
	I0310 21:02:47.915483   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210112045103-7160 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160
	I0310 21:02:47.915483   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107190945-8748 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748
	I0310 21:02:47.915483   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210114204234-6692 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692
	I0310 21:02:47.913263   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310191609-6496 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496
	I0310 21:02:47.915483   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115023213-8464 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464
	I0310 21:02:47.916978   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210224014800-800 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800
	I0310 21:02:47.917448   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304184021-4052 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052
	I0310 21:02:47.918628   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210303214129-4588 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588
	I0310 21:02:47.918628   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106215525-1984 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984
	I0310 21:02:47.919676   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115191024-3516 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516
	I0310 21:02:47.920625   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120214442-10992 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992
	I0310 21:02:47.920625   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120175851-7432 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432
	I0310 21:02:47.925513   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219220622-3920 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920
	I0310 21:02:47.913263   19212 addons.go:381] enableAddons start: toEnable=map[], additional=[]
	I0310 21:02:47.927513   19212 addons.go:58] Setting storage-provisioner=true in profile "default-k8s-different-port-20210310205202-6496"
	I0310 21:02:47.927513   19212 addons.go:134] Setting addon storage-provisioner=true in "default-k8s-different-port-20210310205202-6496"
	W0310 21:02:47.927513   19212 addons.go:143] addon storage-provisioner should already be in state true
	I0310 21:02:47.914722   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210105233232-2512 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512
	I0310 21:02:47.927991   19212 host.go:66] Checking if "default-k8s-different-port-20210310205202-6496" exists ...
	I0310 21:02:47.914722   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210126212539-5172 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172
	I0310 21:02:47.931951   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310083645-5040 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040
	I0310 21:02:47.914722   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210128021318-232 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232
	I0310 21:02:47.933995   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210301195830-5700 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700
	I0310 21:02:47.934240   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210225231842-5736 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736
	I0310 21:02:47.935117   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106002159-6856 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856
	I0310 21:02:47.914722   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210119220838-6552 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552
	I0310 21:02:47.936030   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210213143925-7440 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440
	I0310 21:02:47.937500   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210123004019-5372 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372
	I0310 21:02:47.914722   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219145454-9520 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520
	I0310 21:02:47.914722   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210306072141-12056 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056
	I0310 21:02:47.944425   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106011107-6492 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492
	I0310 21:02:47.945623   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210212145109-352 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352
	I0310 21:02:47.947436   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304002630-1156 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156
	I0310 21:02:47.914722   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210220004129-7452 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452
	I0310 21:02:47.948424   19212 addons.go:58] Setting default-storageclass=true in profile "default-k8s-different-port-20210310205202-6496"
	I0310 21:02:47.948424   19212 addons.go:284] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20210310205202-6496"
	I0310 21:02:47.948424   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107002220-9088 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088
	I0310 21:02:48.294795   19212 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0310 21:02:48.418851   19212 cache.go:93] acquiring lock: {Name:mk67b81c694fa10d152b7bddece57d430edf9ebf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:48.419127   19212 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 exists
	I0310 21:02:48.419577   19212 cache.go:82] cache image "minikube-local-cache-test:functional-20210308233820-5396" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210308233820-5396" took 505.2645ms
	I0310 21:02:48.419577   19212 cache.go:66] save to tar file minikube-local-cache-test:functional-20210308233820-5396 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 succeeded
	I0310 21:02:48.532174   19212 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210310205202-6496 --format={{.State.Status}}
	I0310 21:02:48.579617   19212 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210310205202-6496 --format={{.State.Status}}
	I0310 21:02:48.670446   19212 cache.go:93] acquiring lock: {Name:mkad0f7b57f74c6c730129cb06800211b2e1dbab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:48.673378   19212 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 exists
	I0310 21:02:48.680603   19212 cache.go:82] cache image "minikube-local-cache-test:functional-20210120022529-1140" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120022529-1140" took 765.8838ms
	I0310 21:02:48.680603   19212 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120022529-1140 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 succeeded
	I0310 21:02:48.784944   19212 cache.go:93] acquiring lock: {Name:mk9829358ec5b615719a34ef2b4c8c5314131bbf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:48.785880   19212 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 exists
	I0310 21:02:48.786710   19212 cache.go:82] cache image "minikube-local-cache-test:functional-20210309234032-4944" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210309234032-4944" took 871.9913ms
	I0310 21:02:48.786710   19212 cache.go:66] save to tar file minikube-local-cache-test:functional-20210309234032-4944 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 succeeded
	I0310 21:02:49.099735   19212 cache.go:93] acquiring lock: {Name:mk6a939d4adc5b1a82c643cd3a34748a52c3e47d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:49.100843   19212 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 exists
	I0310 21:02:49.100843   19212 cache.go:82] cache image "minikube-local-cache-test:functional-20210112045103-7160" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210112045103-7160" took 1.1853626s
	I0310 21:02:49.101167   19212 cache.go:66] save to tar file minikube-local-cache-test:functional-20210112045103-7160 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 succeeded
	I0310 21:02:49.177417   19212 cache.go:93] acquiring lock: {Name:mkc9a1c11079e53fedb3439203deb8305be63b2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:49.186488   19212 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 exists
	I0310 21:02:49.186942   19212 cache.go:82] cache image "minikube-local-cache-test:functional-20210303214129-4588" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210303214129-4588" took 1.2683174s
	I0310 21:02:49.186942   19212 cache.go:66] save to tar file minikube-local-cache-test:functional-20210303214129-4588 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 succeeded
	I0310 21:02:49.256315   19212 cache.go:93] acquiring lock: {Name:mk1b277a131d0149dc1f34c6a5df09591c284c3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:49.256854   19212 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 exists
	I0310 21:02:49.257875   19212 cache.go:82] cache image "minikube-local-cache-test:functional-20210128021318-232" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210128021318-232" took 1.3251555s
	I0310 21:02:49.257875   19212 cache.go:66] save to tar file minikube-local-cache-test:functional-20210128021318-232 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 succeeded
	I0310 21:02:49.273588   19212 cache.go:93] acquiring lock: {Name:mkfbc537176e4a7054a8ff78a35c4c45ad4889d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:49.273588   19212 cache.go:93] acquiring lock: {Name:mk5d79a216b121a22277fa476959e69d0268a006 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:49.273588   19212 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 exists
	I0310 21:02:49.274414   19212 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 exists
	I0310 21:02:49.274414   19212 cache.go:82] cache image "minikube-local-cache-test:functional-20210224014800-800" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210224014800-800" took 1.3574397s
	I0310 21:02:49.274414   19212 cache.go:66] save to tar file minikube-local-cache-test:functional-20210224014800-800 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 succeeded
	I0310 21:02:49.274842   19212 cache.go:82] cache image "minikube-local-cache-test:functional-20210310191609-6496" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210310191609-6496" took 1.3593621s
	I0310 21:02:49.274842   19212 cache.go:66] save to tar file minikube-local-cache-test:functional-20210310191609-6496 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 succeeded
	I0310 21:02:49.276669   19212 cache.go:93] acquiring lock: {Name:mk3b31b5d9c66e58bae5a84d594af5a71c06fef6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:49.276669   19212 cache.go:93] acquiring lock: {Name:mkbc5485bf0e792523a58cf470a7622695547966 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:49.277101   19212 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 exists
	I0310 21:02:49.277101   19212 cache.go:82] cache image "minikube-local-cache-test:functional-20210304184021-4052" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210304184021-4052" took 1.3596566s
	I0310 21:02:49.277414   19212 cache.go:66] save to tar file minikube-local-cache-test:functional-20210304184021-4052 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 succeeded
	I0310 21:02:49.277414   19212 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 exists
	I0310 21:02:49.278016   19212 cache.go:82] cache image "minikube-local-cache-test:functional-20210114204234-6692" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210114204234-6692" took 1.3625366s
	I0310 21:02:49.278016   19212 cache.go:66] save to tar file minikube-local-cache-test:functional-20210114204234-6692 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 succeeded
	I0310 21:02:49.299083   19212 cache.go:93] acquiring lock: {Name:mkd8dd26dee4471c50a16459e3e56a843fbe7183 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:49.300054   19212 cache.go:93] acquiring lock: {Name:mk413751f23d1919a2f2162501025c6af3a2ad81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:49.301053   19212 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 exists
	I0310 21:02:49.302095   19212 cache.go:82] cache image "minikube-local-cache-test:functional-20210120231122-7024" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120231122-7024" took 1.3782113s
	I0310 21:02:49.302095   19212 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120231122-7024 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 succeeded
	I0310 21:02:49.303054   19212 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 exists
	I0310 21:02:49.304044   19212 cache.go:82] cache image "minikube-local-cache-test:functional-20210106002159-6856" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106002159-6856" took 1.3689311s
	I0310 21:02:49.304044   19212 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106002159-6856 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 succeeded
	I0310 21:02:49.310874   19212 cache.go:93] acquiring lock: {Name:mk3f9eb5a6922e3da2b5e642fe1460b5c7a33453 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:49.311059   19212 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 exists
	I0310 21:02:49.311618   19212 cache.go:82] cache image "minikube-local-cache-test:functional-20210107190945-8748" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210107190945-8748" took 1.396139s
	I0310 21:02:49.311618   19212 cache.go:66] save to tar file minikube-local-cache-test:functional-20210107190945-8748 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 succeeded
	I0310 21:02:49.366439   19212 cache.go:93] acquiring lock: {Name:mkfe8ccab311cf6d2666a7508a8e979857b9770b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:49.366439   19212 cache.go:93] acquiring lock: {Name:mk0c64ba734a0cdbeae55b08bb0b1b6723a680c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:49.367434   19212 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 exists
	I0310 21:02:49.367434   19212 cache.go:82] cache image "minikube-local-cache-test:functional-20210219145454-9520" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210219145454-9520" took 1.4261803s
	I0310 21:02:49.367434   19212 cache.go:66] save to tar file minikube-local-cache-test:functional-20210219145454-9520 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 succeeded
	I0310 21:02:49.367434   19212 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 exists
	I0310 21:02:49.368658   19212 cache.go:82] cache image "minikube-local-cache-test:functional-20210310083645-5040" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210310083645-5040" took 1.436711s
	I0310 21:02:49.368658   19212 cache.go:66] save to tar file minikube-local-cache-test:functional-20210310083645-5040 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 succeeded
	I0310 21:02:49.392457   19212 cache.go:93] acquiring lock: {Name:mk74beba772a17b6c0792b37e1f3c84b8ae19a48 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:49.393493   19212 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 exists
	I0310 21:02:49.393493   19212 cache.go:82] cache image "minikube-local-cache-test:functional-20210119220838-6552" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210119220838-6552" took 1.4579756s
	I0310 21:02:49.393493   19212 cache.go:66] save to tar file minikube-local-cache-test:functional-20210119220838-6552 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 succeeded
	I0310 21:02:49.396760   19212 cache.go:93] acquiring lock: {Name:mk84b2a6095b735cf889c519b5874f080b2e195a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:49.397477   19212 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 exists
	I0310 21:02:49.397902   19212 cache.go:82] cache image "minikube-local-cache-test:functional-20210219220622-3920" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210219220622-3920" took 1.4721344s
	I0310 21:02:49.397902   19212 cache.go:66] save to tar file minikube-local-cache-test:functional-20210219220622-3920 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 succeeded
	I0310 21:02:49.416791   19212 cache.go:93] acquiring lock: {Name:mk30e0addf8d941e729fce2e9e6e58f4831fa9bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:49.417471   19212 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 exists
	I0310 21:02:49.417706   19212 cache.go:82] cache image "minikube-local-cache-test:functional-20210115023213-8464" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210115023213-8464" took 1.5022273s
	I0310 21:02:49.418064   19212 cache.go:66] save to tar file minikube-local-cache-test:functional-20210115023213-8464 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 succeeded
	I0310 21:02:49.468045   19212 cache.go:93] acquiring lock: {Name:mk17b3617b8bc7c68f0fe3347037485ee44000e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:49.468594   19212 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 exists
	I0310 21:02:49.469291   19212 cache.go:82] cache image "minikube-local-cache-test:functional-20210225231842-5736" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210225231842-5736" took 1.5350555s
	I0310 21:02:49.469291   19212 cache.go:66] save to tar file minikube-local-cache-test:functional-20210225231842-5736 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 succeeded
	I0310 21:02:49.474900   19212 cache.go:93] acquiring lock: {Name:mk5de4935501776b790bd29801e913c817cce9cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:49.475515   19212 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 exists
	I0310 21:02:49.475515   19212 cache.go:82] cache image "minikube-local-cache-test:functional-20210123004019-5372" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210123004019-5372" took 1.5380189s
	I0310 21:02:49.475829   19212 cache.go:66] save to tar file minikube-local-cache-test:functional-20210123004019-5372 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 succeeded
	I0310 21:02:49.486471   19212 cache.go:93] acquiring lock: {Name:mk6cdb668632330066d74bea74662e26e6c7633f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:49.487211   19212 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 exists
	I0310 21:02:49.488711   19212 cache.go:82] cache image "minikube-local-cache-test:functional-20210106215525-1984" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106215525-1984" took 1.5700874s
	I0310 21:02:49.488711   19212 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106215525-1984 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 succeeded
	I0310 21:02:49.490094   19212 cache.go:93] acquiring lock: {Name:mk634154e9c95d6e5b156154f097cbabdedf9f3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:49.490800   19212 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 exists
	I0310 21:02:49.491030   19212 cache.go:93] acquiring lock: {Name:mkf96894dc732adcd1c856f98a56d65b2646f03e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:49.491590   19212 cache.go:82] cache image "minikube-local-cache-test:functional-20210301195830-5700" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210301195830-5700" took 1.5573546s
	I0310 21:02:49.491900   19212 cache.go:66] save to tar file minikube-local-cache-test:functional-20210301195830-5700 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 succeeded
	I0310 21:02:49.491590   19212 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 exists
	I0310 21:02:49.492222   19212 cache.go:93] acquiring lock: {Name:mkb0cb73f942a657cd3f168830d30cb3598567a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:49.492488   19212 cache.go:82] cache image "minikube-local-cache-test:functional-20210115191024-3516" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210115191024-3516" took 1.572817s
	I0310 21:02:49.492488   19212 cache.go:66] save to tar file minikube-local-cache-test:functional-20210115191024-3516 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 succeeded
	I0310 21:02:49.492800   19212 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 exists
	I0310 21:02:49.493306   19212 cache.go:82] cache image "minikube-local-cache-test:functional-20210306072141-12056" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210306072141-12056" took 1.5488864s
	I0310 21:02:49.493306   19212 cache.go:66] save to tar file minikube-local-cache-test:functional-20210306072141-12056 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 succeeded
	I0310 21:02:49.510762   19212 cache.go:93] acquiring lock: {Name:mkf6f90f079186654799fde8101b48612aa6f339 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:49.511047   19212 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 exists
	I0310 21:02:49.511394   19212 cache.go:82] cache image "minikube-local-cache-test:functional-20210212145109-352" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210212145109-352" took 1.5657748s
	I0310 21:02:49.511785   19212 cache.go:66] save to tar file minikube-local-cache-test:functional-20210212145109-352 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 succeeded
	I0310 21:02:49.523347   19212 cache.go:93] acquiring lock: {Name:mkf74fc1bdd437dc31195924ffc024252ed6282c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:49.524403   19212 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 exists
	I0310 21:02:49.525034   19212 cache.go:82] cache image "minikube-local-cache-test:functional-20210304002630-1156" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210304002630-1156" took 1.577602s
	I0310 21:02:49.525034   19212 cache.go:66] save to tar file minikube-local-cache-test:functional-20210304002630-1156 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 succeeded
	I0310 21:02:49.533352   19212 cache.go:93] acquiring lock: {Name:mk5795abf13cc8b7192a417aee0e32dee2b0467c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:49.534801   19212 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 exists
	I0310 21:02:49.535028   19212 cache.go:82] cache image "minikube-local-cache-test:functional-20210126212539-5172" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210126212539-5172" took 1.6034866s
	I0310 21:02:49.535028   19212 cache.go:66] save to tar file minikube-local-cache-test:functional-20210126212539-5172 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 succeeded
	I0310 21:02:49.564402   19212 cache.go:93] acquiring lock: {Name:mkd8c6f272dd5cb91af2d272705820baa75c5410 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:49.564402   19212 cache.go:93] acquiring lock: {Name:mk5aaf725ee95074b60d5acdb56999da11d0d967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:49.564702   19212 cache.go:93] acquiring lock: {Name:mka2d29141752ca0c15ce625b99d3e259a454634 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:49.564702   19212 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 exists
	I0310 21:02:49.564702   19212 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 exists
	I0310 21:02:49.565530   19212 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 exists
	I0310 21:02:49.564702   19212 cache.go:82] cache image "minikube-local-cache-test:functional-20210213143925-7440" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210213143925-7440" took 1.6281869s
	I0310 21:02:49.565530   19212 cache.go:66] save to tar file minikube-local-cache-test:functional-20210213143925-7440 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 succeeded
	I0310 21:02:49.566023   19212 cache.go:82] cache image "minikube-local-cache-test:functional-20210105233232-2512" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210105233232-2512" took 1.6382328s
	I0310 21:02:49.566402   19212 cache.go:66] save to tar file minikube-local-cache-test:functional-20210105233232-2512 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 succeeded
	I0310 21:02:49.566023   19212 cache.go:82] cache image "minikube-local-cache-test:functional-20210120214442-10992" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120214442-10992" took 1.6454029s
	I0310 21:02:49.567217   19212 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120214442-10992 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 succeeded
	I0310 21:02:49.566023   19212 cache.go:93] acquiring lock: {Name:mk6e311fb193a5d30b249afa7255673dd7fc56b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:49.567937   19212 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 exists
	I0310 21:02:49.568065   19212 cache.go:82] cache image "minikube-local-cache-test:functional-20210107002220-9088" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210107002220-9088" took 1.6189221s
	I0310 21:02:49.568065   19212 cache.go:66] save to tar file minikube-local-cache-test:functional-20210107002220-9088 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 succeeded
	I0310 21:02:49.580066   19212 cache.go:93] acquiring lock: {Name:mkb552f0ca2d9ea9965feba56885295e4020632a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:49.580066   19212 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 exists
	I0310 21:02:49.580470   19212 cache.go:82] cache image "minikube-local-cache-test:functional-20210106011107-6492" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106011107-6492" took 1.6360504s
	I0310 21:02:49.581460   19212 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106011107-6492 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 succeeded
	I0310 21:02:49.588349   19212 cache.go:93] acquiring lock: {Name:mkcc9db267470950a8bd1fd66660e4d7ce7fb11a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:49.588665   19212 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 exists
	I0310 21:02:49.588665   19212 cache.go:82] cache image "minikube-local-cache-test:functional-20210120175851-7432" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120175851-7432" took 1.668045s
	I0310 21:02:49.588665   19212 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120175851-7432 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 succeeded
	I0310 21:02:49.592693   19212 cache.go:93] acquiring lock: {Name:mkab31196e3bf71b9c1e6a1e38e57ec6fb030bbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:49.593022   19212 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 exists
	I0310 21:02:49.593243   19212 cache.go:82] cache image "minikube-local-cache-test:functional-20210220004129-7452" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210220004129-7452" took 1.645101s
	I0310 21:02:49.593243   19212 cache.go:66] save to tar file minikube-local-cache-test:functional-20210220004129-7452 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 succeeded
	I0310 21:02:49.593243   19212 cache.go:73] Successfully saved all images to host disk.
	I0310 21:02:49.630514   19212 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210310205202-6496 --format={{.State.Status}}
	I0310 21:02:50.077906   19212 cli_runner.go:168] Completed: docker container inspect default-k8s-different-port-20210310205202-6496 --format={{.State.Status}}: (1.5456176s)
	I0310 21:02:50.162702   19212 cli_runner.go:168] Completed: docker container inspect default-k8s-different-port-20210310205202-6496 --format={{.State.Status}}: (1.5830898s)
	I0310 21:02:50.166659   19212 out.go:129]   - Using image gcr.io/k8s-minikube/storage-provisioner:v4
	I0310 21:02:50.166659   19212 addons.go:253] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0310 21:02:50.167176   19212 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0310 21:02:50.174460   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:02:50.382765   19212 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 21:02:50.393195   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:02:50.807099   19212 ssh_runner.go:189] Completed: sudo systemctl is-active --quiet service kubelet: (2.5120403s)
	I0310 21:02:50.822765   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:02:50.915138   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:02:51.079594   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:02:51.480056   19212 pod_ready.go:36] extra waiting for kube-system core pods [kube-dns etcd kube-apiserver kube-controller-manager kube-proxy kube-scheduler] to be Ready ...
	I0310 21:02:51.480056   19212 pod_ready.go:59] waiting 6m0s for pod with "kube-dns" label in "kube-system" namespace to be Ready ...
	I0310 21:02:52.005828   19212 addons.go:134] Setting addon default-storageclass=true in "default-k8s-different-port-20210310205202-6496"
	W0310 21:02:52.005828   19212 addons.go:143] addon default-storageclass should already be in state true
	I0310 21:02:52.006204   19212 host.go:66] Checking if "default-k8s-different-port-20210310205202-6496" exists ...
	I0310 21:02:52.025841   19212 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20210310205202-6496 --format={{.State.Status}}
	I0310 21:02:52.487841   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:02:52.700058   19212 addons.go:253] installing /etc/kubernetes/addons/storageclass.yaml
	I0310 21:02:52.700058   19212 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0310 21:02:52.707727   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:02:53.323983   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:02:57.257235   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:02:58.934374   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:01.388017   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:02.809011   19212 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0310 21:03:03.106912   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:04.147466   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:05.496227   19212 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0310 21:03:05.580523   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:07.059640   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:08.724224   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:10.289153   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:11.299015   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:12.659760   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:13.281607   19212 ssh_runner.go:189] Completed: docker images --format {{.Repository}}:{{.Tag}}: (22.8986106s)
	I0310 21:03:13.281607   19212 docker.go:423] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.20.2
	k8s.gcr.io/kube-controller-manager:v1.20.2
	k8s.gcr.io/kube-apiserver:v1.20.2
	k8s.gcr.io/kube-scheduler:v1.20.2
	kubernetesui/dashboard:v2.1.0
	gcr.io/k8s-minikube/storage-provisioner:v4
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0310 21:03:13.281607   19212 docker.go:429] minikube-local-cache-test:functional-20210119220838-6552 wasn't preloaded
	I0310 21:03:13.281607   19212 cache_images.go:76] LoadImages start: [minikube-local-cache-test:functional-20210119220838-6552 minikube-local-cache-test:functional-20210126212539-5172 minikube-local-cache-test:functional-20210128021318-232 minikube-local-cache-test:functional-20210219145454-9520 minikube-local-cache-test:functional-20210220004129-7452 minikube-local-cache-test:functional-20210306072141-12056 minikube-local-cache-test:functional-20210309234032-4944 minikube-local-cache-test:functional-20210105233232-2512 minikube-local-cache-test:functional-20210107190945-8748 minikube-local-cache-test:functional-20210112045103-7160 minikube-local-cache-test:functional-20210114204234-6692 minikube-local-cache-test:functional-20210120022529-1140 minikube-local-cache-test:functional-20210225231842-5736 minikube-local-cache-test:functional-20210120175851-7432 minikube-local-cache-test:functional-20210219220622-3920 minikube-local-cache-test:functional-20210213143925-7440 minikube-local-cache-test:functional-20210308233820-5396 minikube-local-cache-test:functional-20210310083645-5040 minikube-local-cache-test:functional-20210106002159-6856 minikube-local-cache-test:functional-20210106011107-6492 minikube-local-cache-test:functional-20210224014800-800 minikube-local-cache-test:functional-20210301195830-5700 minikube-local-cache-test:functional-20210303214129-4588 minikube-local-cache-test:functional-20210106215525-1984 minikube-local-cache-test:functional-20210120231122-7024 minikube-local-cache-test:functional-20210123004019-5372 minikube-local-cache-test:functional-20210304002630-1156 minikube-local-cache-test:functional-20210107002220-9088 minikube-local-cache-test:functional-20210115023213-8464 minikube-local-cache-test:functional-20210115191024-3516 minikube-local-cache-test:functional-20210304184021-4052 minikube-local-cache-test:functional-20210120214442-10992 minikube-local-cache-test:functional-20210212145109-352 minikube-local-cache-test:functional-20210310191609-6496]
	I0310 21:03:13.358887   19212 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210308233820-5396
	I0310 21:03:13.386426   19212 image.go:168] retrieving image: minikube-local-cache-test:functional-20210107002220-9088
	I0310 21:03:13.403518   19212 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120231122-7024
	I0310 21:03:13.406099   19212 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210119220838-6552
	I0310 21:03:13.417912   19212 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210219145454-9520
	I0310 21:03:13.452455   19212 image.go:168] retrieving image: minikube-local-cache-test:functional-20210107190945-8748
	I0310 21:03:13.472990   19212 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210304002630-1156
	I0310 21:03:13.476668   19212 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210310083645-5040
	I0310 21:03:13.503613   19212 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210107190945-8748: Error response from daemon: reference does not exist
	I0310 21:03:13.520723   19212 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210301195830-5700
	I0310 21:03:13.523734   19212 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210115023213-8464
	I0310 21:03:13.545895   19212 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210213143925-7440
	I0310 21:03:13.558021   19212 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120214442-10992
	I0310 21:03:13.574533   19212 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120022529-1140
	I0310 21:03:13.587671   19212 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210123004019-5372
	I0310 21:03:13.589053   19212 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210115191024-3516
	I0310 21:03:13.624764   19212 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120175851-7432
	I0310 21:03:13.644475   19212 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210310191609-6496
	I0310 21:03:13.666299   19212 image.go:168] retrieving image: minikube-local-cache-test:functional-20210112045103-7160
	I0310 21:03:13.696541   19212 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210219220622-3920
	I0310 21:03:13.700384   19212 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210309234032-4944
	W0310 21:03:13.707497   19212 image.go:185] authn lookup for minikube-local-cache-test:functional-20210107190945-8748 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 21:03:13.755562   19212 image.go:168] retrieving image: minikube-local-cache-test:functional-20210106011107-6492
	W0310 21:03:13.756234   19212 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:03:13.756460   19212 retry.go:31] will retry after 276.165072ms: ssh: rejected: connect failed (open failed)
	W0310 21:03:13.756460   19212 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:03:13.756692   19212 retry.go:31] will retry after 360.127272ms: ssh: rejected: connect failed (open failed)
	W0310 21:03:13.756692   19212 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:03:13.756692   19212 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:03:13.756692   19212 retry.go:31] will retry after 234.428547ms: ssh: rejected: connect failed (open failed)
	I0310 21:03:13.756692   19212 retry.go:31] will retry after 291.140013ms: ssh: rejected: connect failed (open failed)
	W0310 21:03:13.756692   19212 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:03:13.756692   19212 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:03:13.756908   19212 retry.go:31] will retry after 296.705768ms: ssh: rejected: connect failed (open failed)
	I0310 21:03:13.756692   19212 retry.go:31] will retry after 231.159374ms: ssh: rejected: connect failed (open failed)
	W0310 21:03:13.756692   19212 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:03:13.756908   19212 retry.go:31] will retry after 141.409254ms: ssh: rejected: connect failed (open failed)
	I0310 21:03:13.766058   19212 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210220004129-7452
	I0310 21:03:13.779033   19212 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210107002220-9088: Error response from daemon: reference does not exist
	I0310 21:03:13.792618   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:13.799247   19212 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210114204234-6692
	I0310 21:03:13.805728   19212 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210106011107-6492: Error response from daemon: reference does not exist
	I0310 21:03:13.818543   19212 image.go:168] retrieving image: minikube-local-cache-test:functional-20210106215525-1984
	I0310 21:03:13.819818   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:13.826574   19212 image.go:168] retrieving image: minikube-local-cache-test:functional-20210105233232-2512
	I0310 21:03:13.873844   19212 image.go:168] retrieving image: minikube-local-cache-test:functional-20210106002159-6856
	I0310 21:03:13.880698   19212 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210105233232-2512: Error response from daemon: reference does not exist
	I0310 21:03:13.904785   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:13.915050   19212 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210106215525-1984: Error response from daemon: reference does not exist
	I0310 21:03:13.968748   19212 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210212145109-352
	I0310 21:03:13.983668   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:13.994658   19212 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210306072141-12056
	I0310 21:03:13.999376   19212 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210303214129-4588
	I0310 21:03:14.014665   19212 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210107190945-8748 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210107190945-8748: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 21:03:14.014665   19212 cache_images.go:104] "minikube-local-cache-test:functional-20210107190945-8748" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210107190945-8748
	I0310 21:03:14.014665   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107190945-8748 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748
	I0310 21:03:14.014665   19212 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748
	I0310 21:03:14.018905   19212 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210106002159-6856: Error response from daemon: reference does not exist
	I0310 21:03:14.030282   19212 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210304184021-4052
	I0310 21:03:14.030282   19212 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210126212539-5172
	I0310 21:03:14.030786   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:14.036880   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:14.045055   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:14.045055   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:14.082049   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:14.096038   19212 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210224014800-800
	I0310 21:03:14.098771   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:14.099054   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:14.099459   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:14.110139   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:14.114602   19212 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748
	I0310 21:03:14.138438   19212 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210225231842-5736
	I0310 21:03:14.140120   19212 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210128021318-232
	I0310 21:03:14.148365   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:14.160968   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:14.164606   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:14.164606   19212 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210112045103-7160: Error response from daemon: reference does not exist
	I0310 21:03:14.195691   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:14.199652   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	W0310 21:03:14.206012   19212 image.go:185] authn lookup for minikube-local-cache-test:functional-20210107002220-9088 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0310 21:03:14.215268   19212 image.go:185] authn lookup for minikube-local-cache-test:functional-20210106011107-6492 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 21:03:14.341529   19212 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210107002220-9088 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210107002220-9088: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 21:03:14.341529   19212 cache_images.go:104] "minikube-local-cache-test:functional-20210107002220-9088" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210107002220-9088
	I0310 21:03:14.341529   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107002220-9088 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088
	I0310 21:03:14.341529   19212 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088
	I0310 21:03:14.361026   19212 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210106011107-6492 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210106011107-6492: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 21:03:14.361026   19212 cache_images.go:104] "minikube-local-cache-test:functional-20210106011107-6492" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210106011107-6492
	I0310 21:03:14.361026   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106011107-6492 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492
	I0310 21:03:14.361026   19212 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492
	I0310 21:03:14.364053   19212 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088
	W0310 21:03:14.365297   19212 image.go:185] authn lookup for minikube-local-cache-test:functional-20210105233232-2512 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 21:03:14.394023   19212 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492
	I0310 21:03:14.396027   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:14.421026   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	W0310 21:03:14.437908   19212 image.go:185] authn lookup for minikube-local-cache-test:functional-20210106215525-1984 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 21:03:14.476543   19212 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210105233232-2512 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210105233232-2512: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 21:03:14.476543   19212 cache_images.go:104] "minikube-local-cache-test:functional-20210105233232-2512" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210105233232-2512
	I0310 21:03:14.476543   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210105233232-2512 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512
	I0310 21:03:14.476543   19212 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512
	I0310 21:03:14.515496   19212 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512
	I0310 21:03:14.545098   19212 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210106215525-1984 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210106215525-1984: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 21:03:14.545098   19212 cache_images.go:104] "minikube-local-cache-test:functional-20210106215525-1984" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210106215525-1984
	I0310 21:03:14.545098   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106215525-1984 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984
	I0310 21:03:14.545098   19212 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984
	W0310 21:03:14.591918   19212 image.go:185] authn lookup for minikube-local-cache-test:functional-20210106002159-6856 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 21:03:14.630237   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:14.702717   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	W0310 21:03:14.726466   19212 image.go:185] authn lookup for minikube-local-cache-test:functional-20210112045103-7160 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 21:03:14.740090   19212 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210106002159-6856 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210106002159-6856: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 21:03:14.740527   19212 cache_images.go:104] "minikube-local-cache-test:functional-20210106002159-6856" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210106002159-6856
	I0310 21:03:14.740527   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106002159-6856 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856
	I0310 21:03:14.740527   19212 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856
	I0310 21:03:14.831895   19212 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210112045103-7160 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210112045103-7160: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 21:03:14.832108   19212 cache_images.go:104] "minikube-local-cache-test:functional-20210112045103-7160" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210112045103-7160
	I0310 21:03:14.832108   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210112045103-7160 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160
	I0310 21:03:14.832108   19212 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160
	I0310 21:03:14.867736   19212 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984
	I0310 21:03:14.992975   19212 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856
	I0310 21:03:15.057713   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:15.076484   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:15.096095   19212 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160
	I0310 21:03:15.127002   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:15.812898   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:15.827110   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.0066197s)
	I0310 21:03:15.827110   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:15.881181   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.088569s)
	I0310 21:03:15.881181   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:15.962476   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (1.9315241s)
	I0310 21:03:15.962476   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:16.036486   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (1.9904334s)
	I0310 21:03:16.036946   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:16.070465   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.0244117s)
	I0310 21:03:16.070877   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:16.143681   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (1.9470366s)
	I0310 21:03:16.143681   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:16.144222   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.0610175s)
	I0310 21:03:16.144222   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:16.162800   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.1259256s)
	I0310 21:03:16.162800   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:16.237788   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.0894287s)
	I0310 21:03:16.238484   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:16.265881   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.1047585s)
	I0310 21:03:16.270571   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:16.315583   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.2156665s)
	I0310 21:03:16.315984   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:16.323030   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.1584307s)
	I0310 21:03:16.323030   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:16.376194   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.3921979s)
	I0310 21:03:16.376457   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:16.377720   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.277804s)
	I0310 21:03:16.377720   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:16.394746   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.1947431s)
	I0310 21:03:16.395705   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:16.435139   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.5303611s)
	I0310 21:03:16.435529   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:16.455743   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.0347224s)
	I0310 21:03:16.456255   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:16.496295   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.100274s)
	I0310 21:03:16.496944   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:16.512684   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (1.809972s)
	I0310 21:03:16.515764   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:16.535115   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.4249833s)
	I0310 21:03:16.535557   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:16.539471   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (1.4817623s)
	I0310 21:03:16.539724   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:16.556785   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.4573324s)
	I0310 21:03:16.557100   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:16.576889   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (1.4900844s)
	I0310 21:03:16.578498   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:16.684676   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (1.5576789s)
	I0310 21:03:16.685336   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:17.112785   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	W0310 21:03:18.235363   19212 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:03:18.235716   19212 retry.go:31] will retry after 164.129813ms: ssh: handshake failed: EOF
	I0310 21:03:18.582529   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:20.376935   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:21.612765   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	W0310 21:03:22.523669   19212 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:03:22.524002   19212 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:03:22.524002   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984 (4096 bytes)
	I0310 21:03:22.532074   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	W0310 21:03:22.539432   19212 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:03:22.539432   19212 cache_images.go:104] "minikube-local-cache-test:functional-20210120022529-1140" needs transfer: "minikube-local-cache-test:functional-20210120022529-1140" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:03:22.540248   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120022529-1140 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140
	I0310 21:03:22.540248   19212 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140
	I0310 21:03:22.555669   19212 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140
	I0310 21:03:22.563224   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:23.157160   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:23.186305   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:23.274369   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:23.283191   19212 cache_images.go:104] "minikube-local-cache-test:functional-20210308233820-5396" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:23.283191   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210308233820-5396 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396
	I0310 21:03:23.283191   19212 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396
	I0310 21:03:23.284519   19212 cache_images.go:104] "minikube-local-cache-test:functional-20210213143925-7440" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:23.284519   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210213143925-7440 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440
	I0310 21:03:23.284519   19212 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440
	I0310 21:03:23.286512   19212 cache_images.go:104] "minikube-local-cache-test:functional-20210115191024-3516" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:23.287035   19212 cache_images.go:104] "minikube-local-cache-test:functional-20210304184021-4052" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:23.287309   19212 cache_images.go:104] "minikube-local-cache-test:functional-20210212145109-352" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:23.287309   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210212145109-352 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352
	I0310 21:03:23.287309   19212 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352
	I0310 21:03:23.287541   19212 cache_images.go:104] "minikube-local-cache-test:functional-20210123004019-5372" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:23.287541   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210123004019-5372 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372
	I0310 21:03:23.287541   19212 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372
	I0310 21:03:23.287541   19212 cache_images.go:104] "minikube-local-cache-test:functional-20210304002630-1156" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:23.287810   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304002630-1156 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156
	I0310 21:03:23.287810   19212 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156
	I0310 21:03:23.286750   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115191024-3516 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516
	I0310 21:03:23.287541   19212 cache_images.go:104] "minikube-local-cache-test:functional-20210120231122-7024" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:23.289252   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120231122-7024 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024
	I0310 21:03:23.289252   19212 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024
	I0310 21:03:23.287309   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304184021-4052 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052
	I0310 21:03:23.289469   19212 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052
	I0310 21:03:23.284519   19212 cache_images.go:104] "minikube-local-cache-test:functional-20210119220838-6552" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:23.284519   19212 cache_images.go:104] "minikube-local-cache-test:functional-20210303214129-4588" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:23.284519   19212 cache_images.go:104] "minikube-local-cache-test:functional-20210219220622-3920" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:23.290661   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219220622-3920 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920
	I0310 21:03:23.290661   19212 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920
	I0310 21:03:23.290811   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210119220838-6552 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552
	I0310 21:03:23.291008   19212 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552
	I0310 21:03:23.284519   19212 cache_images.go:104] "minikube-local-cache-test:functional-20210128021318-232" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:23.284519   19212 cache_images.go:104] "minikube-local-cache-test:functional-20210225231842-5736" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:23.291552   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210225231842-5736 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736
	I0310 21:03:23.291552   19212 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736
	I0310 21:03:23.292119   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210128021318-232 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232
	I0310 21:03:23.292119   19212 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232
	I0310 21:03:23.284519   19212 cache_images.go:104] "minikube-local-cache-test:functional-20210310083645-5040" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:23.284519   19212 cache_images.go:104] "minikube-local-cache-test:functional-20210219145454-9520" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:23.293475   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310083645-5040 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040
	I0310 21:03:23.293475   19212 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040
	I0310 21:03:23.284519   19212 cache_images.go:104] "minikube-local-cache-test:functional-20210120175851-7432" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:23.293774   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120175851-7432 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432
	I0310 21:03:23.284519   19212 cache_images.go:104] "minikube-local-cache-test:functional-20210310191609-6496" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:23.284519   19212 cache_images.go:104] "minikube-local-cache-test:functional-20210301195830-5700" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:23.284519   19212 cache_images.go:104] "minikube-local-cache-test:functional-20210224014800-800" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:23.284519   19212 cache_images.go:104] "minikube-local-cache-test:functional-20210114204234-6692" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:23.284519   19212 cache_images.go:104] "minikube-local-cache-test:functional-20210126212539-5172" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:23.284519   19212 cache_images.go:104] "minikube-local-cache-test:functional-20210309234032-4944" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:23.284519   19212 cache_images.go:104] "minikube-local-cache-test:functional-20210306072141-12056" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:23.287309   19212 cache_images.go:104] "minikube-local-cache-test:functional-20210115023213-8464" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:23.287541   19212 cache_images.go:104] "minikube-local-cache-test:functional-20210120214442-10992" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:23.284519   19212 cache_images.go:104] "minikube-local-cache-test:functional-20210220004129-7452" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:03:23.288534   19212 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516
	I0310 21:03:23.290498   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210303214129-4588 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588
	I0310 21:03:23.293475   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219145454-9520 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520
	I0310 21:03:23.293992   19212 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520
	I0310 21:03:23.293992   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210309234032-4944 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944
	I0310 21:03:23.293992   19212 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944
	I0310 21:03:23.293992   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120214442-10992 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992
	I0310 21:03:23.293992   19212 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992
	I0310 21:03:23.293992   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210306072141-12056 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056
	I0310 21:03:23.293992   19212 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056
	I0310 21:03:23.293992   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115023213-8464 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464
	I0310 21:03:23.294819   19212 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464
	I0310 21:03:23.294819   19212 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588
	I0310 21:03:23.295717   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210301195830-5700 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700
	I0310 21:03:23.295717   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210114204234-6692 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692
	I0310 21:03:23.295717   19212 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432
	I0310 21:03:23.295717   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210126212539-5172 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172
	I0310 21:03:23.295717   19212 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172
	I0310 21:03:23.296067   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210220004129-7452 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452
	I0310 21:03:23.295717   19212 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700
	I0310 21:03:23.296067   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310191609-6496 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496
	I0310 21:03:23.296638   19212 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496
	I0310 21:03:23.295717   19212 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692
	I0310 21:03:23.295717   19212 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210224014800-800 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800
	I0310 21:03:23.297323   19212 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800
	I0310 21:03:23.296409   19212 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452
	I0310 21:03:23.365194   19212 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396
	I0310 21:03:23.396887   19212 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440
	W0310 21:03:23.427255   19212 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:03:23.427255   19212 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:03:23.427629   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160 (4096 bytes)
	W0310 21:03:23.444639   19212 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:03:23.539856   19212 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352
	I0310 21:03:23.589671   19212 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156
	I0310 21:03:23.638323   19212 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920
	I0310 21:03:23.639855   19212 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372
	I0310 21:03:23.643990   19212 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024
	I0310 21:03:23.776759   19212 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992
	I0310 21:03:23.777042   19212 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056
	I0310 21:03:23.779693   19212 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052
	I0310 21:03:23.779693   19212 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040
	I0310 21:03:23.780654   19212 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464
	I0310 21:03:23.784713   19212 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520
	I0310 21:03:23.790166   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:23.806780   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:23.834166   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:23.836628   19212 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736
	I0310 21:03:23.871118   19212 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432
	I0310 21:03:23.876490   19212 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172
	I0310 21:03:23.895248   19212 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496
	I0310 21:03:23.899057   19212 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452
	I0310 21:03:23.916964   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:23.924974   19212 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692
	I0310 21:03:23.934038   19212 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800
	I0310 21:03:23.936005   19212 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588
	I0310 21:03:23.957385   19212 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700
	I0310 21:03:23.963313   19212 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232
	I0310 21:03:23.964244   19212 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516
	I0310 21:03:23.964244   19212 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552
	I0310 21:03:23.964244   19212 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944
	I0310 21:03:23.966043   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:23.982944   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:23.983162   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:23.991067   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:24.048291   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:24.083886   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:24.085708   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:24.086443   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:24.087957   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:24.100870   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:24.103929   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:24.141303   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:24.141303   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:24.149298   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:24.177744   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:24.233407   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:24.259965   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:24.260436   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:24.278972   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:24.283964   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:24.301718   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:24.303718   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:24.311184   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:03:25.589539   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:25.827837   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.0376765s)
	I0310 21:03:25.827837   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:25.853312   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.0465382s)
	I0310 21:03:25.853649   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:26.052260   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.2181002s)
	I0310 21:03:26.052817   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:26.167974   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.083783s)
	I0310 21:03:26.168580   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:26.188845   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.2054277s)
	I0310 21:03:26.188845   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:26.271573   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.1302762s)
	I0310 21:03:26.271818   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:26.284523   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.3675655s)
	I0310 21:03:26.284833   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:26.322121   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.3557363s)
	I0310 21:03:26.322323   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:26.323141   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.2351903s)
	I0310 21:03:26.323409   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:26.412385   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.1787217s)
	I0310 21:03:26.412385   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:26.414780   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.1548215s)
	I0310 21:03:26.414913   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:26.494583   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.2156166s)
	I0310 21:03:26.494583   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.1928713s)
	I0310 21:03:26.495141   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:26.495493   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:26.494583   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.1830805s)
	I0310 21:03:26.496722   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:26.523459   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.437757s)
	I0310 21:03:26.523967   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:26.554337   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.5632767s)
	I0310 21:03:26.554693   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:26.558078   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.38034s)
	I0310 21:03:26.558327   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:26.585248   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.281536s)
	I0310 21:03:26.586294   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:26.600892   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.4595953s)
	I0310 21:03:26.601261   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.4516005s)
	I0310 21:03:26.601648   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:26.601898   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:26.649344   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.6010597s)
	I0310 21:03:26.650149   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:26.653968   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.550045s)
	I0310 21:03:26.653968   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:26.654326   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.5674969s)
	I0310 21:03:26.656536   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:26.703456   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.4430267s)
	I0310 21:03:26.703456   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:26.712924   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.7299874s)
	I0310 21:03:26.713207   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:26.729549   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.4455915s)
	I0310 21:03:26.730146   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:26.745350   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (2.644487s)
	I0310 21:03:26.746090   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:03:26.789790   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:28.208057   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:29.453108   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:31.523399   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:32.576629   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:33.683104   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:34.697126   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:36.129079   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:37.496472   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:38.705333   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:40.067890   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:41.254123   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:42.775257   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	W0310 21:03:43.781491   19212 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:03:43.782484   19212 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:03:43.782728   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232 (4096 bytes)
	I0310 21:03:44.323939   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:45.444282   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:46.917356   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	W0310 21:03:47.922614   19212 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:03:47.923055   19212 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:03:47.923055   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452 (4096 bytes)
	I0310 21:03:48.190647   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	W0310 21:03:48.971334   19212 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:03:48.972399   19212 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:03:48.972535   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700 (4096 bytes)
	I0310 21:03:49.275541   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:50.603252   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:51.768296   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	W0310 21:03:52.068495   19212 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:03:52.072175   19212 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:03:52.072582   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432 (4096 bytes)
	I0310 21:03:53.640175   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:54.647698   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000906550}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	W0310 21:03:55.404127   19212 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:03:55.404259   19212 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:03:55.404516   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040 (4096 bytes)
	I0310 21:03:55.681840   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001611fe0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	W0310 21:03:56.281460   19212 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:03:56.281460   19212 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800: NewSession: new client: new client: ssh: handshake failed: EOF
	W0310 21:03:56.281460   19212 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:03:56.282092   19212 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:03:56.282484   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372 (4096 bytes)
	I0310 21:03:56.284212   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800 (4096 bytes)
	I0310 21:03:57.083223   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00183e6a0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:58.559712   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00187b2c0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:03:59.701190   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0019148f0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:00.809789   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000ed6930}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:02.030637   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001840980}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:03.285752   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0017651a0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:04.389637   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001032450}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:05.527813   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000dddb00}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:06.716121   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00100c3d0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:08.083385   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0014cfd70}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:09.092704   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0014c2ba0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:10.596664   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001a830e0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:12.208874   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001db8610}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:13.548353   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001b77450}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:15.166358   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0018dd900}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:16.604968   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000fbde50}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:17.623256   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00108ab60}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:19.069648   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00138c570}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:20.284873   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0011ec080}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:21.694036   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0013cfdc0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:23.026708   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001efa0a0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:25.379034   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001bb6bf0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:26.580365   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001ca8df0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:28.113640   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0018dc020}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:29.273024   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001d36390}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:30.672582   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000c88360}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:31.771661   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0006d24e0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:33.076278   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001468790}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:34.092114   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0014c2f80}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:35.306096   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001d95830}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:36.899631   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000fbcf70}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:38.092994   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001bb6990}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:39.558591   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000907ba0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:40.944193   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000f22ad0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:42.021056   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001033370}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:43.094208   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001931dc0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:44.212643   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000c954a0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:45.561258   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00187a1b0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:47.457455   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0013cead0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:48.556852   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001dd6220}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:50.069506   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001fed530}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:51.100729   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001ef69a0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:52.206133   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001bb6ac0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:53.679664   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000907370}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:55.087145   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000f23a70}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:56.128073   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000f86f60}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:04:59.138337   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000f0e890}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:00.626140   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0018dcb70}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:02.335350   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0011eccb0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:03.558705   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001bed1c0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:04.578802   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000fbd700}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:05.586469   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001f9e740}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:06.671030   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00100c970}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:07.747515   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001ef29e0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:10.779777   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001654950}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:11.793618   19212 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232
	I0310 21:05:11.804322   19212 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232
	I0310 21:05:12.211314   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0009077b0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:13.699442   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001ca9810}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:14.793796   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001bfd020}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:16.430555   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00143c390}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:17.546960   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00187a0a0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:18.571538   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0014c2b60}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:19.631413   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001d95160}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:21.682515   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001a83480}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:23.249725   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001e628f0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:24.477991   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0018b88d0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:25.649283   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0016c8b10}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:27.095239   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001d76070}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:28.374264   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0017a8ed0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:29.551433   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001032ca0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:30.596212   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000d6de10}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:31.716240   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001bfdbb0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:33.220454   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0018dc460}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:34.522110   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0011ec010}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:35.903834   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001045cb0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:37.376266   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000fbc320}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:38.640092   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001d948f0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:39.775745   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0012fe020}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:41.062039   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001bb6550}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:42.169097   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0016c9fe0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:43.768026   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001d779a0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:45.072974   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0021964a0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:46.094405   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0017a9d20}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:47.568371   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001032f50}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:48.677487   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:37 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.9 PodIP: PodIPs:[] StartTime:2021-03-10 21:03:41 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0018dc660}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:05:50.513757   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]}
	I0310 21:05:51.580208   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]}
	I0310 21:05:52.720725   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]}
	I0310 21:05:54.406552   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]}
	I0310 21:05:55.653154   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]}
	I0310 21:05:57.033872   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]}
	I0310 21:05:58.190344   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]}
	I0310 21:05:59.590339   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]}
	I0310 21:06:00.776167   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]}
	I0310 21:06:02.294881   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]}
	I0310 21:06:03.710395   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]}
	I0310 21:06:04.903772   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]}
	I0310 21:06:06.485182   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]}
	I0310 21:06:07.892544   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]}
	I0310 21:06:09.115045   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]}
	I0310 21:06:10.217585   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]}
	I0310 21:06:11.254563   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]}
	I0310 21:06:12.621860   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]}
	I0310 21:06:14.068296   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]}
	I0310 21:06:15.142326   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]}
	I0310 21:06:17.409070   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]}
	I0310 21:06:18.658650   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]}
	I0310 21:06:20.053462   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]}
	I0310 21:06:21.490259   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]}
	I0310 21:06:22.641511   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]}
	I0310 21:06:23.837431   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]}
	I0310 21:06:25.106190   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]}
	I0310 21:06:26.620695   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]}
	I0310 21:06:31.447524   19212 pod_ready.go:102] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:41 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]}
	I0310 21:06:32.534618   19212 pod_ready.go:97] pod "coredns-74ff55c5b-dqrb4" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:06:25 +0000 GMT Reason: Message:}
	I0310 21:06:32.537173   19212 pod_ready.go:62] duration metric: took 3m41.0574223s to run WaitForPodReadyByLabel for pod with "kube-dns" label in "kube-system" namespace ...
	I0310 21:06:32.537173   19212 pod_ready.go:59] waiting 6m0s for pod with "etcd" label in "kube-system" namespace to be Ready ...
	I0310 21:06:33.096195   19212 pod_ready.go:97] pod "etcd-default-k8s-different-port-20210310205202-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:56 +0000 GMT Reason: Message:}
	I0310 21:06:33.096195   19212 pod_ready.go:62] duration metric: took 558.783ms to run WaitForPodReadyByLabel for pod with "etcd" label in "kube-system" namespace ...
	I0310 21:06:33.096195   19212 pod_ready.go:59] waiting 6m0s for pod with "kube-apiserver" label in "kube-system" namespace to be Ready ...
	I0310 21:06:33.826816   19212 pod_ready.go:97] pod "kube-apiserver-default-k8s-different-port-20210310205202-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:04:06 +0000 GMT Reason: Message:}
	I0310 21:06:33.827032   19212 pod_ready.go:62] duration metric: took 730.8387ms to run WaitForPodReadyByLabel for pod with "kube-apiserver" label in "kube-system" namespace ...
	I0310 21:06:33.827239   19212 pod_ready.go:59] waiting 6m0s for pod with "kube-controller-manager" label in "kube-system" namespace to be Ready ...
	I0310 21:06:34.098861   19212 pod_ready.go:97] pod "kube-controller-manager-default-k8s-different-port-20210310205202-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:34 +0000 GMT Reason: Message:}
	I0310 21:06:34.098861   19212 pod_ready.go:62] duration metric: took 271.4057ms to run WaitForPodReadyByLabel for pod with "kube-controller-manager" label in "kube-system" namespace ...
	I0310 21:06:34.098861   19212 pod_ready.go:59] waiting 6m0s for pod with "kube-proxy" label in "kube-system" namespace to be Ready ...
	I0310 21:06:34.838572   19212 pod_ready.go:97] pod "kube-proxy-j2jg9" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:05:50 +0000 GMT Reason: Message:}
	I0310 21:06:34.839029   19212 pod_ready.go:62] duration metric: took 740.1688ms to run WaitForPodReadyByLabel for pod with "kube-proxy" label in "kube-system" namespace ...
	I0310 21:06:34.839183   19212 pod_ready.go:59] waiting 6m0s for pod with "kube-scheduler" label in "kube-system" namespace to be Ready ...
	I0310 21:06:35.074345   19212 pod_ready.go:97] pod "kube-scheduler-default-k8s-different-port-20210310205202-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:05:15 +0000 GMT Reason: Message:}
	I0310 21:06:35.074345   19212 pod_ready.go:62] duration metric: took 235.1629ms to run WaitForPodReadyByLabel for pod with "kube-scheduler" label in "kube-system" namespace ...
	I0310 21:06:35.074345   19212 pod_ready.go:39] duration metric: took 3m43.5948163s for extra waiting for kube-system core pods to be Ready ...
	I0310 21:06:35.074572   19212 api_server.go:48] waiting for apiserver process to appear ...
	I0310 21:06:35.082612   19212 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0310 21:06:50.346748   19212 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3m47.5382607s)
	I0310 21:06:50.492399   19212 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210308233820-5396: (3m37.1336526s)
	I0310 21:06:50.492698   19212 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210310083645-5040: (3m37.0165245s)
	I0310 21:06:50.492698   19212 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210301195830-5700: (3m36.9723203s)
	I0310 21:06:50.498099   19212 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496: (3m26.6026806s)
	I0310 21:06:50.492698   19212 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210213143925-7440: (3m36.9468361s)
	I0310 21:06:50.492698   19212 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210115023213-8464: (3m36.9692482s)
	I0310 21:06:50.498099   19212 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496': No such file or directory
	I0310 21:06:50.493180   19212 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210304002630-1156: (3m37.0206843s)
	I0310 21:06:50.498099   19212 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516: (3m26.5343207s)
	I0310 21:06:50.498099   19212 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516': No such file or directory
	I0310 21:06:50.493578   19212 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210219145454-9520: (3m37.07616s)
	I0310 21:06:50.493578   19212 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120214442-10992: (3m36.9360504s)
	I0310 21:06:50.493578   19212 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120231122-7024: (3m37.0901311s)
	I0310 21:06:50.493578   19212 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210119220838-6552: (3m37.0879733s)
	I0310 21:06:50.498458   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516 (4096 bytes)
	I0310 21:06:50.493888   19212 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120175851-7432: (3m36.8696179s)
	I0310 21:06:50.498458   19212 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464: (3m26.7182698s)
	I0310 21:06:50.498458   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496 (4096 bytes)
	I0310 21:06:50.498458   19212 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464': No such file or directory
	I0310 21:06:50.498863   19212 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520: (3m26.7142111s)
	I0310 21:06:50.498099   19212 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992: (3m26.7218063s)
	I0310 21:06:50.498863   19212 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520': No such file or directory
	I0310 21:06:50.493888   19212 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210306072141-12056: (3m36.4997229s)
	I0310 21:06:50.498863   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464 (4096 bytes)
	I0310 21:06:50.494226   19212 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210114204234-6692: (3m36.695473s)
	I0310 21:06:50.498863   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520 (4096 bytes)
	I0310 21:06:50.494226   19212 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210123004019-5372: (3m36.9068495s)
	I0310 21:06:50.494226   19212 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210219220622-3920: (3m36.7978523s)
	I0310 21:06:50.494546   19212 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210225231842-5736: (3m36.3565996s)
	I0310 21:06:50.494546   19212 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210303214129-4588: (3m36.495662s)
	I0310 21:06:50.494546   19212 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492: (3m36.101014s)
	I0310 21:06:50.499314   19212 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492': No such file or directory
	I0310 21:06:50.499314   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492 (4096 bytes)
	I0310 21:06:50.494546   19212 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210309234032-4944: (3m36.7946553s)
	I0310 21:06:50.495052   19212 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210220004129-7452: (3m36.7294877s)
	I0310 21:06:50.495052   19212 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210115191024-3516: (3m36.9012569s)
	I0310 21:06:50.495412   19212 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210212145109-352: (3m36.5271567s)
	I0310 21:06:50.495412   19212 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748: (3m36.3813028s)
	I0310 21:06:50.495412   19212 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856: (3m35.497939s)
	I0310 21:06:50.495412   19212 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440: (3m27.097925s)
	I0310 21:06:50.495773   19212 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352: (3m26.9563846s)
	I0310 21:06:50.495773   19212 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512: (3m35.9807684s)
	I0310 21:06:50.495773   19212 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210128021318-232: (3m36.355846s)
	I0310 21:06:50.496180   19212 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140: (3m27.9409803s)
	I0310 21:06:50.496180   19212 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692: (3m26.5716723s)
	I0310 21:06:50.496180   19212 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210126212539-5172: (3m36.4663906s)
	I0310 21:06:50.496180   19212 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944: (3m26.5324015s)
	I0310 21:06:50.496543   19212 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210310191609-6496: (3m36.8522833s)
	I0310 21:06:50.496543   19212 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396: (3m27.1318163s)
	I0310 21:06:50.496543   19212 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172: (3m26.6205192s)
	I0310 21:06:50.496543   19212 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210224014800-800: (3m36.4009973s)
	I0310 21:06:50.497056   19212 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088: (3m36.1334949s)
	I0310 21:06:50.497056   19212 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920: (3m26.8591995s)
	I0310 21:06:50.497420   19212 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552: (3m26.5336417s)
	I0310 21:06:50.497420   19212 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024: (3m26.8538966s)
	I0310 21:06:50.497420   19212 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052: (3m26.7181934s)
	I0310 21:06:50.497764   19212 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736: (3m26.6616021s)
	I0310 21:06:50.497764   19212 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056: (3m26.7211884s)
	I0310 21:06:50.498458   19212 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156: (3m26.9092529s)
	I0310 21:06:50.498863   19212 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992': No such file or directory
	I0310 21:06:50.499939   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992 (4096 bytes)
	I0310 21:06:50.500146   19212 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512': No such file or directory
	I0310 21:06:50.500146   19212 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172': No such file or directory
	I0310 21:06:50.500146   19212 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140': No such file or directory
	I0310 21:06:50.500392   19212 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352': No such file or directory
	I0310 21:06:50.500392   19212 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088': No such file or directory
	I0310 21:06:50.500392   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352 (4096 bytes)
	I0310 21:06:50.500392   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088 (4096 bytes)
	I0310 21:06:50.500146   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172 (4096 bytes)
	I0310 21:06:50.500701   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140 (4096 bytes)
	I0310 21:06:50.500392   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512 (4096 bytes)
	I0310 21:06:50.500146   19212 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552': No such file or directory
	I0310 21:06:50.501569   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552 (4096 bytes)
	I0310 21:06:50.500392   19212 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736': No such file or directory
	I0310 21:06:50.500392   19212 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056': No such file or directory
	I0310 21:06:50.500392   19212 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156': No such file or directory
	I0310 21:06:50.500392   19212 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748': No such file or directory
	I0310 21:06:50.500392   19212 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024': No such file or directory
	I0310 21:06:50.500392   19212 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052': No such file or directory
	I0310 21:06:50.500392   19212 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856': No such file or directory
	I0310 21:06:50.500392   19212 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944': No such file or directory
	I0310 21:06:50.500392   19212 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440': No such file or directory
	I0310 21:06:50.500392   19212 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920': No such file or directory
	I0310 21:06:50.500392   19212 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692': No such file or directory
	I0310 21:06:50.500146   19212 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396': No such file or directory
	I0310 21:06:50.502362   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396 (4096 bytes)
	I0310 21:06:50.502362   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944 (4096 bytes)
	I0310 21:06:50.502615   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052 (4096 bytes)
	I0310 21:06:50.502615   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056 (4096 bytes)
	I0310 21:06:50.502615   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920 (4096 bytes)
	I0310 21:06:50.502615   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156 (4096 bytes)
	I0310 21:06:50.502615   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440 (4096 bytes)
	I0310 21:06:50.502615   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856 (4096 bytes)
	I0310 21:06:50.502615   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692 (4096 bytes)
	I0310 21:06:50.502615   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748 (4096 bytes)
	I0310 21:06:50.502615   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736 (4096 bytes)
	I0310 21:06:50.509024   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024 (4096 bytes)
	I0310 21:06:50.523180   19212 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3m45.0274691s)
	I0310 21:06:50.534678   19212 out.go:129] * Enabled addons: storage-provisioner, default-storageclass
	I0310 21:06:50.535544   19212 addons.go:383] enableAddons completed in 4m2.6228477s
	W0310 21:06:51.084698   19212 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:06:51.084698   19212 retry.go:31] will retry after 149.242379ms: ssh: rejected: connect failed (open failed)
	W0310 21:06:51.085225   19212 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:06:51.085225   19212 retry.go:31] will retry after 200.227965ms: ssh: rejected: connect failed (open failed)
	W0310 21:06:51.085225   19212 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:06:51.085225   19212 retry.go:31] will retry after 253.803157ms: ssh: rejected: connect failed (open failed)
	W0310 21:06:51.085225   19212 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:06:51.085225   19212 retry.go:31] will retry after 328.409991ms: ssh: rejected: connect failed (open failed)
	W0310 21:06:51.085225   19212 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:06:51.085225   19212 retry.go:31] will retry after 178.565968ms: ssh: rejected: connect failed (open failed)
	W0310 21:06:51.085225   19212 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:06:51.085225   19212 retry.go:31] will retry after 220.164297ms: ssh: rejected: connect failed (open failed)
	W0310 21:06:51.085225   19212 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:06:51.085225   19212 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:06:51.085225   19212 retry.go:31] will retry after 204.514543ms: ssh: rejected: connect failed (open failed)
	W0310 21:06:51.085225   19212 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:06:51.085225   19212 retry.go:31] will retry after 242.222461ms: ssh: rejected: connect failed (open failed)
	I0310 21:06:51.085225   19212 retry.go:31] will retry after 195.758538ms: ssh: rejected: connect failed (open failed)
	W0310 21:06:51.085225   19212 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:06:51.085225   19212 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:06:51.085225   19212 retry.go:31] will retry after 294.771169ms: ssh: rejected: connect failed (open failed)
	I0310 21:06:51.085225   19212 retry.go:31] will retry after 198.275464ms: ssh: rejected: connect failed (open failed)
	W0310 21:06:51.085225   19212 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:06:51.085225   19212 retry.go:31] will retry after 179.638263ms: ssh: rejected: connect failed (open failed)
	W0310 21:06:51.085225   19212 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:06:51.085225   19212 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:06:51.085225   19212 retry.go:31] will retry after 215.217854ms: ssh: rejected: connect failed (open failed)
	I0310 21:06:51.085225   19212 retry.go:31] will retry after 175.796719ms: ssh: rejected: connect failed (open failed)
	I0310 21:06:51.245362   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:06:51.274488   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:06:51.275493   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:06:51.277617   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:06:51.311503   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:06:51.325273   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:06:51.331065   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:06:51.350837   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:06:51.356686   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:06:51.362507   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:06:51.367975   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:06:51.367975   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:06:51.408818   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:06:51.442999   19212 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496
	I0310 21:06:52.363846   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (1.0862313s)
	I0310 21:06:52.364333   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:06:52.379268   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (1.1339082s)
	I0310 21:06:52.379480   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:06:52.569755   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (1.2578225s)
	I0310 21:06:52.569755   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:06:52.582446   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (1.3074729s)
	I0310 21:06:52.582652   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:06:52.599944   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (1.2319709s)
	I0310 21:06:52.599944   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (1.24326s)
	I0310 21:06:52.599944   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:06:52.600267   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (1.1914508s)
	I0310 21:06:52.600267   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (1.2375882s)
	I0310 21:06:52.600267   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:06:52.600267   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (1.2491095s)
	I0310 21:06:52.600267   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:06:52.600710   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:06:52.600979   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:06:52.605001   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (1.2797304s)
	I0310 21:06:52.605001   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:06:52.610732   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (1.1677355s)
	I0310 21:06:52.610732   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:06:52.641443   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (1.3659524s)
	I0310 21:06:52.641952   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:06:52.651534   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (1.2835607s)
	I0310 21:06:52.651737   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	I0310 21:06:52.653499   19212 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20210310205202-6496: (1.3219961s)
	I0310 21:06:52.653681   19212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55156 SSHKeyPath:C:\Users\jenkins\.minikube\machines\default-k8s-different-port-20210310205202-6496\id_rsa Username:docker}
	W0310 21:06:58.369791   19212 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	W0310 21:06:58.501274   19212 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:07:13.531609   19212 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588: (3m49.5961132s)
	I0310 21:07:13.531609   19212 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232: (2m1.7275331s)
	I0310 21:07:13.531609   19212 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 from cache
	I0310 21:07:13.531609   19212 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588': No such file or directory
	I0310 21:07:13.532357   19212 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700
	I0310 21:07:13.531609   19212 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: (38.4487654s)
	I0310 21:07:13.532357   19212 logs.go:255] 1 containers: [44043b6a8198]
	I0310 21:07:13.532357   19212 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588 (4096 bytes)
	I0310 21:07:13.541883   19212 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700
	I0310 21:07:13.542280   19212 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0310 21:07:38.206993   19212 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: (24.6647575s)
	I0310 21:07:38.207376   19212 logs.go:255] 1 containers: [69efae781c0b]
	I0310 21:07:38.216356   19212 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0310 21:07:47.643947   19212 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: (9.4276075s)
	I0310 21:07:47.644194   19212 logs.go:255] 1 containers: [0ce70105ef45]
	I0310 21:07:47.663416   19212 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0310 21:07:47.677197   19212 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700: (34.1353759s)
	I0310 21:07:47.678275   19212 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 from cache
	I0310 21:07:47.678275   19212 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452
	I0310 21:07:47.688127   19212 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452
	I0310 21:07:54.585587   19212 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: (6.9221834s)
	I0310 21:07:54.586154   19212 logs.go:255] 1 containers: [cc170dc9a3a5]
	I0310 21:07:54.600786   19212 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0310 21:07:59.376991   19212 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: (4.7759747s)
	I0310 21:07:59.376991   19212 logs.go:255] 1 containers: [6cc1ac0f0822]
	I0310 21:07:59.383899   19212 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0310 21:07:59.389297   19212 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452: (11.7010907s)
	I0310 21:07:59.389297   19212 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 from cache
	I0310 21:07:59.389619   19212 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432
	I0310 21:07:59.399349   19212 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432
	I0310 21:08:11.061210   19212 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432: (11.6616371s)
	I0310 21:08:11.061593   19212 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}: (11.6777146s)
	I0310 21:08:11.061593   19212 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 from cache
	I0310 21:08:11.061593   19212 logs.go:255] 0 containers: []
	W0310 21:08:11.061593   19212 logs.go:257] No container was found matching "kubernetes-dashboard"
	I0310 21:08:11.061913   19212 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984
	I0310 21:08:11.070743   19212 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0310 21:08:11.075568   19212 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984
	I0310 21:08:17.732612   19212 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: (6.6618809s)
	I0310 21:08:17.732612   19212 logs.go:255] 1 containers: [af28c0367661]
	I0310 21:08:17.742082   19212 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0310 21:08:28.504088   19212 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984: (17.4282608s)
	I0310 21:08:28.504088   19212 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 from cache
	I0310 21:08:28.504088   19212 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800
	I0310 21:08:28.504088   19212 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: (10.7620249s)
	I0310 21:08:28.504547   19212 logs.go:255] 2 containers: [bf37cfa32c85 92f2244695b6]
	I0310 21:08:28.504547   19212 logs.go:122] Gathering logs for dmesg ...
	I0310 21:08:28.504547   19212 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0310 21:08:28.512444   19212 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800
	I0310 21:08:31.234883   19212 ssh_runner.go:189] Completed: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400": (2.7303403s)
	I0310 21:08:31.238386   19212 logs.go:122] Gathering logs for describe nodes ...
	I0310 21:08:31.238386   19212 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0310 21:08:54.053353   19212 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (22.8150065s)
	I0310 21:08:54.055410   19212 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800: (25.5430092s)
	I0310 21:08:54.055410   19212 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 from cache
	I0310 21:08:54.055410   19212 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372
	I0310 21:08:54.057360   19212 logs.go:122] Gathering logs for etcd [69efae781c0b] ...
	I0310 21:08:54.057360   19212 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 69efae781c0b"
	I0310 21:08:54.066951   19212 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372
	I0310 21:09:06.430465   19212 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 69efae781c0b": (12.3731263s)
	I0310 21:09:06.471630   19212 logs.go:122] Gathering logs for kube-proxy [6cc1ac0f0822] ...
	I0310 21:09:06.472756   19212 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 6cc1ac0f0822"
	I0310 21:09:16.505881   19212 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 6cc1ac0f0822": (10.0321144s)
	I0310 21:09:16.507123   19212 logs.go:122] Gathering logs for kube-controller-manager [92f2244695b6] ...
	I0310 21:09:16.507123   19212 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 92f2244695b6"
	I0310 21:09:16.551143   19212 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372: (22.4842294s)
	I0310 21:09:16.551143   19212 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 from cache
	I0310 21:09:16.551639   19212 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160
	I0310 21:09:16.560350   19212 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160
	I0310 21:09:23.080183   19212 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 92f2244695b6": (6.5730702s)
	I0310 21:09:23.096940   19212 logs.go:122] Gathering logs for kubelet ...
	I0310 21:09:23.096940   19212 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0310 21:09:34.204398   19212 ssh_runner.go:189] Completed: /bin/bash -c "sudo journalctl -u kubelet -n 400": (11.1074765s)
	I0310 21:09:34.217803   19212 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160: (17.6574813s)
	I0310 21:09:34.217803   19212 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 from cache
	I0310 21:09:34.217803   19212 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040
	I0310 21:09:34.224811   19212 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040
	I0310 21:09:34.294721   19212 logs.go:122] Gathering logs for coredns [0ce70105ef45] ...
	I0310 21:09:34.294721   19212 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 0ce70105ef45"
	I0310 21:09:49.780168   19212 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 0ce70105ef45": (15.4854724s)
	I0310 21:09:49.782570   19212 logs.go:122] Gathering logs for kube-scheduler [cc170dc9a3a5] ...
	I0310 21:09:49.782570   19212 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 cc170dc9a3a5"
	I0310 21:09:49.837951   19212 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040: (15.6131652s)
	I0310 21:09:49.838767   19212 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 from cache
	I0310 21:09:49.838767   19212 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464
	I0310 21:09:49.846843   19212 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464
	I0310 21:09:58.826666   19212 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 cc170dc9a3a5": (9.0441099s)
	I0310 21:09:58.843037   19212 logs.go:122] Gathering logs for storage-provisioner [af28c0367661] ...
	I0310 21:09:58.843037   19212 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 af28c0367661"
	I0310 21:10:20.082887   19212 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464: (30.2353673s)
	I0310 21:10:20.082887   19212 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 from cache
	I0310 21:10:20.082887   19212 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992
	I0310 21:10:20.083345   19212 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 af28c0367661": (21.2403405s)
	I0310 21:10:20.085667   19212 logs.go:122] Gathering logs for kube-controller-manager [bf37cfa32c85] ...
	I0310 21:10:20.086154   19212 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 bf37cfa32c85"
	I0310 21:10:20.092804   19212 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992
	I0310 21:10:31.800305   19212 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 bf37cfa32c85": (11.7141687s)
	I0310 21:10:31.818991   19212 logs.go:122] Gathering logs for Docker ...
	I0310 21:10:31.818991   19212 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0310 21:10:38.742397   19212 ssh_runner.go:189] Completed: /bin/bash -c "sudo journalctl -u docker -n 400": (6.9234166s)
	I0310 21:10:38.754483   19212 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992: (18.6617066s)
	I0310 21:10:38.754483   19212 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 from cache
	I0310 21:10:38.754483   19212 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496
	I0310 21:10:38.763208   19212 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496
	I0310 21:10:38.763208   19212 logs.go:122] Gathering logs for container status ...
	I0310 21:10:38.763208   19212 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0310 21:10:57.591459   19212 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496: (18.8279518s)
	I0310 21:10:57.591595   19212 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 from cache
	I0310 21:10:57.591595   19212 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352
	I0310 21:10:57.591595   19212 ssh_runner.go:189] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (18.8284152s)
	I0310 21:10:57.592133   19212 logs.go:122] Gathering logs for kube-apiserver [44043b6a8198] ...
	I0310 21:10:57.592133   19212 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 44043b6a8198"
	I0310 21:10:57.592133   19212 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352
	I0310 21:11:17.335828   19212 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 44043b6a8198": (19.7437241s)
	I0310 21:11:17.357601   19212 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352: (19.7654973s)
	I0310 21:11:17.357601   19212 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 from cache
	I0310 21:11:17.357601   19212 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520
	I0310 21:11:17.364607   19212 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520
	I0310 21:11:19.884671   19212 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 21:11:32.268341   19212 ssh_runner.go:189] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (12.3832464s)
	I0310 21:11:32.268341   19212 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520: (14.9037566s)
	I0310 21:11:32.268341   19212 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 from cache
	I0310 21:11:32.268341   19212 api_server.go:68] duration metric: took 8m44.3586493s to wait for apiserver process to appear ...
	I0310 21:11:32.268341   19212 api_server.go:84] waiting for apiserver healthz status ...
	I0310 21:11:32.268341   19212 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492
	I0310 21:11:32.273248   19212 out.go:129] 
	W0310 21:11:32.273953   19212 out.go:191] X Exiting due to GUEST_START: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	X Exiting due to GUEST_START: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	W0310 21:11:32.273953   19212 out.go:191] * 
	* 
	W0310 21:11:32.273953   19212 out.go:191] * If the above advice does not help, please let us know: 
	* If the above advice does not help, please let us know: 
	W0310 21:11:32.274337   19212 out.go:191]   - https://github.com/kubernetes/minikube/issues/new/choose
	  - https://github.com/kubernetes/minikube/issues/new/choose
	I0310 21:11:32.277683   19212 out.go:129] 
** /stderr **
start_stop_delete_test.go:157: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p default-k8s-different-port-20210310205202-6496 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.20.2": exit status 80
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:226: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/FirstStart]: docker inspect <======
helpers_test.go:227: (dbg) Run:  docker inspect default-k8s-different-port-20210310205202-6496
helpers_test.go:231: (dbg) docker inspect default-k8s-different-port-20210310205202-6496:
-- stdout --
	[
	    {
	        "Id": "0a23063c9caf23247a91e5aecf6bc099b6cf7bb25b976b7b1cd7fb301a122b63",
	        "Created": "2021-03-10T20:52:32.7671922Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 234137,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-03-10T20:52:38.8381413Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a776c544501ab7f8d55c0f9d8df39bc284df5e744ef1ab4fa59bbd753c98d5f6",
	        "ResolvConfPath": "/var/lib/docker/containers/0a23063c9caf23247a91e5aecf6bc099b6cf7bb25b976b7b1cd7fb301a122b63/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0a23063c9caf23247a91e5aecf6bc099b6cf7bb25b976b7b1cd7fb301a122b63/hostname",
	        "HostsPath": "/var/lib/docker/containers/0a23063c9caf23247a91e5aecf6bc099b6cf7bb25b976b7b1cd7fb301a122b63/hosts",
	        "LogPath": "/var/lib/docker/containers/0a23063c9caf23247a91e5aecf6bc099b6cf7bb25b976b7b1cd7fb301a122b63/0a23063c9caf23247a91e5aecf6bc099b6cf7bb25b976b7b1cd7fb301a122b63-json.log",
	        "Name": "/default-k8s-different-port-20210310205202-6496",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20210310205202-6496:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/49def441c4e7bb61f97cac838d6cef129db59d4118b65b5e2a40bbb0d1251cfe-init/diff:/var/lib/docker/overlay2/e5312d15a8a915e93a04215cc746ab0f3ee777399eec34d39c62e1159ded6475/diff:/var/lib/docker/overlay2/7a0a882052451678339ecb9d7440ec6d5af2189761df37cbde268f01c77df460/diff:/var/lib/docker/overlay2/746bde091f748e661ba6c95c88447941058b3ed556ae8093cbbb4e946b8cc8fc/diff:/var/lib/docker/overlay2/fc816087ea38e2594891ac87d2dada91401ed5005ffde006568049c9be7a9cb3/diff:/var/lib/docker/overlay2/c938fe5d14accfe7fb1cfd038f5f0c36e7c4e36df3f270be6928a759dfc0318c/diff:/var/lib/docker/overlay2/79f1d61f3af3e525b3f9a25c7bdaddec8275b7d0abab23c045e084119ae755dc/diff:/var/lib/docker/overlay2/90cf1530750ebebdcb04a136dc1eff0ae9c5f7d0f826d2942e3bc67cc0d42206/diff:/var/lib/docker/overlay2/c4edb0a62bb52174a4a17530aa58139aaca1b2f67bef36b9ac59af93c93e2c94/diff:/var/lib/docker/overlay2/684c18bb05910b09352e8708238b2fa6512de91e323e59bd78eacb12954baeb1/diff:/var/lib/docker/overlay2/2e3b944d36fea0e106f5f8885b981d040914c224a656b6480e4ccf00c624527c/diff:/var/lib/docker/overlay2/87489103e2d5dffa2fc32cae802418fe57adcbe87d86e2828e51be8620ef72d6/diff:/var/lib/docker/overlay2/1ceb557df677600f55a42e739f85c2aaa6ce2a1311fb93d5cc655d3e5c25ad62/diff:/var/lib/docker/overlay2/a49e38adf04466e822fa1b2fef7a5de41bb43b706f64d6d440f7e9f95f86634d/diff:/var/lib/docker/overlay2/9dc51530b3628075c33d53a96d8db831d206f65190ca62a4730d525c9d2433bc/diff:/var/lib/docker/overlay2/53f43d443cc953bcbb3b9fe305be45d40de2233dd6a1e94a6e0120c143df01b9/diff:/var/lib/docker/overlay2/5449e801ebacbe613538db4de658f2419f742a0327f7c0f5079ee525f64c2f45/diff:/var/lib/docker/overlay2/ea06a62429770eade2e9df8b04495201586fe6a867b2d3f827db06031f237171/diff:/var/lib/docker/overlay2/d6c5e6e0bb04112b08a45e5a42f364eaa6a1d4d0384b8833c6f268a32b51f9fd/diff:/var/lib/docker/overlay2/6ccb699e5aa7954f17536e35273361a64a06f22e3951b5adad1acd3645302d27/diff:/var/lib/docker/overlay2/004fc7885cd3ddf4f532a6047a393b87947996f738f36ee3e048130b82676264/diff:/var/lib/docker/overlay2/28ff0102771d8fb3c2957c482cc5dde1a6d041144f1be78d385fd4a305b89048/diff:/var/lib/docker/overlay2/7cc8b00a8717390d3cf217bc23ebf0a54386c4dda751f8efe67a113c5f724000/diff:/var/lib/docker/overlay2/4f20f5888dc26fc4b177fad4ffbb2c9e5094bbe36dffc15530b13ee98b5ee0de/diff:/var/lib/docker/overlay2/5070ba0e8a3df0c0cb4c34bb528c4e1aa4cf13d689f2bbbb0a4033d021c3be55/diff:/var/lib/docker/overlay2/85fb29f9d3603bf7f2eb144a096a9fa432e57ecf575b0c26cc8e06a79e791202/diff:/var/lib/docker/overlay2/27786674974d44dc21a18719c8c334da44a78f825923f5e86233f5acb0fb9a6b/diff:/var/lib/docker/overlay2/6161fd89f27732c46f547c0e38ff9f16ab0dded81cef66938df3935f21afad40/diff:/var/lib/docker/overlay2/222f85b8439a41d62b9939a4060303543930a89e81bc2fbb3f5211b3bcc73501/diff:/var/lib/docker/overlay2/0ab769dc143c949b9367c48721351c8571d87199847b23acedce0134a8cf4d0d/diff:/var/lib/docker/overlay2/a3d1b5f3c1ccf55415e08e306d69739f2d6a4346a4b20ff9273ac1417c146a72/diff:/var/lib/docker/overlay2/a6111408a4deb25c6259bdabf49b0b77a4ae38a1c9c76c9dd238ab00e4377edd/diff:/var/lib/docker/overlay2/beb22abaee6a1cc2ce6935aa31a8ea6aa14ba428d35b5c2423d1d9e4b5b1e88e/diff:/var/lib/docker/overlay2/0c228b1d0cd4cc1d3df10a7397af2a91846bfbad745d623822dc200ff7ddd4f7/diff:/var/lib/docker/overlay2/3c7f265116e8b14ac4c171583da8e68ad42c63e9faa735c10d5b01c2889c03d8/diff:/var/lib/docker/overlay2/85c9b77072d5575bcf4bc334d0e85cccec247e2ff21bc8a181ee7c1411481a92/diff:/var/lib/docker/overlay2/1b4797f63f1bf0acad8cd18715c737239cbce844810baf1378b65224c01957ff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49def441c4e7bb61f97cac838d6cef129db59d4118b65b5e2a40bbb0d1251cfe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49def441c4e7bb61f97cac838d6cef129db59d4118b65b5e2a40bbb0d1251cfe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49def441c4e7bb61f97cac838d6cef129db59d4118b65b5e2a40bbb0d1251cfe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20210310205202-6496",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20210310205202-6496/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20210310205202-6496",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20210310205202-6496",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20210310205202-6496",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7ae5c85c25a3c7d419e72b038d8b798aa18b164a3a6e0b21ae322faf61bd068b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55156"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55155"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55154"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55153"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7ae5c85c25a3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "7194d14c86ce93fa12517fc138a3f9fa7090df0ea11ec55d163012ce5bbfba6d",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.9",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:09",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "08a9fabe5ecafdd8ee96dd0a14d1906037e82623ac7365c62552213da5f59cf7",
	                    "EndpointID": "7194d14c86ce93fa12517fc138a3f9fa7090df0ea11ec55d163012ce5bbfba6d",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.9",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:09",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20210310205202-6496 -n default-k8s-different-port-20210310205202-6496
=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
helpers_test.go:235: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20210310205202-6496 -n default-k8s-different-port-20210310205202-6496: (28.5582465s)
helpers_test.go:240: <<< TestStartStop/group/default-k8s-different-port/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:241: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/FirstStart]: minikube logs <======
helpers_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe -p default-k8s-different-port-20210310205202-6496 logs -n 25
=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
helpers_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe -p default-k8s-different-port-20210310205202-6496 logs -n 25: (4m12.181985s)
helpers_test.go:248: TestStartStop/group/default-k8s-different-port/serial/FirstStart logs: 
-- stdout --
	* ==> Docker <==
	* -- Logs begin at Wed 2021-03-10 20:52:50 UTC, end at Wed 2021-03-10 21:13:41 UTC. --
	* Mar 10 20:56:27 default-k8s-different-port-20210310205202-6496 systemd[1]: Starting Docker Application Container Engine...
	* Mar 10 20:56:28 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T20:56:28.852406700Z" level=info msg="Starting up"
	* Mar 10 20:56:28 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T20:56:28.961772500Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Mar 10 20:56:28 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T20:56:28.961851400Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Mar 10 20:56:28 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T20:56:28.961902100Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	* Mar 10 20:56:28 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T20:56:28.961934300Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Mar 10 20:56:29 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T20:56:29.103908400Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Mar 10 20:56:29 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T20:56:29.125766900Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Mar 10 20:56:29 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T20:56:29.130829100Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	* Mar 10 20:56:29 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T20:56:29.131104300Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Mar 10 20:56:29 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T20:56:29.575176400Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	* Mar 10 20:56:29 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T20:56:29.746203500Z" level=info msg="Loading containers: start."
	* Mar 10 20:56:33 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T20:56:33.001193200Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	* Mar 10 20:56:34 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T20:56:34.156088300Z" level=info msg="Loading containers: done."
	* Mar 10 20:56:34 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T20:56:34.887786200Z" level=info msg="Docker daemon" commit=46229ca graphdriver(s)=overlay2 version=20.10.3
	* Mar 10 20:56:34 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T20:56:34.887917600Z" level=info msg="Daemon has completed initialization"
	* Mar 10 20:56:35 default-k8s-different-port-20210310205202-6496 systemd[1]: Started Docker Application Container Engine.
	* Mar 10 20:56:35 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T20:56:35.920699200Z" level=info msg="API listen on [::]:2376"
	* Mar 10 20:56:36 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T20:56:36.104917000Z" level=info msg="API listen on /var/run/docker.sock"
	* Mar 10 21:00:22 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T21:00:22.007204800Z" level=info msg="ignoring event" container=92f2244695b68ec5eaa63f7b57f392115892164fdf6a258ac5efb8aeae302062 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:04:47 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T21:04:47.006756600Z" level=error msg="stream copy error: reading from a closed fifo"
	* Mar 10 21:04:47 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T21:04:47.014853400Z" level=error msg="stream copy error: reading from a closed fifo"
	* Mar 10 21:04:50 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T21:04:50.700968500Z" level=error msg="93932b9ba010da072ccf1f4a473432df0e38f09776fcf739444b815cc718eadc cleanup: failed to delete container from containerd: no such container"
	* Mar 10 21:04:50 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T21:04:50.701074500Z" level=error msg="Handler for POST /v1.40/containers/93932b9ba010da072ccf1f4a473432df0e38f09776fcf739444b815cc718eadc/start returned error: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: read init-p: connection reset by peer: unknown"
	* Mar 10 21:13:09 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T21:13:09.324341200Z" level=info msg="ignoring event" container=af28c036766178fda8a8c02586fbc343f12e3fea03619d61d1a225860811ea29 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* 
	* ==> container status <==
	* time="2021-03-10T21:13:49Z" level=fatal msg="failed to connect: failed to connect, make sure you are running as root and the runtime has been started: context deadline exceeded"
	* CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS                            PORTS     NAMES
	* af28c0367661   85069258b98a           "/storage-provisioner"   6 minutes ago    Exited (255) About a minute ago             k8s_storage-provisioner_storage-provisioner_kube-system_5750970b-b6e6-4283-839d-d9eaddeb5c46_0
	* ea898f38edc8   k8s.gcr.io/pause:3.2   "/pause"                 6 minutes ago    Up 6 minutes                                k8s_POD_storage-provisioner_kube-system_5750970b-b6e6-4283-839d-d9eaddeb5c46_0
	* 0ce70105ef45   bfe3a36ebd25           "/coredns -conf /etc…"     8 minutes ago    Up 8 minutes                                k8s_coredns_coredns-74ff55c5b-dqrb4_kube-system_ff13fd84-adb0-4d67-b692-1cd77970a503_0
	* 6cc1ac0f0822   43154ddb57a8           "/usr/local/bin/kube…"     8 minutes ago    Up 8 minutes                                k8s_kube-proxy_kube-proxy-j2jg9_kube-system_a3410f60-325a-4238-b6b8-5796e704f418_0
	* 75a0f76d075a   k8s.gcr.io/pause:3.2   "/pause"                 10 minutes ago   Up 8 minutes                                k8s_POD_coredns-74ff55c5b-dqrb4_kube-system_ff13fd84-adb0-4d67-b692-1cd77970a503_0
	* b2f343ffc28d   k8s.gcr.io/pause:3.2   "/pause"                 10 minutes ago   Up 9 minutes                                k8s_POD_kube-proxy-j2jg9_kube-system_a3410f60-325a-4238-b6b8-5796e704f418_0
	* bf37cfa32c85   a27166429d98           "kube-controller-man…"     13 minutes ago   Up 12 minutes                               k8s_kube-controller-manager_kube-controller-manager-default-k8s-different-port-20210310205202-6496_kube-system_57b8c22dbe6410e4bd36cf14b0f8bdc7_1
	* cc170dc9a3a5   ed2c44fbdd78           "kube-scheduler --au…"     15 minutes ago   Up 14 minutes                               k8s_kube-scheduler_kube-scheduler-default-k8s-different-port-20210310205202-6496_kube-system_6b4a0ee8b3d15a1c2e47c15d32e6eb0d_0
	* 92f2244695b6   a27166429d98           "kube-controller-man…"     15 minutes ago   Exited (255) 13 minutes ago                 k8s_kube-controller-manager_kube-controller-manager-default-k8s-different-port-20210310205202-6496_kube-system_57b8c22dbe6410e4bd36cf14b0f8bdc7_0
	* 44043b6a8198   a8c2fdb8bf76           "kube-apiserver --ad…"     15 minutes ago   Up 15 minutes                               k8s_kube-apiserver_kube-apiserver-default-k8s-different-port-20210310205202-6496_kube-system_c64be9e65fa7c2ee1e9273e853456e9c_0
	* 69efae781c0b   0369cf4303ff           "etcd --advertise-cl…"     15 minutes ago   Up 15 minutes                               k8s_etcd_etcd-default-k8s-different-port-20210310205202-6496_kube-system_943f625e1cc85924361727601bce138b_0
	* 4e85db81a5fe   k8s.gcr.io/pause:3.2   "/pause"                 15 minutes ago   Up 15 minutes                               k8s_POD_kube-scheduler-default-k8s-different-port-20210310205202-6496_kube-system_6b4a0ee8b3d15a1c2e47c15d32e6eb0d_0
	* 656bdeac8baf   k8s.gcr.io/pause:3.2   "/pause"                 15 minutes ago   Up 15 minutes                               k8s_POD_kube-controller-manager-default-k8s-different-port-20210310205202-6496_kube-system_57b8c22dbe6410e4bd36cf14b0f8bdc7_0
	* 9c8384d7f0b5   k8s.gcr.io/pause:3.2   "/pause"                 15 minutes ago   Up 15 minutes                               k8s_POD_kube-apiserver-default-k8s-different-port-20210310205202-6496_kube-system_c64be9e65fa7c2ee1e9273e853456e9c_0
	* 6ab27d612d9e   k8s.gcr.io/pause:3.2   "/pause"                 15 minutes ago   Up 15 minutes                               k8s_POD_etcd-default-k8s-different-port-20210310205202-6496_kube-system_943f625e1cc85924361727601bce138b_0
	* 
	* ==> coredns [0ce70105ef45] <==
	* I0310 21:05:53.292911       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-03-10 21:05:32.2685215 +0000 UTC m=+1.508836001) (total time: 21.0144185s):
	* Trace[2019727887]: [21.0144185s] [21.0144185s] END
	* E0310 21:05:53.293004       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* I0310 21:05:53.293061       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-03-10 21:05:32.268399 +0000 UTC m=+1.508713501) (total time: 21.0250283s):
	* Trace[1427131847]: [21.0250283s] [21.0250283s] END
	* E0310 21:05:53.293070       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* I0310 21:05:53.293100       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-03-10 21:05:32.2658815 +0000 UTC m=+1.506196001) (total time: 21.0277142s):
	* Trace[939984059]: [21.0277142s] [21.0277142s] END
	* E0310 21:05:53.293107       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* I0310 21:06:15.187778       1 trace.go:116] Trace[336122540]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-03-10 21:05:54.1636976 +0000 UTC m=+23.405543501) (total time: 21.0224514s):
	* Trace[336122540]: [21.0224514s] [21.0224514s] END
	* E0310 21:06:15.189266       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* I0310 21:06:15.712274       1 trace.go:116] Trace[646203300]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-03-10 21:05:54.6928835 +0000 UTC m=+23.934729401) (total time: 21.0177612s):
	* Trace[646203300]: [21.0177612s] [21.0177612s] END
	* E0310 21:06:15.712320       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* I0310 21:06:15.712513       1 trace.go:116] Trace[1747278511]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-03-10 21:05:54.6917495 +0000 UTC m=+23.933595401) (total time: 21.0192089s):
	* Trace[1747278511]: [21.0192089s] [21.0192089s] END
	* E0310 21:06:15.712528       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	* CoreDNS-1.7.0
	* linux/amd64, go1.14.4, f59c03d
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20210310205202-6496
	* Roles:              control-plane,master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=default-k8s-different-port-20210310205202-6496
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=4d52d607da107e3e4541e40b81376520ee87d4c2
	*                     minikube.k8s.io/name=default-k8s-different-port-20210310205202-6496
	*                     minikube.k8s.io/updated_at=2021_03_10T21_01_52_0700
	*                     minikube.k8s.io/version=v1.18.1
	*                     node-role.kubernetes.io/control-plane=
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Wed, 10 Mar 2021 21:00:46 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  default-k8s-different-port-20210310205202-6496
	*   AcquireTime:     <unset>
	*   RenewTime:       Wed, 10 Mar 2021 21:15:01 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Wed, 10 Mar 2021 21:11:20 +0000   Wed, 10 Mar 2021 21:00:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Wed, 10 Mar 2021 21:11:20 +0000   Wed, 10 Mar 2021 21:00:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Wed, 10 Mar 2021 21:11:20 +0000   Wed, 10 Mar 2021 21:00:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Wed, 10 Mar 2021 21:11:20 +0000   Wed, 10 Mar 2021 21:03:41 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  172.17.0.9
	*   Hostname:    default-k8s-different-port-20210310205202-6496
	* Capacity:
	*   cpu:                4
	*   ephemeral-storage:  65792556Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             20481980Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                4
	*   ephemeral-storage:  65792556Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             20481980Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 84fb46bd39d2483a97ab4430ee4a5e3a
	*   System UUID:                08addf25-0ddf-4c24-98ff-7ed3332985b4
	*   Boot ID:                    1e43cb90-c73a-415b-9855-33dabbdc5a83
	*   Kernel Version:             4.19.121-linuxkit
	*   OS Image:                   Ubuntu 20.04.1 LTS
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://20.10.3
	*   Kubelet Version:            v1.20.2
	*   Kube-Proxy Version:         v1.20.2
	* PodCIDR:                      10.244.0.0/24
	* PodCIDRs:                     10.244.0.0/24
	* Non-terminated Pods:          (7 in total)
	*   Namespace                   Name                                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                                                                      ------------  ----------  ---------------  -------------  ---
	*   kube-system                 coredns-74ff55c5b-dqrb4                                                   100m (2%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	*   kube-system                 etcd-default-k8s-different-port-20210310205202-6496                       100m (2%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	*   kube-system                 kube-apiserver-default-k8s-different-port-20210310205202-6496             250m (6%)     0 (0%)      0 (0%)           0 (0%)         11m
	*   kube-system                 kube-controller-manager-default-k8s-different-port-20210310205202-6496    200m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	*   kube-system                 kube-proxy-j2jg9                                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	*   kube-system                 kube-scheduler-default-k8s-different-port-20210310205202-6496             100m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	*   kube-system                 storage-provisioner                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m18s
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests    Limits
	*   --------           --------    ------
	*   cpu                750m (18%)  0 (0%)
	*   memory             170Mi (0%)  170Mi (0%)
	*   ephemeral-storage  100Mi (0%)  0 (0%)
	*   hugepages-1Gi      0 (0%)      0 (0%)
	*   hugepages-2Mi      0 (0%)      0 (0%)
	* Events:
	*   Type    Reason                   Age    From        Message
	*   ----    ------                   ----   ----        -------
	*   Normal  Starting                 12m    kubelet     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  12m    kubelet     Node default-k8s-different-port-20210310205202-6496 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    12m    kubelet     Node default-k8s-different-port-20210310205202-6496 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     12m    kubelet     Node default-k8s-different-port-20210310205202-6496 status is now: NodeHasSufficientPID
	*   Normal  NodeNotReady             12m    kubelet     Node default-k8s-different-port-20210310205202-6496 status is now: NodeNotReady
	*   Normal  NodeAllocatableEnforced  11m    kubelet     Updated Node Allocatable limit across pods
	*   Normal  NodeReady                11m    kubelet     Node default-k8s-different-port-20210310205202-6496 status is now: NodeReady
	*   Normal  Starting                 9m23s  kube-proxy  Starting kube-proxy.
	* 
	* ==> dmesg <==
	* [  +0.000006]  __hrtimer_run_queues+0x117/0x1c4
	* [  +0.000004]  ? ktime_get_update_offsets_now+0x36/0x95
	* [  +0.000002]  hrtimer_interrupt+0x92/0x165
	* [  +0.000004]  hv_stimer0_isr+0x20/0x2d
	* [  +0.000008]  hv_stimer0_vector_handler+0x3b/0x57
	* [  +0.000010]  hv_stimer0_callback_vector+0xf/0x20
	* [  +0.000001]  </IRQ>
	* [  +0.000002] RIP: 0010:native_safe_halt+0x7/0x8
	* [  +0.000002] Code: 60 02 df f0 83 44 24 fc 00 48 8b 00 a8 08 74 0b 65 81 25 dd ce 6f 71 ff ff ff 7f c3 e8 ce e6 72 ff f4 c3 e8 c7 e6 72 ff fb f4 <c3> 0f 1f 44 00 00 53 e8 69 0e 82 ff 65 8b 35 83 64 6f 71 31 ff e8
	* [  +0.000001] RSP: 0018:ffffffff8f203eb0 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff12
	* [  +0.000002] RAX: ffffffff8e918b30 RBX: 0000000000000000 RCX: ffffffff8f253150
	* [  +0.000001] RDX: 000000000012167e RSI: 0000000000000000 RDI: 0000000000000001
	* [  +0.000001] RBP: 0000000000000000 R08: 00000066a1710248 R09: 0000006be2541d3e
	* [  +0.000001] R10: ffff9130ad802288 R11: 0000000000000000 R12: 0000000000000000
	* [  +0.000001] R13: ffffffff8f215780 R14: 00000000f6d76244 R15: 0000000000000000
	* [  +0.000002]  ? __sched_text_end+0x1/0x1
	* [  +0.000011]  default_idle+0x1b/0x2c
	* [  +0.000001]  do_idle+0xe5/0x216
	* [  +0.000003]  cpu_startup_entry+0x6f/0x71
	* [  +0.000003]  start_kernel+0x4f6/0x514
	* [  +0.000006]  secondary_startup_64+0xa4/0xb0
	* [  +0.000006] ---[ end trace 8aa9ce4b885e8e86 ]---
	* [ +25.977799] hrtimer: interrupt took 3356400 ns
	* [Mar10 19:08] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	* [Mar10 19:49] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	* 
	* ==> etcd [69efae781c0b] <==
	* 2021-03-10 21:14:17.070748 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:14:18.994520 W | etcdserver: read-only range request "key:\"/registry/resourcequotas/\" range_end:\"/registry/resourcequotas0\" count_only:true " with result "range_response_count:0 size:5" took too long (147.7975ms) to execute
	* 2021-03-10 21:14:21.517672 W | etcdserver: request "header:<ID:11303041234760730131 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.17.0.9\" mod_revision:769 > success:<request_put:<key:\"/registry/masterleases/172.17.0.9\" value_size:65 lease:2079669197905954321 >> failure:<request_range:<key:\"/registry/masterleases/172.17.0.9\" > >>" with result "size:16" took too long (577.3441ms) to execute
	* 2021-03-10 21:14:21.929483 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" count_only:true " with result "range_response_count:0 size:5" took too long (1.0009869s) to execute
	* 2021-03-10 21:14:21.956599 W | etcdserver: request "header:<ID:11303041234760730133 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-different-port-20210310205202-6496\" mod_revision:753 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-different-port-20210310205202-6496\" value_size:7191 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-different-port-20210310205202-6496\" > >>" with result "size:16" took too long (369.6447ms) to execute
	* 2021-03-10 21:14:21.988681 W | etcdserver: read-only range request "key:\"/registry/persistentvolumeclaims/\" range_end:\"/registry/persistentvolumeclaims0\" count_only:true " with result "range_response_count:0 size:5" took too long (291.5765ms) to execute
	* 2021-03-10 21:14:29.957391 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:14:30.166380 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (117.7201ms) to execute
	* 2021-03-10 21:14:33.676573 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:14:38.330640 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:14:41.403735 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (176.7144ms) to execute
	* 2021-03-10 21:14:41.793250 W | etcdserver: request "header:<ID:11303041234760730185 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.17.0.9\" mod_revision:774 > success:<request_put:<key:\"/registry/masterleases/172.17.0.9\" value_size:65 lease:2079669197905954373 >> failure:<request_range:<key:\"/registry/masterleases/172.17.0.9\" > >>" with result "size:16" took too long (350.1344ms) to execute
	* 2021-03-10 21:14:42.115392 W | etcdserver: read-only range request "key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" count_only:true " with result "range_response_count:0 size:5" took too long (279.9906ms) to execute
	* 2021-03-10 21:14:42.181051 W | etcdserver: request "header:<ID:11303041234760730187 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-g5v6m\" mod_revision:765 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-g5v6m\" value_size:1365 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-g5v6m\" > >>" with result "size:16" took too long (228.8131ms) to execute
	* 2021-03-10 21:14:42.347104 W | etcdserver: request "header:<ID:11303041234760730188 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-74ff55c5b\" mod_revision:763 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-74ff55c5b\" value_size:3642 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-74ff55c5b\" > >>" with result "size:16" took too long (165.8344ms) to execute
	* 2021-03-10 21:14:49.118895 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:14:51.186194 W | etcdserver: read-only range request "key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" count_only:true " with result "range_response_count:0 size:7" took too long (104.8038ms) to execute
	* 2021-03-10 21:14:54.632892 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (310.0875ms) to execute
	* 2021-03-10 21:14:55.862963 W | etcdserver: request "header:<ID:11303041234760730221 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.17.0.9\" mod_revision:779 > success:<request_put:<key:\"/registry/masterleases/172.17.0.9\" value_size:65 lease:2079669197905954411 >> failure:<request_range:<key:\"/registry/masterleases/172.17.0.9\" > >>" with result "size:16" took too long (255.9282ms) to execute
	* 2021-03-10 21:14:59.470437 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:15:00.654271 W | etcdserver: request "header:<ID:11303041234760730243 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-scheduler-default-k8s-different-port-20210310205202-6496.166b179132e4c88c\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-scheduler-default-k8s-different-port-20210310205202-6496.166b179132e4c88c\" value_size:883 lease:2079669197905954400 >> failure:<>>" with result "size:16" took too long (225.2519ms) to execute
	* 2021-03-10 21:15:05.050835 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:15:06.337380 W | etcdserver: read-only range request "key:\"/registry/minions/default-k8s-different-port-20210310205202-6496\" " with result "range_response_count:1 size:5855" took too long (119.1412ms) to execute
	* 2021-03-10 21:15:09.848877 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:644" took too long (144.8977ms) to execute
	* 2021-03-10 21:15:15.042339 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 
	* ==> kernel <==
	*  21:15:17 up  2:15,  0 users,  load average: 171.15, 162.89, 150.19
	* Linux default-k8s-different-port-20210310205202-6496 4.19.121-linuxkit #1 SMP Tue Dec 1 17:50:32 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [44043b6a8198] <==
	* Trace[1118760236]: ---"Transaction committed" 337ms (21:14:00.898)
	* Trace[1118760236]: [1.384467s] [1.384467s] END
	* I0310 21:14:56.954323       1 trace.go:205] Trace[57777796]: "List etcd3" key:/masterleases/,resourceVersion:0,resourceVersionMatch:NotOlderThan,limit:0,continue: (10-Mar-2021 21:14:56.121) (total time: 832ms):
	* Trace[57777796]: [832.7867ms] [832.7867ms] END
	* I0310 21:15:00.838251       1 trace.go:205] Trace[415278771]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (10-Mar-2021 21:15:00.190) (total time: 647ms):
	* Trace[415278771]: ---"Transaction committed" 447ms (21:15:00.838)
	* Trace[415278771]: [647.1751ms] [647.1751ms] END
	* I0310 21:15:02.035842       1 trace.go:205] Trace[1851298292]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/default-k8s-different-port-20210310205202-6496,user-agent:kubelet/v1.20.2 (linux/amd64) kubernetes/faecb19,client:172.17.0.9 (10-Mar-2021 21:15:01.442) (total time: 593ms):
	* Trace[1851298292]: ---"About to convert to expected version" 358ms (21:15:00.801)
	* Trace[1851298292]: ---"Object stored in database" 234ms (21:15:00.035)
	* Trace[1851298292]: [593.2777ms] [593.2777ms] END
	* I0310 21:15:14.007956       1 trace.go:205] Trace[1523337054]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/default-k8s-different-port-20210310205202-6496,user-agent:kubelet/v1.20.2 (linux/amd64) kubernetes/faecb19,client:172.17.0.9 (10-Mar-2021 21:15:13.493) (total time: 513ms):
	* Trace[1523337054]: ---"About to convert to expected version" 397ms (21:15:00.891)
	* Trace[1523337054]: ---"Object stored in database" 115ms (21:15:00.007)
	* Trace[1523337054]: [513.9611ms] [513.9611ms] END
	* I0310 21:15:17.428954       1 trace.go:205] Trace[1487269418]: "Patch" url:/api/v1/namespaces/kube-system/pods/storage-provisioner/status,user-agent:kubelet/v1.20.2 (linux/amd64) kubernetes/faecb19,client:172.17.0.9 (10-Mar-2021 21:15:15.925) (total time: 1503ms):
	* Trace[1487269418]: ---"Recorded the audit event" 1372ms (21:15:00.297)
	* Trace[1487269418]: ---"About to check admission control" 120ms (21:15:00.418)
	* Trace[1487269418]: [1.5037797s] [1.5037797s] END
	* I0310 21:15:20.390148       1 trace.go:205] Trace[726678933]: "Get" url:/api/v1/namespaces/kube-public,user-agent:kube-apiserver/v1.20.2 (linux/amd64) kubernetes/faecb19,client:127.0.0.1 (10-Mar-2021 21:15:19.511) (total time: 878ms):
	* Trace[726678933]: ---"About to write a response" 878ms (21:15:00.389)
	* Trace[726678933]: [878.8171ms] [878.8171ms] END
	* I0310 21:15:20.397919       1 trace.go:205] Trace[1363333089]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (10-Mar-2021 21:15:19.455) (total time: 942ms):
	* Trace[1363333089]: ---"Transaction committed" 865ms (21:15:00.352)
	* Trace[1363333089]: [942.4613ms] [942.4613ms] END
	* 
	* ==> kube-controller-manager [92f2244695b6] <==
	* 	/usr/local/go/src/net/net.go:182 +0x8e
	* crypto/tls.(*atLeastReader).Read(0xc000d7c920, 0xc000daf8c0, 0x205, 0x205, 0x40, 0x45, 0xc000da9130)
	* 	/usr/local/go/src/crypto/tls/conn.go:779 +0x62
	* bytes.(*Buffer).ReadFrom(0xc00045d780, 0x4d9ed80, 0xc000d7c920, 0x40bd05, 0x3f475a0, 0x464b8a0)
	* 	/usr/local/go/src/bytes/buffer.go:204 +0xb1
	* crypto/tls.(*Conn).readFromUntil(0xc00045d500, 0x4da5040, 0xc00000e9d0, 0x5, 0xc00000e9d0, 0xc000da9238)
	* 	/usr/local/go/src/crypto/tls/conn.go:801 +0xf3
	* crypto/tls.(*Conn).readRecordOrCCS(0xc00045d500, 0xc000da9600, 0x6143d7, 0xc000dce480)
	* 	/usr/local/go/src/crypto/tls/conn.go:608 +0x115
	* crypto/tls.(*Conn).readRecord(...)
	* 	/usr/local/go/src/crypto/tls/conn.go:576
	* crypto/tls.(*Conn).readHandshake(0xc00045d500, 0xc000058000, 0xc000da9768, 0x48e91b, 0x48c4fa)
	* 	/usr/local/go/src/crypto/tls/conn.go:992 +0x6d
	* crypto/tls.(*serverHandshakeStateTLS13).readClientCertificate(0xc000da9aa0, 0x8e6, 0x0)
	* 	/usr/local/go/src/crypto/tls/handshake_server_tls13.go:770 +0x170
	* crypto/tls.(*serverHandshakeStateTLS13).handshake(0xc000da9aa0, 0xc000dac400, 0x0)
	* 	/usr/local/go/src/crypto/tls/handshake_server_tls13.go:71 +0x12a
	* crypto/tls.(*Conn).serverHandshake(0xc00045d500, 0xc000d4ad60, 0xf)
	* 	/usr/local/go/src/crypto/tls/handshake_server.go:50 +0xbc
	* crypto/tls.(*Conn).Handshake(0xc00045d500, 0x0, 0x0)
	* 	/usr/local/go/src/crypto/tls/conn.go:1362 +0xc9
	* net/http.(*conn).serve(0xc000da2320, 0x4e10da0, 0xc001044120)
	* 	/usr/local/go/src/net/http/server.go:1817 +0x1a5
	* created by net/http.(*Server).Serve
	* 	/usr/local/go/src/net/http/server.go:2969 +0x36c
	* 
	* ==> kube-controller-manager [bf37cfa32c85] <==
	* I0310 21:02:23.191614       1 shared_informer.go:247] Caches are synced for attach detach 
	* I0310 21:02:23.258138       1 shared_informer.go:247] Caches are synced for stateful set 
	* I0310 21:02:23.258194       1 shared_informer.go:247] Caches are synced for endpoint 
	* I0310 21:02:23.305391       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	* I0310 21:02:23.351277       1 shared_informer.go:247] Caches are synced for disruption 
	* I0310 21:02:23.374187       1 disruption.go:339] Sending events to api server.
	* I0310 21:02:23.374161       1 shared_informer.go:247] Caches are synced for deployment 
	* I0310 21:02:23.394635       1 shared_informer.go:247] Caches are synced for resource quota 
	* I0310 21:02:23.394684       1 shared_informer.go:247] Caches are synced for resource quota 
	* I0310 21:02:23.504250       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	* I0310 21:02:24.704508       1 range_allocator.go:373] Set node default-k8s-different-port-20210310205202-6496 PodCIDR to [10.244.0.0/24]
	* I0310 21:02:26.031533       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	* I0310 21:02:29.347666       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I0310 21:02:29.389687       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I0310 21:02:29.389720       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* I0310 21:02:32.693450       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	* E0310 21:02:33.459579       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	* I0310 21:02:34.596622       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-j2jg9"
	* I0310 21:02:36.319712       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-ghd59"
	* I0310 21:02:37.223826       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-dqrb4"
	* E0310 21:02:37.264023       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"61730e88-17b1-4e14-b6aa-8324d9c0be38", ResourceVersion:"289", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751006911, loc:(*time.Location)(0x6f31360)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0014008c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0014008e0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v
1.LabelSelector)(0xc001400900), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.
GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0013614c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0014
00920), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolum
eSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001400940), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil
), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.2", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001400980)}}, Resources:v1
.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0011588a0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), Restart
Policy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00106f8f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0005dfdc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runti
meClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00000efc8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00106f958)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	* I0310 21:02:48.126395       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	* I0310 21:02:48.838406       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	* I0310 21:02:50.262753       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-ghd59"
	* I0310 21:03:48.218630       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	* 
	* ==> kube-proxy [6cc1ac0f0822] <==
	* I0310 21:05:45.349979       1 conntrack.go:52] Setting nf_conntrack_max to 131072
	* I0310 21:05:45.350166       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I0310 21:05:45.350251       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I0310 21:05:45.351180       1 config.go:315] Starting service config controller
	* I0310 21:05:45.351241       1 shared_informer.go:240] Waiting for caches to sync for service config
	* I0310 21:05:45.371679       1 config.go:224] Starting endpoint slice config controller
	* I0310 21:05:45.371697       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	* I0310 21:05:45.594139       1 shared_informer.go:247] Caches are synced for service config 
	* I0310 21:05:45.597686       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	* I0310 21:06:47.869870       1 trace.go:205] Trace[1945169110]: "iptables restore" (10-Mar-2021 21:06:45.518) (total time: 2351ms):
	* Trace[1945169110]: [2.3510142s] [2.3510142s] END
	* I0310 21:13:20.822696       1 trace.go:205] Trace[2076198728]: "iptables Monitor CANARY check" (10-Mar-2021 21:13:17.371) (total time: 3451ms):
	* Trace[2076198728]: [3.4511338s] [3.4511338s] END
	* I0310 21:13:56.948661       1 trace.go:205] Trace[1591691340]: "iptables save" (10-Mar-2021 21:13:54.590) (total time: 2358ms):
	* Trace[1591691340]: [2.3583072s] [2.3583072s] END
	* I0310 21:14:00.789358       1 trace.go:205] Trace[991973627]: "iptables restore" (10-Mar-2021 21:13:57.015) (total time: 3773ms):
	* Trace[991973627]: [3.7737781s] [3.7737781s] END
	* I0310 21:14:22.865679       1 trace.go:205] Trace[1870764284]: "iptables save" (10-Mar-2021 21:14:16.697) (total time: 6167ms):
	* Trace[1870764284]: [6.1678652s] [6.1678652s] END
	* I0310 21:14:25.660230       1 trace.go:205] Trace[294614200]: "iptables Monitor CANARY check" (10-Mar-2021 21:14:23.004) (total time: 2655ms):
	* Trace[294614200]: [2.6555067s] [2.6555067s] END
	* I0310 21:15:02.786797       1 trace.go:205] Trace[480409953]: "iptables restore" (10-Mar-2021 21:15:00.105) (total time: 2680ms):
	* Trace[480409953]: [2.6809324s] [2.6809324s] END
	* I0310 21:15:25.164398       1 trace.go:205] Trace[744657260]: "iptables restore" (10-Mar-2021 21:15:23.094) (total time: 2069ms):
	* Trace[744657260]: [2.0694799s] [2.0694799s] END
	* 
	* ==> kube-scheduler [cc170dc9a3a5] <==
	* E0310 21:00:54.930498       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E0310 21:00:54.978355       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E0310 21:00:55.179297       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E0310 21:00:55.467249       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E0310 21:00:55.496015       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E0310 21:00:55.518162       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E0310 21:00:55.997465       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E0310 21:00:56.099435       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E0310 21:00:56.403617       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E0310 21:00:56.953691       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E0310 21:01:02.276457       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E0310 21:01:02.730159       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E0310 21:01:02.737344       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E0310 21:01:04.083495       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E0310 21:01:04.099290       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E0310 21:01:04.109238       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E0310 21:01:04.756195       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E0310 21:01:04.793252       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E0310 21:01:05.070358       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E0310 21:01:05.637368       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E0310 21:01:07.793619       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E0310 21:01:07.977719       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E0310 21:01:19.616631       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* I0310 21:02:07.860900       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* http2: server: error reading preface from client 127.0.0.1:33206: read tcp 127.0.0.1:10259->127.0.0.1:33206: read: connection reset by peer
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-03-10 20:52:50 UTC, end at Wed 2021-03-10 21:16:00 UTC. --
	* Mar 10 21:06:42 default-k8s-different-port-20210310205202-6496 kubelet[3401]: Trace[362336038]: [7.3970518s] [7.3970518s] END
	* Mar 10 21:07:11 default-k8s-different-port-20210310205202-6496 kubelet[3401]: I0310 21:07:11.092085    3401 topology_manager.go:187] [topologymanager] Topology Admit Handler
	* Mar 10 21:07:11 default-k8s-different-port-20210310205202-6496 kubelet[3401]: I0310 21:07:11.256070    3401 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-bxkdg" (UniqueName: "kubernetes.io/secret/5750970b-b6e6-4283-839d-d9eaddeb5c46-storage-provisioner-token-bxkdg") pod "storage-provisioner" (UID: "5750970b-b6e6-4283-839d-d9eaddeb5c46")
	* Mar 10 21:07:11 default-k8s-different-port-20210310205202-6496 kubelet[3401]: I0310 21:07:11.256157    3401 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/5750970b-b6e6-4283-839d-d9eaddeb5c46-tmp") pod "storage-provisioner" (UID: "5750970b-b6e6-4283-839d-d9eaddeb5c46")
	* Mar 10 21:07:39 default-k8s-different-port-20210310205202-6496 kubelet[3401]: W0310 21:07:39.183389    3401 pod_container_deletor.go:79] Container "ea898f38edc882f701139c31ece3e47d70b626cd781545ca904984e34283d2fd" not found in pod's containers
	* Mar 10 21:07:48 default-k8s-different-port-20210310205202-6496 kubelet[3401]: E0310 21:07:48.970973    3401 remote_runtime.go:332] ContainerStatus "af28c036766178fda8a8c02586fbc343f12e3fea03619d61d1a225860811ea29" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: af28c036766178fda8a8c02586fbc343f12e3fea03619d61d1a225860811ea29
	* Mar 10 21:07:48 default-k8s-different-port-20210310205202-6496 kubelet[3401]: E0310 21:07:48.971948    3401 kuberuntime_manager.go:980] getPodContainerStatuses for pod "storage-provisioner_kube-system(5750970b-b6e6-4283-839d-d9eaddeb5c46)" failed: rpc error: code = Unknown desc = Error: No such container: af28c036766178fda8a8c02586fbc343f12e3fea03619d61d1a225860811ea29
	* Mar 10 21:08:24 default-k8s-different-port-20210310205202-6496 kubelet[3401]: W0310 21:08:24.335438    3401 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	* Mar 10 21:08:24 default-k8s-different-port-20210310205202-6496 kubelet[3401]: W0310 21:08:24.431616    3401 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
	* Mar 10 21:09:38 default-k8s-different-port-20210310205202-6496 kubelet[3401]: I0310 21:09:38.743940    3401 trace.go:205] Trace[556330890]: "iptables Monitor CANARY check" (10-Mar-2021 21:09:36.689) (total time: 2054ms):
	* Mar 10 21:09:38 default-k8s-different-port-20210310205202-6496 kubelet[3401]: Trace[556330890]: [2.0541257s] [2.0541257s] END
	* Mar 10 21:12:42 default-k8s-different-port-20210310205202-6496 kubelet[3401]: I0310 21:12:42.448955    3401 trace.go:205] Trace[332987838]: "iptables Monitor CANARY check" (10-Mar-2021 21:12:35.947) (total time: 6501ms):
	* Mar 10 21:12:42 default-k8s-different-port-20210310205202-6496 kubelet[3401]: Trace[332987838]: [6.5014047s] [6.5014047s] END
	* Mar 10 21:13:16 default-k8s-different-port-20210310205202-6496 kubelet[3401]: W0310 21:13:16.089717    3401 container.go:448] Failed to get RecentStats("/kubepods/besteffort/pod5750970b-b6e6-4283-839d-d9eaddeb5c46/af28c036766178fda8a8c02586fbc343f12e3fea03619d61d1a225860811ea29") while determining the next housekeeping: unable to find data in memory cache
	* Mar 10 21:13:23 default-k8s-different-port-20210310205202-6496 kubelet[3401]: E0310 21:13:23.218144    3401 cadvisor_stats_provider.go:401] Partial failure issuing cadvisor.ContainerInfoV2: partial failures: ["/kubepods/besteffort/pod5750970b-b6e6-4283-839d-d9eaddeb5c46/af28c036766178fda8a8c02586fbc343f12e3fea03619d61d1a225860811ea29": RecentStats: unable to find data in memory cache]
	* Mar 10 21:13:27 default-k8s-different-port-20210310205202-6496 kubelet[3401]: W0310 21:13:27.366301    3401 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	* Mar 10 21:13:28 default-k8s-different-port-20210310205202-6496 kubelet[3401]: W0310 21:13:28.147232    3401 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
	* Mar 10 21:13:31 default-k8s-different-port-20210310205202-6496 kubelet[3401]: E0310 21:13:31.186405    3401 controller.go:187] failed to update lease, error: Put "https://control-plane.minikube.internal:8444/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/default-k8s-different-port-20210310205202-6496?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
	* Mar 10 21:13:38 default-k8s-different-port-20210310205202-6496 kubelet[3401]: I0310 21:13:38.586695    3401 trace.go:205] Trace[1786516523]: "iptables Monitor CANARY check" (10-Mar-2021 21:13:34.393) (total time: 4192ms):
	* Mar 10 21:13:38 default-k8s-different-port-20210310205202-6496 kubelet[3401]: Trace[1786516523]: [4.1923744s] [4.1923744s] END
	* Mar 10 21:14:39 default-k8s-different-port-20210310205202-6496 kubelet[3401]: I0310 21:14:39.775535    3401 trace.go:205] Trace[1742634719]: "iptables Monitor CANARY check" (10-Mar-2021 21:14:34.451) (total time: 5324ms):
	* Mar 10 21:14:39 default-k8s-different-port-20210310205202-6496 kubelet[3401]: Trace[1742634719]: [5.3242296s] [5.3242296s] END
	* Mar 10 21:15:15 default-k8s-different-port-20210310205202-6496 kubelet[3401]: I0310 21:15:15.432317    3401 scope.go:95] [topologymanager] RemoveContainer - Container ID: af28c036766178fda8a8c02586fbc343f12e3fea03619d61d1a225860811ea29
	* Mar 10 21:15:44 default-k8s-different-port-20210310205202-6496 kubelet[3401]: I0310 21:15:44.609824    3401 trace.go:205] Trace[994155534]: "iptables Monitor CANARY check" (10-Mar-2021 21:15:38.102) (total time: 6505ms):
	* Mar 10 21:15:44 default-k8s-different-port-20210310205202-6496 kubelet[3401]: Trace[994155534]: [6.5059854s] [6.5059854s] END
	* 
	* ==> storage-provisioner [af28c0367661] <==
	* I0310 21:08:06.853447       1 storage_provisioner.go:115] Initializing the minikube storage provisioner...
	* I0310 21:08:07.338298       1 storage_provisioner.go:140] Storage provisioner initialized, now starting service!
	* I0310 21:08:07.338452       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
	* I0310 21:08:07.922050       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	* I0310 21:08:08.042910       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20210310205202-6496_9d3d4123-3487-44c9-b1ce-132c3a93295b!
	* I0310 21:08:08.043048       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6c108f6b-3b35-40a4-8699-f723e5b7fdae", APIVersion:"v1", ResourceVersion:"586", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-different-port-20210310205202-6496_9d3d4123-3487-44c9-b1ce-132c3a93295b became leader
	* I0310 21:08:08.448124       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20210310205202-6496_9d3d4123-3487-44c9-b1ce-132c3a93295b!
	* I0310 21:12:39.870678       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6c108f6b-3b35-40a4-8699-f723e5b7fdae", APIVersion:"v1", ResourceVersion:"730", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-different-port-20210310205202-6496_9d3d4123-3487-44c9-b1ce-132c3a93295b stopped leading
	* I0310 21:12:40.204731       1 leaderelection.go:288] failed to renew lease kube-system/k8s.io-minikube-hostpath: failed to tryAcquireOrRenew context deadline exceeded
	* F0310 21:12:40.214679       1 controller.go:877] leaderelection lost
	* 
	* ==> Audit <==
	* |---------|-------------------------------------------|-------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| Command |                   Args                    |                  Profile                  |          User           | Version |          Start Time           |           End Time            |
	|---------|-------------------------------------------|-------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                        | offline-docker-20210310201637-6496        | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:34:20 GMT | Wed, 10 Mar 2021 20:34:47 GMT |
	|         | offline-docker-20210310201637-6496        |                                           |                         |         |                               |                               |
	| stop    | -p                                        | kubernetes-upgrade-20210310201637-6496    | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:39:52 GMT | Wed, 10 Mar 2021 20:40:10 GMT |
	|         | kubernetes-upgrade-20210310201637-6496    |                                           |                         |         |                               |                               |
	| start   | -p nospam-20210310201637-6496             | nospam-20210310201637-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:16:38 GMT | Wed, 10 Mar 2021 20:40:39 GMT |
	|         | -n=1 --memory=2250                        |                                           |                         |         |                               |                               |
	|         | --wait=false --driver=docker              |                                           |                         |         |                               |                               |
	| -p      | nospam-20210310201637-6496                | nospam-20210310201637-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:41:42 GMT | Wed, 10 Mar 2021 20:44:25 GMT |
	|         | logs -n 25                                |                                           |                         |         |                               |                               |
	| delete  | -p nospam-20210310201637-6496             | nospam-20210310201637-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:44:37 GMT | Wed, 10 Mar 2021 20:44:59 GMT |
	| -p      | docker-flags-20210310201637-6496          | docker-flags-20210310201637-6496          | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:47:18 GMT | Wed, 10 Mar 2021 20:49:03 GMT |
	|         | logs -n 25                                |                                           |                         |         |                               |                               |
	| delete  | -p                                        | docker-flags-20210310201637-6496          | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:49:21 GMT | Wed, 10 Mar 2021 20:49:47 GMT |
	|         | docker-flags-20210310201637-6496          |                                           |                         |         |                               |                               |
	| delete  | -p                                        | force-systemd-env-20210310201637-6496     | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:49:41 GMT | Wed, 10 Mar 2021 20:50:17 GMT |
	|         | force-systemd-env-20210310201637-6496     |                                           |                         |         |                               |                               |
	| -p      | cert-options-20210310203249-6496          | cert-options-20210310203249-6496          | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:50:36 GMT | Wed, 10 Mar 2021 20:50:43 GMT |
	|         | ssh openssl x509 -text -noout -in         |                                           |                         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt     |                                           |                         |         |                               |                               |
	| delete  | -p                                        | cert-options-20210310203249-6496          | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:51:10 GMT | Wed, 10 Mar 2021 20:51:56 GMT |
	|         | cert-options-20210310203249-6496          |                                           |                         |         |                               |                               |
	| delete  | -p                                        | disable-driver-mounts-20210310205156-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:51:57 GMT | Wed, 10 Mar 2021 20:52:02 GMT |
	|         | disable-driver-mounts-20210310205156-6496 |                                           |                         |         |                               |                               |
	| -p      | force-systemd-flag-20210310203447-6496    | force-systemd-flag-20210310203447-6496    | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:53:03 GMT | Wed, 10 Mar 2021 20:53:44 GMT |
	|         | ssh docker info --format                  |                                           |                         |         |                               |                               |
	|         |                          |                                           |                         |         |                               |                               |
	| delete  | -p                                        | force-systemd-flag-20210310203447-6496    | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:54:07 GMT | Wed, 10 Mar 2021 20:54:36 GMT |
	|         | force-systemd-flag-20210310203447-6496    |                                           |                         |         |                               |                               |
	| stop    | -p                                        | old-k8s-version-20210310204459-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:02:19 GMT | Wed, 10 Mar 2021 21:02:40 GMT |
	|         | old-k8s-version-20210310204459-6496       |                                           |                         |         |                               |                               |
	|         | --alsologtostderr -v=3                    |                                           |                         |         |                               |                               |
	| addons  | enable dashboard -p                       | old-k8s-version-20210310204459-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:02:42 GMT | Wed, 10 Mar 2021 21:02:42 GMT |
	|         | old-k8s-version-20210310204459-6496       |                                           |                         |         |                               |                               |
	| -p      | embed-certs-20210310205017-6496           | embed-certs-20210310205017-6496           | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:07:05 GMT | Wed, 10 Mar 2021 21:08:33 GMT |
	|         | logs -n 25                                |                                           |                         |         |                               |                               |
	| start   | -p                                        | stopped-upgrade-20210310201637-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:52:21 GMT | Wed, 10 Mar 2021 21:09:23 GMT |
	|         | stopped-upgrade-20210310201637-6496       |                                           |                         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr           |                                           |                         |         |                               |                               |
	|         | -v=1 --driver=docker                      |                                           |                         |         |                               |                               |
	| logs    | -p                                        | stopped-upgrade-20210310201637-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:09:23 GMT | Wed, 10 Mar 2021 21:10:51 GMT |
	|         | stopped-upgrade-20210310201637-6496       |                                           |                         |         |                               |                               |
	| delete  | -p                                        | stopped-upgrade-20210310201637-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:10:52 GMT | Wed, 10 Mar 2021 21:11:13 GMT |
	|         | stopped-upgrade-20210310201637-6496       |                                           |                         |         |                               |                               |
	| delete  | -p                                        | running-upgrade-20210310201637-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:11:45 GMT | Wed, 10 Mar 2021 21:12:11 GMT |
	|         | running-upgrade-20210310201637-6496       |                                           |                         |         |                               |                               |
	| stop    | -p                                        | embed-certs-20210310205017-6496           | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:12:03 GMT | Wed, 10 Mar 2021 21:12:38 GMT |
	|         | embed-certs-20210310205017-6496           |                                           |                         |         |                               |                               |
	|         | --alsologtostderr -v=3                    |                                           |                         |         |                               |                               |
	| addons  | enable dashboard -p                       | embed-certs-20210310205017-6496           | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:12:40 GMT | Wed, 10 Mar 2021 21:12:41 GMT |
	|         | embed-certs-20210310205017-6496           |                                           |                         |         |                               |                               |
	| -p      | kubernetes-upgrade-20210310201637-6496    | kubernetes-upgrade-20210310201637-6496    | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:11:50 GMT | Wed, 10 Mar 2021 21:15:02 GMT |
	|         | logs -n 25                                |                                           |                         |         |                               |                               |
	| delete  | -p                                        | kubernetes-upgrade-20210310201637-6496    | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:15:15 GMT | Wed, 10 Mar 2021 21:15:46 GMT |
	|         | kubernetes-upgrade-20210310201637-6496    |                                           |                         |         |                               |                               |
	| delete  | -p                                        | missing-upgrade-20210310201637-6496       | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:15:38 GMT | Wed, 10 Mar 2021 21:16:03 GMT |
	|         | missing-upgrade-20210310201637-6496       |                                           |                         |         |                               |                               |
	|---------|-------------------------------------------|-------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/03/10 21:16:04
	* Running on machine: windows-server-1
	* Binary: Built with gc go1.16 for windows/amd64
	* Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	* I0310 21:16:04.094174   16712 out.go:239] Setting OutFile to fd 3016 ...
	* I0310 21:16:04.095160   16712 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 21:16:04.096166   16712 out.go:252] Setting ErrFile to fd 2412...
	* I0310 21:16:04.096166   16712 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 21:16:04.121385   16712 out.go:246] Setting JSON to false
	* I0310 21:16:04.130396   16712 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":36430,"bootTime":1615374534,"procs":120,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	* W0310 21:16:04.130396   16712 start.go:116] gopshost.Virtualization returned error: not implemented yet
	* I0310 21:16:04.137430   16712 out.go:129] * [calico-20210310211603-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	* I0310 21:16:04.140390   16712 out.go:129]   - MINIKUBE_LOCATION=10722
	* I0310 21:16:04.144396   16712 driver.go:323] Setting default libvirt URI to qemu:///system
	* I0310 21:16:04.747669   16712 docker.go:119] docker version: linux-20.10.2
	* I0310 21:16:04.749368   16712 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 21:16:05.798294   16712 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0489272s)
	* I0310 21:16:05.799370   16712 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:7 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:96 OomKillDisable:true NGoroutines:93 SystemTime:2021-03-10 21:16:05.3171545 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 21:16:06.277847    7648 cli_runner.go:168] Completed: docker run --rm --name cilium-20210310211546-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20210310211546-6496 --entrypoint /usr/bin/test -v cilium-20210310211546-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib: (6.1650083s)
	* I0310 21:16:06.277847    7648 oci.go:106] Successfully prepared a docker volume cilium-20210310211546-6496
	* I0310 21:16:06.278103    7648 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 21:16:06.278713    7648 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 21:16:06.278713    7648 kic.go:175] Starting extracting preloaded images to volume ...
	* I0310 21:16:06.292986    7648 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20210310211546-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir
	* I0310 21:16:02.059255   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\false-20210310211211-6496\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	* I0310 21:16:02.495768   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\false-20210310211211-6496\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	* I0310 21:16:03.129487   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\false-20210310211211-6496\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	* I0310 21:16:03.665827   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	* I0310 21:16:04.116710   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	* I0310 21:16:04.453956   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	* I0310 21:16:04.904573   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	* I0310 21:16:05.213723   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5700.pem --> /usr/share/ca-certificates/5700.pem (1338 bytes)
	* I0310 21:16:05.558180   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7432.pem --> /usr/share/ca-certificates/7432.pem (1338 bytes)
	* I0310 21:16:05.919639   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	* I0310 21:16:06.185722   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3056.pem --> /usr/share/ca-certificates/3056.pem (1338 bytes)
	* I0310 21:16:06.509047   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\12056.pem --> /usr/share/ca-certificates/12056.pem (1338 bytes)
	* I0310 21:16:05.803731   16712 out.go:129] * Using the docker driver based on user configuration
	* I0310 21:16:05.803986   16712 start.go:276] selected driver: docker
	* I0310 21:16:05.803986   16712 start.go:718] validating driver "docker" against <nil>
	* I0310 21:16:05.803986   16712 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	* I0310 21:16:06.904878   16712 out.go:129] 
	* W0310 21:16:06.905436   16712 out.go:191] X Docker Desktop only has 20001MiB available, you may encounter application deployment failures.
	* W0310 21:16:06.914424   16712 out.go:191] * Suggestion: 
	* 
	*     1. Open the "Docker Desktop" menu by clicking the Docker icon in the system tray
	*     2. Click "Settings"
	*     3. Click "Resources"
	*     4. Increase "Memory" slider bar to 2.25 GB or higher
	*     5. Click "Apply & Restart"
	* W0310 21:16:06.915168   16712 out.go:191] * Documentation: https://docs.docker.com/docker-for-windows/#resources
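	(Editor's note: the 20001 MiB figure in the warning above can be confirmed from the command line instead of the Docker Desktop GUI. A minimal sketch, assuming the `MemTotal` value 20973547520 taken from the `docker info` dump in this log; on a live host you would read it with `docker system info --format '{{.MemTotal}}'`.)

```shell
# CLI alternative to the GUI steps above: convert Docker's reported
# total memory (bytes) into the MiB figure minikube prints.
# 20973547520 is taken from the docker info output in this report;
# replace it with:  mem_bytes=$(docker system info --format '{{.MemTotal}}')
mem_bytes=20973547520
mem_mib=$((mem_bytes / 1024 / 1024))
echo "${mem_mib} MiB"   # 20001 MiB, matching the warning above
```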
	* I0310 21:16:06.576487   18752 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492
	* I0310 21:16:06.594836   18752 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492
	* I0310 21:16:06.917963   16712 out.go:129] 
	* I0310 21:16:06.931988   16712 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 21:16:08.067048   16712 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.1347721s)
	* I0310 21:16:08.067330   16712 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:7 ContainersRunning:7 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:86 OomKillDisable:true NGoroutines:70 SystemTime:2021-03-10 21:16:07.5796165 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 21:16:08.068488   16712 start_flags.go:253] no existing cluster config was found, will generate one from the flags 
	* I0310 21:16:08.069159   16712 start_flags.go:717] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	* I0310 21:16:08.069394   16712 cni.go:74] Creating CNI manager for "calico"
	* I0310 21:16:08.069394   16712 start_flags.go:393] Found "Calico" CNI - setting NetworkPlugin=cni
	* I0310 21:16:08.069631   16712 start_flags.go:398] config:
	* {Name:calico-20210310211603-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:calico-20210310211603-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 21:16:08.083543   16712 out.go:129] * Starting control plane node calico-20210310211603-6496 in cluster calico-20210310211603-6496
	* I0310 21:16:08.812264   16712 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	* I0310 21:16:08.812452   16712 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	* I0310 21:16:08.812969   16712 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 21:16:08.813426   16712 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 21:16:08.813426   16712 cache.go:54] Caching tarball of preloaded images
	* I0310 21:16:08.813733   16712 preload.go:131] Found C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	* I0310 21:16:08.813733   16712 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on docker
	* I0310 21:16:08.814441   16712 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\config.json ...
	* I0310 21:16:08.821390   16712 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\config.json: {Name:mk1c8688c88a19465c9b0008d3a56d112c3e6ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 21:16:08.852886   16712 cache.go:185] Successfully downloaded all kic artifacts
	* I0310 21:16:08.852886   16712 start.go:313] acquiring machines lock for calico-20210310211603-6496: {Name:mk2346628300a1712deed80d8b7784c1fe0ad049 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:16:08.852886   16712 start.go:317] acquired machines lock for "calico-20210310211603-6496" in 0s
	* I0310 21:16:08.853943   16712 start.go:89] Provisioning new machine with config: &{Name:calico-20210310211603-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:calico-20210310211603-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
	* I0310 21:16:08.853943   16712 start.go:126] createHost starting for "" (driver="docker")
	* I0310 21:16:04.034117   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1156.pem && ln -fs /usr/share/ca-certificates/1156.pem /etc/ssl/certs/1156.pem"
	* I0310 21:16:04.160652   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1156.pem
	* I0310 21:16:04.197292   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 00:26 /usr/share/ca-certificates/1156.pem
	* I0310 21:16:04.208963   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1156.pem
	* I0310 21:16:04.322723   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1156.pem /etc/ssl/certs/51391683.0"
	* I0310 21:16:04.495787   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10992.pem && ln -fs /usr/share/ca-certificates/10992.pem /etc/ssl/certs/10992.pem"
	* I0310 21:16:04.597070   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/10992.pem
	* I0310 21:16:04.638624   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 21:44 /usr/share/ca-certificates/10992.pem
	* I0310 21:16:04.648464   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10992.pem
	* I0310 21:16:04.709121   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10992.pem /etc/ssl/certs/51391683.0"
	* I0310 21:16:04.807822   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9520.pem && ln -fs /usr/share/ca-certificates/9520.pem /etc/ssl/certs/9520.pem"
	* I0310 21:16:04.960553   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9520.pem
	* I0310 21:16:05.052887   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 14:54 /usr/share/ca-certificates/9520.pem
	* I0310 21:16:05.069353   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9520.pem
	* I0310 21:16:05.117631   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9520.pem /etc/ssl/certs/51391683.0"
	* I0310 21:16:05.257032   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6496.pem && ln -fs /usr/share/ca-certificates/6496.pem /etc/ssl/certs/6496.pem"
	* I0310 21:16:05.357085   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6496.pem
	* I0310 21:16:05.388289   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 19:16 /usr/share/ca-certificates/6496.pem
	* I0310 21:16:05.396377   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6496.pem
	* I0310 21:16:05.495980   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6496.pem /etc/ssl/certs/51391683.0"
	* I0310 21:16:05.580342   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1728.pem && ln -fs /usr/share/ca-certificates/1728.pem /etc/ssl/certs/1728.pem"
	* I0310 21:16:05.694223   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1728.pem
	* I0310 21:16:05.829208   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 20:57 /usr/share/ca-certificates/1728.pem
	* I0310 21:16:05.842568   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1728.pem
	* I0310 21:16:05.912387   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0"
	* I0310 21:16:06.009082   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6856.pem && ln -fs /usr/share/ca-certificates/6856.pem /etc/ssl/certs/6856.pem"
	* I0310 21:16:06.231222   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6856.pem
	* I0310 21:16:06.279935   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 00:21 /usr/share/ca-certificates/6856.pem
	* I0310 21:16:06.295080   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6856.pem
	* I0310 21:16:06.416706   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6856.pem /etc/ssl/certs/51391683.0"
	* I0310 21:16:06.509256   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7432.pem && ln -fs /usr/share/ca-certificates/7432.pem /etc/ssl/certs/7432.pem"
	* I0310 21:16:06.728026   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7432.pem
	* I0310 21:16:06.794469   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 17:58 /usr/share/ca-certificates/7432.pem
	* I0310 21:16:06.805268   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7432.pem
	* I0310 21:16:06.957950   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7432.pem /etc/ssl/certs/51391683.0"
	* I0310 21:16:07.133024   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1140.pem && ln -fs /usr/share/ca-certificates/1140.pem /etc/ssl/certs/1140.pem"
	* I0310 21:16:07.336315   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1140.pem
	* I0310 21:16:07.390010   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 02:25 /usr/share/ca-certificates/1140.pem
	* I0310 21:16:07.413693   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1140.pem
	* I0310 21:16:07.504557   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1140.pem /etc/ssl/certs/51391683.0"
	* I0310 21:16:07.606289   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5396.pem && ln -fs /usr/share/ca-certificates/5396.pem /etc/ssl/certs/5396.pem"
	* I0310 21:16:07.805772   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5396.pem
	* I0310 21:16:07.859263   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  8 23:38 /usr/share/ca-certificates/5396.pem
	* I0310 21:16:07.887804   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5396.pem
	* I0310 21:16:07.990591   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5396.pem /etc/ssl/certs/51391683.0"
	* I0310 21:16:08.165452   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7440.pem && ln -fs /usr/share/ca-certificates/7440.pem /etc/ssl/certs/7440.pem"
	* I0310 21:16:08.428919   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7440.pem
	* I0310 21:16:08.463967   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 13 14:39 /usr/share/ca-certificates/7440.pem
	* I0310 21:16:08.485728   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7440.pem
	* I0310 21:16:08.615919   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7440.pem /etc/ssl/certs/51391683.0"
	* I0310 21:16:08.748057   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5700.pem && ln -fs /usr/share/ca-certificates/5700.pem /etc/ssl/certs/5700.pem"
	* I0310 21:16:08.863881   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5700.pem
	* I0310 21:16:08.909622   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  1 19:58 /usr/share/ca-certificates/5700.pem
	* I0310 21:16:08.933499   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5700.pem
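[Editor's note] The repeated ssh_runner commands above are minikube's CA-certificate install loop on the guest: test for each .pem under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it into /etc/ssl/certs as <hash>.0. A minimal local sketch of that pattern, using a throwaway self-signed certificate and a temp directory in place of the guest paths (all names here are illustrative, not from this run):

```shell
# Sketch of the cert-install pattern in the log above; a throwaway
# self-signed cert and a temp dir stand in for the guest's
# /usr/share/ca-certificates and /etc/ssl/certs.
set -eu
work=$(mktemp -d)
cert="$work/demo.pem"   # stands in for e.g. /usr/share/ca-certificates/6856.pem
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout "$work/demo.key" -out "$cert" -days 1 2>/dev/null
# Same hash computation as the "openssl x509 -hash -noout -in ..." lines above:
hash=$(openssl x509 -hash -noout -in "$cert")
# Same link pattern as: test -L <hash>.0 || ln -fs <cert>.pem <hash>.0
test -L "$work/$hash.0" || ln -fs "$cert" "$work/$hash.0"
echo "linked as $hash.0"
```

This is why the log keeps linking different .pem files to the same 51391683.0 slot: certs with identical subjects hash to the same 8-hex-digit name.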
	* I0310 21:16:06.300744    7648 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* W0310 21:16:07.044719    7648 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20210310211546-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	* I0310 21:16:07.045656    7648 kic.go:182] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20210310211546-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	* stdout:
	* 
	* stderr:
	* docker: Error response from daemon: status code not OK but 500: [binary-serialized System.Exception] The notification platform is unavailable.
	* 
	* The notification platform is unavailable.
	* 
	*    at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)
	*    at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.WPF\PromptShareDirectory.cs:line 53
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__8.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 95
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 55
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.HttpApi\Controllers\FilesharingController.cs:line 21
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()
	* [trailing .NET exception serialization data omitted; recoverable fields: method Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(System.String) in Windows.UI (ContentType=WindowsRuntime); RestrictedDescription: "The notification platform is unavailable."]
	* See 'docker run --help'.
	* I0310 21:16:07.420839    7648 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.1182531s)
	* I0310 21:16:07.422136    7648 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:7 ContainersRunning:7 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:86 OomKillDisable:true NGoroutines:70 SystemTime:2021-03-10 21:16:06.8952943 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 21:16:07.432724    7648 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	* I0310 21:16:08.605487    7648 cli_runner.go:168] Completed: docker info --format "'{{json .SecurityOptions}}'": (1.1718812s)
	* I0310 21:16:08.618106    7648 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20210310211546-6496 --name cilium-20210310211546-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20210310211546-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20210310211546-6496 --volume cilium-20210310211546-6496:/var --security-opt apparmor=unconfined --memory=1800mb --memory-swap=1800mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
	* I0310 21:16:07.423737   18444 retry.go:31] will retry after 14.635568968s: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:16:06.933993   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1728.pem --> /usr/share/ca-certificates/1728.pem (1338 bytes)
	* I0310 21:16:07.390010   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9088.pem --> /usr/share/ca-certificates/9088.pem (1338 bytes)
	* I0310 21:16:07.711811   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7024.pem --> /usr/share/ca-certificates/7024.pem (1338 bytes)
	* I0310 21:16:08.175162   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4588.pem --> /usr/share/ca-certificates/4588.pem (1338 bytes)
	* I0310 21:16:08.601497   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5172.pem --> /usr/share/ca-certificates/5172.pem (1338 bytes)
	* I0310 21:16:09.316059   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6368.pem --> /usr/share/ca-certificates/6368.pem (1338 bytes)
	* I0310 21:16:09.728086   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4452.pem --> /usr/share/ca-certificates/4452.pem (1338 bytes)
	* I0310 21:16:10.141482   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7440.pem --> /usr/share/ca-certificates/7440.pem (1338 bytes)
	* I0310 21:16:10.563634   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6856.pem --> /usr/share/ca-certificates/6856.pem (1338 bytes)
	* I0310 21:16:10.897058   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9520.pem --> /usr/share/ca-certificates/9520.pem (1338 bytes)
	* I0310 21:16:11.496042   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1984.pem --> /usr/share/ca-certificates/1984.pem (1338 bytes)
	* I0310 21:16:11.804386   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3516.pem --> /usr/share/ca-certificates/3516.pem (1338 bytes)
	* I0310 21:16:08.863881   16712 out.go:150] * Creating docker container (CPUs=2, Memory=1800MB) ...
	* I0310 21:16:08.864874   16712 start.go:160] libmachine.API.Create for "calico-20210310211603-6496" (driver="docker")
	* I0310 21:16:08.865899   16712 client.go:168] LocalClient.Create starting
	* I0310 21:16:08.865899   16712 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\ca.pem
	* I0310 21:16:08.865899   16712 main.go:121] libmachine: Decoding PEM data...
	* I0310 21:16:08.865899   16712 main.go:121] libmachine: Parsing certificate...
	* I0310 21:16:08.866892   16712 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\cert.pem
	* I0310 21:16:08.866892   16712 main.go:121] libmachine: Decoding PEM data...
	* I0310 21:16:08.866892   16712 main.go:121] libmachine: Parsing certificate...
	* I0310 21:16:08.909622   16712 cli_runner.go:115] Run: docker network inspect calico-20210310211603-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	* W0310 21:16:09.559483   16712 cli_runner.go:162] docker network inspect calico-20210310211603-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	* I0310 21:16:09.570769   16712 network_create.go:240] running [docker network inspect calico-20210310211603-6496] to gather additional debugging logs...
	* I0310 21:16:09.570769   16712 cli_runner.go:115] Run: docker network inspect calico-20210310211603-6496
	* W0310 21:16:10.240710   16712 cli_runner.go:162] docker network inspect calico-20210310211603-6496 returned with exit code 1
	* I0310 21:16:10.241008   16712 network_create.go:243] error running [docker network inspect calico-20210310211603-6496]: docker network inspect calico-20210310211603-6496: exit status 1
	* stdout:
	* []
	* 
	* stderr:
	* Error: No such network: calico-20210310211603-6496
	* I0310 21:16:10.241008   16712 network_create.go:245] output of [docker network inspect calico-20210310211603-6496]: -- stdout --
	* []
	* 
	* -- /stdout --
	* ** stderr ** 
	* Error: No such network: calico-20210310211603-6496
	* 
	* ** /stderr **
	* I0310 21:16:10.250189   16712 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	* I0310 21:16:10.932483   16712 network.go:193] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	* I0310 21:16:10.932483   16712 network_create.go:91] attempt to create network 192.168.49.0/24 with subnet: calico-20210310211603-6496 and gateway 192.168.49.1 and MTU of 1500 ...
	* I0310 21:16:10.939478   16712 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20210310211603-6496
	* W0310 21:16:11.539763   16712 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20210310211603-6496 returned with exit code 1
	* W0310 21:16:11.540218   16712 out.go:191] ! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts
	* I0310 21:16:11.557810   16712 cli_runner.go:115] Run: docker ps -a --format 
	* I0310 21:16:12.257623   16712 cli_runner.go:115] Run: docker volume create calico-20210310211603-6496 --label name.minikube.sigs.k8s.io=calico-20210310211603-6496 --label created_by.minikube.sigs.k8s.io=true
	* I0310 21:16:12.879927   16712 oci.go:102] Successfully created a docker volume calico-20210310211603-6496
	* I0310 21:16:12.911970   16712 cli_runner.go:115] Run: docker run --rm --name calico-20210310211603-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20210310211603-6496 --entrypoint /usr/bin/test -v calico-20210310211603-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib
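[Editor's note] The exit-125 failure earlier in this section and the sidecar run above both revolve around extracting an lz4-compressed preload tarball into a named docker volume mounted at /var. A local sketch of just the extraction step, with gzip swapped in for lz4 and a temp directory standing in for the docker volume so it runs without Docker (file names are illustrative):

```shell
# Sketch of the preload-extraction step: build a tiny compressed tarball,
# then extract it the way the kicbase sidecar does. gzip replaces lz4 and
# a temp dir replaces the docker volume so this runs anywhere.
set -eu
work=$(mktemp -d)
mkdir -p "$work/var/lib/demo"
echo "image-layer" > "$work/var/lib/demo/layer.txt"
tar -C "$work" -czf "$work/preloaded.tar.gz" var
out="$work/extractDir"; mkdir -p "$out"
# Log's variant (inside the container): /usr/bin/tar -I lz4 -xf /preloaded.tar -C /extractDir
tar -xzf "$work/preloaded.tar.gz" -C "$out"
test -f "$out/var/lib/demo/layer.txt" && echo "extract ok"
```

In the failing run the extraction never got this far: Docker Desktop's file-sharing prompt (the ToastNotifier exception above) blocked the host-path mount of the tarball, so tar exited 125 before touching the volume.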
	* I0310 21:16:09.013439   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5700.pem /etc/ssl/certs/51391683.0"
	* I0310 21:16:09.259861   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/800.pem && ln -fs /usr/share/ca-certificates/800.pem /etc/ssl/certs/800.pem"
	* I0310 21:16:09.396061   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/800.pem
	* I0310 21:16:09.478506   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 24 01:48 /usr/share/ca-certificates/800.pem
	* I0310 21:16:09.505179   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/800.pem
	* I0310 21:16:09.701881   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/800.pem /etc/ssl/certs/51391683.0"
	* I0310 21:16:09.845588   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6368.pem && ln -fs /usr/share/ca-certificates/6368.pem /etc/ssl/certs/6368.pem"
	* I0310 21:16:10.029990   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6368.pem
	* I0310 21:16:10.063892   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 19:11 /usr/share/ca-certificates/6368.pem
	* I0310 21:16:10.068474   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6368.pem
	* I0310 21:16:10.144828   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6368.pem /etc/ssl/certs/51391683.0"
	* I0310 21:16:10.246206   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4588.pem && ln -fs /usr/share/ca-certificates/4588.pem /etc/ssl/certs/4588.pem"
	* I0310 21:16:10.430189   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4588.pem
	* I0310 21:16:10.529281   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  3 21:41 /usr/share/ca-certificates/4588.pem
	* I0310 21:16:10.549459   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4588.pem
	* I0310 21:16:10.786776   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4588.pem /etc/ssl/certs/51391683.0"
	* I0310 21:16:10.945484   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12056.pem && ln -fs /usr/share/ca-certificates/12056.pem /etc/ssl/certs/12056.pem"
	* I0310 21:16:11.189634   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/12056.pem
	* I0310 21:16:11.227628   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  6 07:21 /usr/share/ca-certificates/12056.pem
	* I0310 21:16:11.238518   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12056.pem
	* I0310 21:16:11.313046   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12056.pem /etc/ssl/certs/51391683.0"
	* I0310 21:16:11.407698   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5172.pem && ln -fs /usr/share/ca-certificates/5172.pem /etc/ssl/certs/5172.pem"
	* I0310 21:16:11.504195   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5172.pem
	* I0310 21:16:11.576251   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 26 21:25 /usr/share/ca-certificates/5172.pem
	* I0310 21:16:11.589657   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5172.pem
	* I0310 21:16:11.676832   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5172.pem /etc/ssl/certs/51391683.0"
	* I0310 21:16:11.751309   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5372.pem && ln -fs /usr/share/ca-certificates/5372.pem /etc/ssl/certs/5372.pem"
	* I0310 21:16:11.841686   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5372.pem
	* I0310 21:16:11.882570   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 23 00:40 /usr/share/ca-certificates/5372.pem
	* I0310 21:16:11.899935   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5372.pem
	* I0310 21:16:11.966683   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5372.pem /etc/ssl/certs/51391683.0"
	* I0310 21:16:12.138439   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7024.pem && ln -fs /usr/share/ca-certificates/7024.pem /etc/ssl/certs/7024.pem"
	* I0310 21:16:12.265436   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7024.pem
	* I0310 21:16:12.315270   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 23:11 /usr/share/ca-certificates/7024.pem
	* I0310 21:16:12.340339   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7024.pem
	* I0310 21:16:12.387481   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7024.pem /etc/ssl/certs/51391683.0"
	* I0310 21:16:12.521006   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5736.pem && ln -fs /usr/share/ca-certificates/5736.pem /etc/ssl/certs/5736.pem"
	* I0310 21:16:12.636982   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5736.pem
	* I0310 21:16:12.708270   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 25 23:18 /usr/share/ca-certificates/5736.pem
	* I0310 21:16:12.719757   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5736.pem
	* I0310 21:16:12.872120   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5736.pem /etc/ssl/certs/51391683.0"
	* I0310 21:16:12.992830   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/352.pem && ln -fs /usr/share/ca-certificates/352.pem /etc/ssl/certs/352.pem"
	* I0310 21:16:13.238543   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/352.pem
	* I0310 21:16:13.271069   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 12 14:51 /usr/share/ca-certificates/352.pem
	* I0310 21:16:13.283324   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/352.pem
	* I0310 21:16:13.428594   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/352.pem /etc/ssl/certs/51391683.0"
	* I0310 21:16:13.558468   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1984.pem && ln -fs /usr/share/ca-certificates/1984.pem /etc/ssl/certs/1984.pem"
	* I0310 21:16:13.713262   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1984.pem
	* I0310 21:16:13.829900   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 21:55 /usr/share/ca-certificates/1984.pem
	* I0310 21:16:13.851374   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1984.pem
	* I0310 21:16:13.898903   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1984.pem /etc/ssl/certs/51391683.0"

                                                
                                                
-- /stdout --
** stderr ** 
	E0310 21:15:16.547672   20736 out.go:340] unable to execute * 2021-03-10 21:14:21.517672 W | etcdserver: request "header:<ID:11303041234760730131 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.17.0.9\" mod_revision:769 > success:<request_put:<key:\"/registry/masterleases/172.17.0.9\" value_size:65 lease:2079669197905954321 >> failure:<request_range:<key:\"/registry/masterleases/172.17.0.9\" > >>" with result "size:16" took too long (577.3441ms) to execute
	: html/template:* 2021-03-10 21:14:21.517672 W | etcdserver: request "header:<ID:11303041234760730131 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.17.0.9\" mod_revision:769 > success:<request_put:<key:\"/registry/masterleases/172.17.0.9\" value_size:65 lease:2079669197905954321 >> failure:<request_range:<key:\"/registry/masterleases/172.17.0.9\" > >>" with result "size:16" took too long (577.3441ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 21:15:16.588823   20736 out.go:340] unable to execute * 2021-03-10 21:14:21.956599 W | etcdserver: request "header:<ID:11303041234760730133 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-different-port-20210310205202-6496\" mod_revision:753 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-different-port-20210310205202-6496\" value_size:7191 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-different-port-20210310205202-6496\" > >>" with result "size:16" took too long (369.6447ms) to execute
	: html/template:* 2021-03-10 21:14:21.956599 W | etcdserver: request "header:<ID:11303041234760730133 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-different-port-20210310205202-6496\" mod_revision:753 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-different-port-20210310205202-6496\" value_size:7191 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-different-port-20210310205202-6496\" > >>" with result "size:16" took too long (369.6447ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 21:15:16.617667   20736 out.go:340] unable to execute * 2021-03-10 21:14:41.793250 W | etcdserver: request "header:<ID:11303041234760730185 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.17.0.9\" mod_revision:774 > success:<request_put:<key:\"/registry/masterleases/172.17.0.9\" value_size:65 lease:2079669197905954373 >> failure:<request_range:<key:\"/registry/masterleases/172.17.0.9\" > >>" with result "size:16" took too long (350.1344ms) to execute
	: html/template:* 2021-03-10 21:14:41.793250 W | etcdserver: request "header:<ID:11303041234760730185 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.17.0.9\" mod_revision:774 > success:<request_put:<key:\"/registry/masterleases/172.17.0.9\" value_size:65 lease:2079669197905954373 >> failure:<request_range:<key:\"/registry/masterleases/172.17.0.9\" > >>" with result "size:16" took too long (350.1344ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 21:15:16.636022   20736 out.go:340] unable to execute * 2021-03-10 21:14:42.181051 W | etcdserver: request "header:<ID:11303041234760730187 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-g5v6m\" mod_revision:765 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-g5v6m\" value_size:1365 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-g5v6m\" > >>" with result "size:16" took too long (228.8131ms) to execute
	: html/template:* 2021-03-10 21:14:42.181051 W | etcdserver: request "header:<ID:11303041234760730187 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-g5v6m\" mod_revision:765 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-g5v6m\" value_size:1365 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-g5v6m\" > >>" with result "size:16" took too long (228.8131ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 21:15:16.645047   20736 out.go:340] unable to execute * 2021-03-10 21:14:42.347104 W | etcdserver: request "header:<ID:11303041234760730188 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-74ff55c5b\" mod_revision:763 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-74ff55c5b\" value_size:3642 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-74ff55c5b\" > >>" with result "size:16" took too long (165.8344ms) to execute
	: html/template:* 2021-03-10 21:14:42.347104 W | etcdserver: request "header:<ID:11303041234760730188 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-74ff55c5b\" mod_revision:763 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-74ff55c5b\" value_size:3642 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-74ff55c5b\" > >>" with result "size:16" took too long (165.8344ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 21:15:16.664119   20736 out.go:340] unable to execute * 2021-03-10 21:14:55.862963 W | etcdserver: request "header:<ID:11303041234760730221 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.17.0.9\" mod_revision:779 > success:<request_put:<key:\"/registry/masterleases/172.17.0.9\" value_size:65 lease:2079669197905954411 >> failure:<request_range:<key:\"/registry/masterleases/172.17.0.9\" > >>" with result "size:16" took too long (255.9282ms) to execute
	: html/template:* 2021-03-10 21:14:55.862963 W | etcdserver: request "header:<ID:11303041234760730221 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.17.0.9\" mod_revision:779 > success:<request_put:<key:\"/registry/masterleases/172.17.0.9\" value_size:65 lease:2079669197905954411 >> failure:<request_range:<key:\"/registry/masterleases/172.17.0.9\" > >>" with result "size:16" took too long (255.9282ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 21:15:16.675121   20736 out.go:340] unable to execute * 2021-03-10 21:15:00.654271 W | etcdserver: request "header:<ID:11303041234760730243 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-scheduler-default-k8s-different-port-20210310205202-6496.166b179132e4c88c\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-scheduler-default-k8s-different-port-20210310205202-6496.166b179132e4c88c\" value_size:883 lease:2079669197905954400 >> failure:<>>" with result "size:16" took too long (225.2519ms) to execute
	: html/template:* 2021-03-10 21:15:00.654271 W | etcdserver: request "header:<ID:11303041234760730243 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-scheduler-default-k8s-different-port-20210310205202-6496.166b179132e4c88c\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-scheduler-default-k8s-different-port-20210310205202-6496.166b179132e4c88c\" value_size:883 lease:2079669197905954400 >> failure:<>>" with result "size:16" took too long (225.2519ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 21:16:13.975182   20736 out.go:335] unable to parse "* I0310 21:16:04.749368   16712 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 21:16:04.749368   16712 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 21:16:13.988531   20736 out.go:335] unable to parse "* I0310 21:16:05.798294   16712 cli_runner.go:168] Completed: docker system info --format \"{{json .}}\": (1.0489272s)\n": template: * I0310 21:16:05.798294   16712 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0489272s)
	:1: function "json" not defined - returning raw string.
	E0310 21:16:14.178795   20736 out.go:335] unable to parse "* I0310 21:16:06.931988   16712 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 21:16:06.931988   16712 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 21:16:14.192165   20736 out.go:335] unable to parse "* I0310 21:16:08.067048   16712 cli_runner.go:168] Completed: docker system info --format \"{{json .}}\": (1.1347721s)\n": template: * I0310 21:16:08.067048   16712 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.1347721s)
	:1: function "json" not defined - returning raw string.
	E0310 21:16:14.637860   20736 out.go:335] unable to parse "* I0310 21:16:06.300744    7648 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 21:16:06.300744    7648 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 21:16:14.873321   20736 out.go:335] unable to parse "* I0310 21:16:07.420839    7648 cli_runner.go:168] Completed: docker system info --format \"{{json .}}\": (1.1182531s)\n": template: * I0310 21:16:07.420839    7648 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.1182531s)
	:1: function "json" not defined - returning raw string.
	E0310 21:16:14.884444   20736 out.go:335] unable to parse "* I0310 21:16:07.432724    7648 cli_runner.go:115] Run: docker info --format \"'{{json .SecurityOptions}}'\"\n": template: * I0310 21:16:07.432724    7648 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	:1: function "json" not defined - returning raw string.
	E0310 21:16:14.896404   20736 out.go:335] unable to parse "* I0310 21:16:08.605487    7648 cli_runner.go:168] Completed: docker info --format \"'{{json .SecurityOptions}}'\": (1.1718812s)\n": template: * I0310 21:16:08.605487    7648 cli_runner.go:168] Completed: docker info --format "'{{json .SecurityOptions}}'": (1.1718812s)
	:1: function "json" not defined - returning raw string.
	E0310 21:16:14.992519   20736 out.go:340] unable to execute * I0310 21:16:08.909622   16712 cli_runner.go:115] Run: docker network inspect calico-20210310211603-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	: template: * I0310 21:16:08.909622   16712 cli_runner.go:115] Run: docker network inspect calico-20210310211603-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	:1:282: executing "* I0310 21:16:08.909622   16712 cli_runner.go:115] Run: docker network inspect calico-20210310211603-6496 --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\"\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 21:16:15.002742   20736 out.go:340] unable to execute * W0310 21:16:09.559483   16712 cli_runner.go:162] docker network inspect calico-20210310211603-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	: template: * W0310 21:16:09.559483   16712 cli_runner.go:162] docker network inspect calico-20210310211603-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	:1:277: executing "* W0310 21:16:09.559483   16712 cli_runner.go:162] docker network inspect calico-20210310211603-6496 --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\" returned with exit code 1\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 21:16:15.094030   20736 out.go:340] unable to execute * I0310 21:16:10.250189   16712 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	: template: * I0310 21:16:10.250189   16712 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	:1:262: executing "* I0310 21:16:10.250189   16712 cli_runner.go:115] Run: docker network inspect bridge --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\"\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.

                                                
                                                
** /stderr **
helpers_test.go:250: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20210310205202-6496 -n default-k8s-different-port-20210310205202-6496

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
helpers_test.go:250: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20210310205202-6496 -n default-k8s-different-port-20210310205202-6496: (16.9611599s)
helpers_test.go:257: (dbg) Run:  kubectl --context default-k8s-different-port-20210310205202-6496 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
helpers_test.go:257: (dbg) Done: kubectl --context default-k8s-different-port-20210310205202-6496 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: (2.6475917s)
helpers_test.go:263: non-running pods: 
helpers_test.go:265: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/FirstStart]: describe non-running pods <======
helpers_test.go:268: (dbg) Run:  kubectl --context default-k8s-different-port-20210310205202-6496 describe pod 
helpers_test.go:268: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20210310205202-6496 describe pod : exit status 1 (238.5925ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:270: kubectl --context default-k8s-different-port-20210310205202-6496 describe pod : exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/FirstStart (1473.17s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (1998.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-20210310205436-6496 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.20.5-rc.0

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:155: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p newest-cni-20210310205436-6496 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.20.5-rc.0: exit status 1 (30m0.0449353s)

                                                
                                                
-- stdout --
	* [newest-cni-20210310205436-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - MINIKUBE_LOCATION=10722
	* Using the docker driver based on user configuration
	* Starting control plane node newest-cni-20210310205436-6496 in cluster newest-cni-20210310205436-6496
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.20.5-rc.0 on Docker 20.10.3 ...
	  - kubelet.network-plugin=cni
	  - kubeadm.pod-network-cidr=192.168.111.111/16
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v4
	* Enabled addons: default-storageclass, storage-provisioner

                                                
                                                
-- /stdout --
** stderr ** 
	I0310 20:54:37.409378   18752 out.go:239] Setting OutFile to fd 1728 ...
	I0310 20:54:37.411010   18752 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 20:54:37.411010   18752 out.go:252] Setting ErrFile to fd 2844...
	I0310 20:54:37.411010   18752 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 20:54:37.422352   18752 out.go:246] Setting JSON to false
	I0310 20:54:37.428140   18752 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":35143,"bootTime":1615374534,"procs":119,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0310 20:54:37.428894   18752 start.go:116] gopshost.Virtualization returned error: not implemented yet
	I0310 20:54:37.432933   18752 out.go:129] * [newest-cni-20210310205436-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0310 20:54:37.436905   18752 out.go:129]   - MINIKUBE_LOCATION=10722
	I0310 20:54:37.454379   18752 driver.go:323] Setting default libvirt URI to qemu:///system
	I0310 20:54:38.078035   18752 docker.go:119] docker version: linux-20.10.2
	I0310 20:54:38.086759   18752 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 20:54:39.853789   18752 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.7667606s)
	I0310 20:54:39.860371   18752 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:104 OomKillDisable:true NGoroutines:88 SystemTime:2021-03-10 20:54:39.0915303 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[
] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 20:54:39.864167   18752 out.go:129] * Using the docker driver based on user configuration
	I0310 20:54:39.864492   18752 start.go:276] selected driver: docker
	I0310 20:54:39.864492   18752 start.go:718] validating driver "docker" against <nil>
	I0310 20:54:39.864492   18752 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0310 20:54:42.000537   18752 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 20:54:43.025382   18752 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0248581s)
	I0310 20:54:43.025979   18752 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:103 OomKillDisable:true NGoroutines:87 SystemTime:2021-03-10 20:54:42.5515979 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 20:54:43.026636   18752 start_flags.go:253] no existing cluster config was found, will generate one from the flags 
	W0310 20:54:43.026636   18752 out.go:191] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0310 20:54:43.027584   18752 start_flags.go:736] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0310 20:54:43.027781   18752 cni.go:74] Creating CNI manager for ""
	I0310 20:54:43.027781   18752 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	I0310 20:54:43.027781   18752 start_flags.go:398] config:
	{Name:newest-cni-20210310205436-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.5-rc.0 ClusterName:newest-cni-20210310205436-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 20:54:43.032374   18752 out.go:129] * Starting control plane node newest-cni-20210310205436-6496 in cluster newest-cni-20210310205436-6496
	I0310 20:54:43.691951   18752 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	I0310 20:54:43.692169   18752 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	I0310 20:54:43.692169   18752 preload.go:97] Checking if preload exists for k8s version v1.20.5-rc.0 and runtime docker
	I0310 20:54:43.692169   18752 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.5-rc.0-docker-overlay2-amd64.tar.lz4
	I0310 20:54:43.692691   18752 cache.go:54] Caching tarball of preloaded images
	I0310 20:54:43.693541   18752 preload.go:131] Found C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.5-rc.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0310 20:54:43.693989   18752 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.5-rc.0 on docker
	I0310 20:54:43.696562   18752 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\newest-cni-20210310205436-6496\config.json ...
	I0310 20:54:43.697296   18752 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\newest-cni-20210310205436-6496\config.json: {Name:mk214e3b51f9f4db15aedc6f0344e668126563ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:54:43.716511   18752 cache.go:185] Successfully downloaded all kic artifacts
	I0310 20:54:43.717411   18752 start.go:313] acquiring machines lock for newest-cni-20210310205436-6496: {Name:mk05af11c35c7bfab2e343332dc8c42b3e4c327f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 20:54:43.717411   18752 start.go:317] acquired machines lock for "newest-cni-20210310205436-6496" in 0s
	I0310 20:54:43.717411   18752 start.go:89] Provisioning new machine with config: &{Name:newest-cni-20210310205436-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.5-rc.0 ClusterName:newest-cni-20210310205436-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.5-rc.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.5-rc.0 ControlPlane:true Worker:true}
	I0310 20:54:43.717411   18752 start.go:126] createHost starting for "" (driver="docker")
	I0310 20:54:44.099357   18752 out.go:150] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0310 20:54:44.100577   18752 start.go:160] libmachine.API.Create for "newest-cni-20210310205436-6496" (driver="docker")
	I0310 20:54:44.100928   18752 client.go:168] LocalClient.Create starting
	I0310 20:54:44.101586   18752 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\ca.pem
	I0310 20:54:44.101586   18752 main.go:121] libmachine: Decoding PEM data...
	I0310 20:54:44.101961   18752 main.go:121] libmachine: Parsing certificate...
	I0310 20:54:44.102526   18752 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\cert.pem
	I0310 20:54:44.102526   18752 main.go:121] libmachine: Decoding PEM data...
	I0310 20:54:44.102881   18752 main.go:121] libmachine: Parsing certificate...
	I0310 20:54:44.131637   18752 cli_runner.go:115] Run: docker network inspect newest-cni-20210310205436-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0310 20:54:44.862437   18752 cli_runner.go:162] docker network inspect newest-cni-20210310205436-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0310 20:54:44.873499   18752 network_create.go:240] running [docker network inspect newest-cni-20210310205436-6496] to gather additional debugging logs...
	I0310 20:54:44.873499   18752 cli_runner.go:115] Run: docker network inspect newest-cni-20210310205436-6496
	W0310 20:54:45.496754   18752 cli_runner.go:162] docker network inspect newest-cni-20210310205436-6496 returned with exit code 1
	I0310 20:54:45.496754   18752 network_create.go:243] error running [docker network inspect newest-cni-20210310205436-6496]: docker network inspect newest-cni-20210310205436-6496: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20210310205436-6496
	I0310 20:54:45.496754   18752 network_create.go:245] output of [docker network inspect newest-cni-20210310205436-6496]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20210310205436-6496
	
	** /stderr **
	I0310 20:54:45.505797   18752 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0310 20:54:46.155120   18752 network.go:193] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0310 20:54:46.156271   18752 network_create.go:91] attempt to create network 192.168.49.0/24 with subnet: newest-cni-20210310205436-6496 and gateway 192.168.49.1 and MTU of 1500 ...
	I0310 20:54:46.173173   18752 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20210310205436-6496
	W0310 20:54:46.776962   18752 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20210310205436-6496 returned with exit code 1
	W0310 20:54:46.777472   18752 out.go:191] ! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts
	! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts
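[Editor's note] The sequence above — inspect the bridge network, pick a free private /24, then `docker network create` — can be sketched as follows. The starting subnet matches the log; the +1 /24 step and the helper name `pick_free_subnet` are our own illustrative assumptions, not minikube's exact implementation:

```python
# Sketch of a free-subnet probe before `docker network create`.
# ASSUMPTION: minikube's real candidate sequence may step differently;
# only the starting subnet and the 20-attempt cap come from the log above.
import ipaddress

def pick_free_subnet(taken, start="192.168.49.0/24", attempts=20):
    cand = ipaddress.ip_network(start)
    taken = [ipaddress.ip_network(t) for t in taken]
    for _ in range(attempts):
        if not any(cand.overlaps(t) for t in taken):
            return str(cand)  # first candidate free of existing networks wins
        # advance to the next adjacent /24 block
        cand = ipaddress.ip_network((int(cand.network_address) + 256, 24))
    raise RuntimeError(f"failed to create network after {attempts} attempts")

print(pick_free_subnet(["172.17.0.0/16"]))  # 192.168.49.0/24
```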
	I0310 20:54:46.796093   18752 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0310 20:54:47.463657   18752 cli_runner.go:115] Run: docker volume create newest-cni-20210310205436-6496 --label name.minikube.sigs.k8s.io=newest-cni-20210310205436-6496 --label created_by.minikube.sigs.k8s.io=true
	I0310 20:54:48.126891   18752 oci.go:102] Successfully created a docker volume newest-cni-20210310205436-6496
	I0310 20:54:48.136057   18752 cli_runner.go:115] Run: docker run --rm --name newest-cni-20210310205436-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20210310205436-6496 --entrypoint /usr/bin/test -v newest-cni-20210310205436-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib
	I0310 20:54:57.048979   18752 cli_runner.go:168] Completed: docker run --rm --name newest-cni-20210310205436-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20210310205436-6496 --entrypoint /usr/bin/test -v newest-cni-20210310205436-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib: (8.9130276s)
	I0310 20:54:57.050085   18752 oci.go:106] Successfully prepared a docker volume newest-cni-20210310205436-6496
	I0310 20:54:57.050358   18752 preload.go:97] Checking if preload exists for k8s version v1.20.5-rc.0 and runtime docker
	I0310 20:54:57.050968   18752 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.5-rc.0-docker-overlay2-amd64.tar.lz4
	I0310 20:54:57.051185   18752 kic.go:175] Starting extracting preloaded images to volume ...
	I0310 20:54:57.069064   18752 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 20:54:57.073461   18752 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.5-rc.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20210310205436-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir
	W0310 20:54:57.898248   18752 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.5-rc.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20210310205436-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I0310 20:54:57.899111   18752 kic.go:182] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.5-rc.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20210310205436-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: [garbled binary .NET System.Exception serialization]
	The notification platform is unavailable.
	   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)
	   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.WPF\PromptShareDirectory.cs:line 53
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__8.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 95
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 55
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.HttpApi\Controllers\FilesharingController.cs:line 21
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()
	CreateToastNotifier
	Windows.UI, Version=255.255.255.255, Culture=neutral, PublicKeyToken=null, ContentType=WindowsRuntime
	Windows.UI.Notifications.ToastNotificationManager
	Windows.UI.Notifications.ToastNotifier CreateToastNotifier(System.String)
	RestrictedDescription: The notification platform is unavailable.
	[remaining binary exception-serialization payload removed]
	See 'docker run --help'.
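[Editor's note] The exit status 125 reported for this `docker run` is the Docker CLI's convention for failures in the daemon/CLI itself (here, Docker Desktop's file-sharing layer throwing "The notification platform is unavailable"), as opposed to 126/127, which report problems invoking the command inside the container. A small classifier sketch; `classify_docker_exit` is our own helper name, not a docker API:

```python
# Docker CLI reserved exit statuses: 125 = daemon/CLI error, 126 = contained
# command cannot be invoked, 127 = contained command not found. Any other
# value is the exit status of the command run inside the container.
def classify_docker_exit(status: int) -> str:
    if status == 125:
        return "docker daemon/CLI error"       # e.g. failed volume mount, as above
    if status == 126:
        return "contained command cannot be invoked"
    if status == 127:
        return "contained command not found"
    return f"container command exited with {status}"

print(classify_docker_exit(125))  # docker daemon/CLI error
```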
	I0310 20:54:58.192960   18752 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.1236168s)
	I0310 20:54:58.194046   18752 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:104 OomKillDisable:true NGoroutines:87 SystemTime:2021-03-10 20:54:57.6721698 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 20:54:58.212102   18752 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0310 20:54:59.335478   18752 cli_runner.go:168] Completed: docker info --format "'{{json .SecurityOptions}}'": (1.12273s)
	I0310 20:54:59.343606   18752 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-20210310205436-6496 --name newest-cni-20210310205436-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20210310205436-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-20210310205436-6496 --volume newest-cni-20210310205436-6496:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
	I0310 20:55:04.355389   18752 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-20210310205436-6496 --name newest-cni-20210310205436-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20210310205436-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-20210310205436-6496 --volume newest-cni-20210310205436-6496:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e: (5.0109672s)
	I0310 20:55:04.369529   18752 cli_runner.go:115] Run: docker container inspect newest-cni-20210310205436-6496 --format={{.State.Running}}
	I0310 20:55:05.012640   18752 cli_runner.go:115] Run: docker container inspect newest-cni-20210310205436-6496 --format={{.State.Status}}
	I0310 20:55:05.674325   18752 cli_runner.go:115] Run: docker exec newest-cni-20210310205436-6496 stat /var/lib/dpkg/alternatives/iptables
	I0310 20:55:06.837817   18752 cli_runner.go:168] Completed: docker exec newest-cni-20210310205436-6496 stat /var/lib/dpkg/alternatives/iptables: (1.1635048s)
	I0310 20:55:06.838213   18752 oci.go:278] the created container "newest-cni-20210310205436-6496" has a running status.
	I0310 20:55:06.838213   18752 kic.go:206] Creating ssh key for kic: C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa...
	I0310 20:55:07.155594   18752 kic_runner.go:188] docker (temp): C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0310 20:55:09.493349   18752 cli_runner.go:115] Run: docker container inspect newest-cni-20210310205436-6496 --format={{.State.Status}}
	I0310 20:55:10.156355   18752 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0310 20:55:10.156355   18752 kic_runner.go:115] Args: [docker exec --privileged newest-cni-20210310205436-6496 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0310 20:55:11.773562   18752 kic_runner.go:124] Done: [docker exec --privileged newest-cni-20210310205436-6496 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.6172245s)
	I0310 20:55:11.780066   18752 kic.go:240] ensuring only current user has permissions to key file located at : C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa...
	I0310 20:55:12.654471   18752 cli_runner.go:115] Run: docker container inspect newest-cni-20210310205436-6496 --format={{.State.Status}}
	I0310 20:55:13.249376   18752 machine.go:88] provisioning docker machine ...
	I0310 20:55:13.254923   18752 ubuntu.go:169] provisioning hostname "newest-cni-20210310205436-6496"
	I0310 20:55:13.262836   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 20:55:13.879573   18752 main.go:121] libmachine: Using SSH client type: native
	I0310 20:55:13.892735   18752 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55161 <nil> <nil>}
	I0310 20:55:13.892735   18752 main.go:121] libmachine: About to run SSH command:
	sudo hostname newest-cni-20210310205436-6496 && echo "newest-cni-20210310205436-6496" | sudo tee /etc/hostname
	I0310 20:55:13.901638   18752 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0310 20:55:18.517386   18752 main.go:121] libmachine: SSH cmd err, output: <nil>: newest-cni-20210310205436-6496
	
	I0310 20:55:18.525141   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 20:55:19.131649   18752 main.go:121] libmachine: Using SSH client type: native
	I0310 20:55:19.132247   18752 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55161 <nil> <nil>}
	I0310 20:55:19.132247   18752 main.go:121] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20210310205436-6496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20210310205436-6496/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20210310205436-6496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0310 20:55:21.088497   18752 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	I0310 20:55:21.089022   18752 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	I0310 20:55:21.089022   18752 ubuntu.go:177] setting up certificates
	I0310 20:55:21.089022   18752 provision.go:83] configureAuth start
	I0310 20:55:21.098852   18752 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210310205436-6496
	I0310 20:55:21.723754   18752 provision.go:137] copyHostCerts
	I0310 20:55:21.724831   18752 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	I0310 20:55:21.725040   18752 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	I0310 20:55:21.725199   18752 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	I0310 20:55:21.730474   18752 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	I0310 20:55:21.730474   18752 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	I0310 20:55:21.732501   18752 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	I0310 20:55:21.736550   18752 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	I0310 20:55:21.736550   18752 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	I0310 20:55:21.737604   18752 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	I0310 20:55:21.740467   18752 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.newest-cni-20210310205436-6496 san=[172.17.0.4 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20210310205436-6496]
	I0310 20:55:22.032708   18752 provision.go:165] copyRemoteCerts
	I0310 20:55:22.059519   18752 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0310 20:55:22.067529   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 20:55:22.732777   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 20:55:23.494989   18752 ssh_runner.go:189] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.4354848s)
	I0310 20:55:23.495679   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1265 bytes)
	I0310 20:55:24.183750   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0310 20:55:24.746738   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0310 20:55:25.495995   18752 provision.go:86] duration metric: configureAuth took 4.4070197s
	I0310 20:55:25.495995   18752 ubuntu.go:193] setting minikube options for container-runtime
	I0310 20:55:25.518711   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 20:55:26.204994   18752 main.go:121] libmachine: Using SSH client type: native
	I0310 20:55:26.205351   18752 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55161 <nil> <nil>}
	I0310 20:55:26.205867   18752 main.go:121] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0310 20:55:27.736621   18752 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0310 20:55:27.736621   18752 ubuntu.go:71] root file system type: overlay
	I0310 20:55:27.736621   18752 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	I0310 20:55:27.744995   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 20:55:28.389830   18752 main.go:121] libmachine: Using SSH client type: native
	I0310 20:55:28.390837   18752 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55161 <nil> <nil>}
	I0310 20:55:28.390837   18752 main.go:121] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0310 20:55:30.178240   18752 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0310 20:55:30.195584   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 20:55:30.864031   18752 main.go:121] libmachine: Using SSH client type: native
	I0310 20:55:30.864370   18752 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55161 <nil> <nil>}
	I0310 20:55:30.864654   18752 main.go:121] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0310 20:55:49.876206   18752 main.go:121] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-01-29 14:31:32.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-03-10 20:55:30.159072000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0310 20:55:49.876595   18752 machine.go:91] provisioned docker machine in 36.6274216s
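[Editor's note] The unit update just logged follows a write-candidate, diff, swap-and-restart pattern: `docker.service.new` replaces `docker.service` only when `diff` reports a change, so an unchanged config skips the daemon restart. A minimal sketch of the same idempotent swap using plain files (no systemd or sudo involved; file names illustrative):

```python
# Sketch of minikube's "write .new, diff, swap" update for docker.service,
# reduced to plain files so it runs anywhere.
import filecmp, os, shutil, tempfile

def install_if_changed(current: str, candidate: str) -> bool:
    """Swap candidate into place only when it differs from current.
    Returns True when a swap (and hence a service restart) is needed."""
    if os.path.exists(current) and filecmp.cmp(current, candidate, shallow=False):
        os.remove(candidate)          # identical: drop candidate, skip restart
        return False
    shutil.move(candidate, current)   # changed (or new): install the unit file
    return True

d = tempfile.mkdtemp()
cur = os.path.join(d, "docker.service")
new = os.path.join(d, "docker.service.new")
open(new, "w").write("[Service]\nRestart=on-failure\n")
first = install_if_changed(cur, new)    # no current unit yet -> installs
open(new, "w").write("[Service]\nRestart=on-failure\n")
second = install_if_changed(cur, new)   # identical content -> no-op
print(first, second)  # True False
```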
	I0310 20:55:49.876595   18752 client.go:171] LocalClient.Create took 1m5.7763826s
	I0310 20:55:49.876595   18752 start.go:168] duration metric: libmachine.API.Create for "newest-cni-20210310205436-6496" took 1m5.7767341s
	I0310 20:55:49.876928   18752 start.go:267] post-start starting for "newest-cni-20210310205436-6496" (driver="docker")
	I0310 20:55:49.876928   18752 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0310 20:55:49.890988   18752 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0310 20:55:49.898044   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 20:55:50.522346   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 20:55:51.071328   18752 ssh_runner.go:189] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.1803517s)
	I0310 20:55:51.092989   18752 ssh_runner.go:149] Run: cat /etc/os-release
	I0310 20:55:51.159314   18752 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0310 20:55:51.159659   18752 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0310 20:55:51.159936   18752 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0310 20:55:51.159936   18752 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0310 20:55:51.160354   18752 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	I0310 20:55:51.161931   18752 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	I0310 20:55:51.170549   18752 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	I0310 20:55:51.171718   18752 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	I0310 20:55:51.184151   18752 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	I0310 20:55:51.265250   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	I0310 20:55:51.516171   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	I0310 20:55:51.708707   18752 start.go:270] post-start completed in 1.8317972s
	I0310 20:55:51.753392   18752 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210310205436-6496
	I0310 20:55:52.411025   18752 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\newest-cni-20210310205436-6496\config.json ...
	I0310 20:55:52.436360   18752 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0310 20:55:52.458038   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 20:55:53.085660   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 20:55:53.525023   18752 ssh_runner.go:189] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.0886733s)
	I0310 20:55:53.525425   18752 start.go:129] duration metric: createHost completed in 1m9.8087696s
	I0310 20:55:53.525425   18752 start.go:80] releasing machines lock for "newest-cni-20210310205436-6496", held for 1m9.8087696s
	I0310 20:55:53.538899   18752 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20210310205436-6496
	I0310 20:55:54.178226   18752 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0310 20:55:54.184603   18752 ssh_runner.go:149] Run: systemctl --version
	I0310 20:55:54.188495   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 20:55:54.194500   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 20:55:54.854737   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 20:55:54.872427   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 20:55:55.477131   18752 ssh_runner.go:189] Completed: systemctl --version: (1.2925408s)
	I0310 20:55:55.478130   18752 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.2999168s)
	I0310 20:55:55.490692   18752 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0310 20:55:55.590615   18752 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 20:55:55.698527   18752 cruntime.go:206] skipping containerd shutdown because we are bound to it
	I0310 20:55:55.722522   18752 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0310 20:55:55.859965   18752 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0310 20:55:56.106763   18752 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 20:55:56.250433   18752 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0310 20:55:57.506743   18752 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.2563212s)
	I0310 20:55:57.518227   18752 ssh_runner.go:149] Run: sudo systemctl start docker
	I0310 20:55:57.690022   18752 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0310 20:55:58.393686   18752 out.go:150] * Preparing Kubernetes v1.20.5-rc.0 on Docker 20.10.3 ...
	I0310 20:55:58.404202   18752 cli_runner.go:115] Run: docker exec -t newest-cni-20210310205436-6496 dig +short host.docker.internal
	I0310 20:55:59.490154   18752 cli_runner.go:168] Completed: docker exec -t newest-cni-20210310205436-6496 dig +short host.docker.internal: (1.0859631s)
	I0310 20:55:59.490298   18752 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	I0310 20:55:59.511994   18752 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0310 20:55:59.545148   18752 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0310 20:55:59.724069   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 20:56:00.377904   18752 localpath.go:92] copying C:\Users\jenkins\.minikube\client.crt -> C:\Users\jenkins\.minikube\profiles\newest-cni-20210310205436-6496\client.crt
	I0310 20:56:00.417738   18752 localpath.go:117] copying C:\Users\jenkins\.minikube\client.key -> C:\Users\jenkins\.minikube\profiles\newest-cni-20210310205436-6496\client.key
	I0310 20:56:00.449270   18752 out.go:129]   - kubelet.network-plugin=cni
	I0310 20:56:00.452411   18752 out.go:129]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0310 20:56:00.453427   18752 preload.go:97] Checking if preload exists for k8s version v1.20.5-rc.0 and runtime docker
	I0310 20:56:00.453832   18752 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.5-rc.0-docker-overlay2-amd64.tar.lz4
	I0310 20:56:00.464412   18752 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 20:56:01.303903   18752 docker.go:423] Got preloaded images: 
	I0310 20:56:01.303903   18752 docker.go:429] k8s.gcr.io/kube-proxy:v1.20.5-rc.0 wasn't preloaded
	I0310 20:56:01.324325   18752 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0310 20:56:01.503514   18752 ssh_runner.go:149] Run: which lz4
	I0310 20:56:01.602214   18752 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0310 20:56:01.678514   18752 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0310 20:56:01.678832   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.5-rc.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (515786445 bytes)
	I0310 20:57:08.942406   18752 docker.go:388] Took 67.361995 seconds to copy over tarball
	I0310 20:57:08.955687   18752 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0310 20:57:47.339560   18752 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (38.3840059s)
	I0310 20:57:47.339768   18752 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0310 20:57:48.997618   18752 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0310 20:57:49.113930   18752 ssh_runner.go:316] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3145 bytes)
	I0310 20:57:49.349461   18752 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0310 20:57:50.435560   18752 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.0861056s)
	I0310 20:57:50.448877   18752 ssh_runner.go:149] Run: sudo systemctl restart docker
	I0310 20:57:55.641096   18752 ssh_runner.go:189] Completed: sudo systemctl restart docker: (5.1913393s)
	I0310 20:57:55.653991   18752 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 20:57:56.403448   18752 docker.go:423] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.20.5-rc.0
	k8s.gcr.io/kube-controller-manager:v1.20.5-rc.0
	k8s.gcr.io/kube-proxy:v1.20.5-rc.0
	k8s.gcr.io/kube-scheduler:v1.20.5-rc.0
	kubernetesui/dashboard:v2.1.0
	gcr.io/k8s-minikube/storage-provisioner:v4
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0310 20:57:56.404405   18752 cache_images.go:73] Images are preloaded, skipping loading
	I0310 20:57:56.409564   18752 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0310 20:57:57.906715   18752 ssh_runner.go:189] Completed: docker info --format {{.CgroupDriver}}: (1.4971606s)
	I0310 20:57:57.906715   18752 cni.go:74] Creating CNI manager for ""
	I0310 20:57:57.906715   18752 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	I0310 20:57:57.907173   18752 kubeadm.go:84] Using pod CIDR: 192.168.111.111/16
	I0310 20:57:57.907173   18752 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:172.17.0.4 APIServerPort:8443 KubernetesVersion:v1.20.5-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20210310205436-6496 NodeName:newest-cni-20210310205436-6496 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0310 20:57:57.907574   18752 kubeadm.go:154] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.0.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "newest-cni-20210310205436-6496"
	  kubeletExtraArgs:
	    node-ip: 172.17.0.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.0.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.5-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	
	I0310 20:57:57.908339   18752 kubeadm.go:919] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.5-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20210310205436-6496 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=172.17.0.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.5-rc.0 ClusterName:newest-cni-20210310205436-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0310 20:57:57.916966   18752 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.5-rc.0
	I0310 20:57:57.962304   18752 binaries.go:44] Found k8s binaries, skipping transfer
	I0310 20:57:57.968835   18752 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0310 20:57:58.096666   18752 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (417 bytes)
	I0310 20:57:58.299503   18752 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0310 20:57:58.691514   18752 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1997 bytes)
	I0310 20:57:58.866104   18752 ssh_runner.go:149] Run: grep 172.17.0.4	control-plane.minikube.internal$ /etc/hosts
	I0310 20:57:58.895818   18752 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "172.17.0.4	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0310 20:57:59.150896   18752 certs.go:52] Setting up C:\Users\jenkins\.minikube\profiles\newest-cni-20210310205436-6496 for IP: 172.17.0.4
	I0310 20:57:59.151794   18752 certs.go:171] skipping minikubeCA CA generation: C:\Users\jenkins\.minikube\ca.key
	I0310 20:57:59.152250   18752 certs.go:171] skipping proxyClientCA CA generation: C:\Users\jenkins\.minikube\proxy-client-ca.key
	I0310 20:57:59.153092   18752 certs.go:275] skipping minikube-user signed cert generation: C:\Users\jenkins\.minikube\profiles\newest-cni-20210310205436-6496\client.key
	I0310 20:57:59.153479   18752 certs.go:279] generating minikube signed cert: C:\Users\jenkins\.minikube\profiles\newest-cni-20210310205436-6496\apiserver.key.fb01c024
	I0310 20:57:59.153479   18752 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\newest-cni-20210310205436-6496\apiserver.crt.fb01c024 with IP's: [172.17.0.4 10.96.0.1 127.0.0.1 10.0.0.1]
	I0310 20:57:59.273716   18752 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\newest-cni-20210310205436-6496\apiserver.crt.fb01c024 ...
	I0310 20:57:59.273716   18752 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\newest-cni-20210310205436-6496\apiserver.crt.fb01c024: {Name:mkf1ce10a7abae38d6d2e5721a398e71610a4497 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:57:59.292558   18752 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\newest-cni-20210310205436-6496\apiserver.key.fb01c024 ...
	I0310 20:57:59.292558   18752 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\newest-cni-20210310205436-6496\apiserver.key.fb01c024: {Name:mk75ad358ad428adb0930d2086afa3aa71d6dbc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:57:59.307164   18752 certs.go:290] copying C:\Users\jenkins\.minikube\profiles\newest-cni-20210310205436-6496\apiserver.crt.fb01c024 -> C:\Users\jenkins\.minikube\profiles\newest-cni-20210310205436-6496\apiserver.crt
	I0310 20:57:59.316543   18752 certs.go:294] copying C:\Users\jenkins\.minikube\profiles\newest-cni-20210310205436-6496\apiserver.key.fb01c024 -> C:\Users\jenkins\.minikube\profiles\newest-cni-20210310205436-6496\apiserver.key
	I0310 20:57:59.324565   18752 certs.go:279] generating aggregator signed cert: C:\Users\jenkins\.minikube\profiles\newest-cni-20210310205436-6496\proxy-client.key
	I0310 20:57:59.324565   18752 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\newest-cni-20210310205436-6496\proxy-client.crt with IP's: []
	I0310 20:57:59.696209   18752 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\newest-cni-20210310205436-6496\proxy-client.crt ...
	I0310 20:57:59.696209   18752 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\newest-cni-20210310205436-6496\proxy-client.crt: {Name:mk3d762c13e76102e2060f6c3af7726599857927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:57:59.718237   18752 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\newest-cni-20210310205436-6496\proxy-client.key ...
	I0310 20:57:59.718237   18752 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\newest-cni-20210310205436-6496\proxy-client.key: {Name:mk66a195689704b58b5cdeca641c3cca480d8028 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 20:57:59.732186   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992.pem (1338 bytes)
	W0310 20:57:59.732186   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.732186   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140.pem (1338 bytes)
	W0310 20:57:59.732186   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.733180   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156.pem (1338 bytes)
	W0310 20:57:59.733180   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.733180   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056.pem (1338 bytes)
	W0310 20:57:59.733180   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.733180   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476.pem (1338 bytes)
	W0310 20:57:59.734178   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.734178   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728.pem (1338 bytes)
	W0310 20:57:59.734178   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.734178   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984.pem (1338 bytes)
	W0310 20:57:59.734178   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.735175   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232.pem (1338 bytes)
	W0310 20:57:59.735175   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.735175   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512.pem (1338 bytes)
	W0310 20:57:59.735175   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.735175   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056.pem (1338 bytes)
	W0310 20:57:59.736177   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.736177   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516.pem (1338 bytes)
	W0310 20:57:59.736177   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.736177   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352.pem (1338 bytes)
	W0310 20:57:59.737182   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.737182   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920.pem (1338 bytes)
	W0310 20:57:59.737182   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.737182   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052.pem (1338 bytes)
	W0310 20:57:59.737182   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.738174   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452.pem (1338 bytes)
	W0310 20:57:59.738174   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.738174   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588.pem (1338 bytes)
	W0310 20:57:59.738174   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.738174   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944.pem (1338 bytes)
	W0310 20:57:59.739181   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.739181   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040.pem (1338 bytes)
	W0310 20:57:59.739181   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.739181   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172.pem (1338 bytes)
	W0310 20:57:59.740181   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.740181   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372.pem (1338 bytes)
	W0310 20:57:59.740181   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.740181   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396.pem (1338 bytes)
	W0310 20:57:59.740181   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.741174   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700.pem (1338 bytes)
	W0310 20:57:59.741174   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.741174   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736.pem (1338 bytes)
	W0310 20:57:59.741174   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.741174   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368.pem (1338 bytes)
	W0310 20:57:59.742180   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.742180   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492.pem (1338 bytes)
	W0310 20:57:59.742180   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.742180   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496.pem (1338 bytes)
	W0310 20:57:59.742180   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.743182   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552.pem (1338 bytes)
	W0310 20:57:59.743182   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.743182   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692.pem (1338 bytes)
	W0310 20:57:59.743182   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.743182   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856.pem (1338 bytes)
	W0310 20:57:59.744182   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.744182   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024.pem (1338 bytes)
	W0310 20:57:59.744182   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.744182   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160.pem (1338 bytes)
	W0310 20:57:59.744182   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.745182   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432.pem (1338 bytes)
	W0310 20:57:59.745182   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.745182   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440.pem (1338 bytes)
	W0310 20:57:59.745182   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.745182   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452.pem (1338 bytes)
	W0310 20:57:59.746181   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.746181   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800.pem (1338 bytes)
	W0310 20:57:59.746181   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.746181   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464.pem (1338 bytes)
	W0310 20:57:59.746181   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.747234   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748.pem (1338 bytes)
	W0310 20:57:59.747234   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.747234   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088.pem (1338 bytes)
	W0310 20:57:59.747234   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.747234   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520.pem (1338 bytes)
	W0310 20:57:59.748183   18752 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520_empty.pem, impossibly tiny 0 bytes
	I0310 20:57:59.748183   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca-key.pem (1679 bytes)
	I0310 20:57:59.748183   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca.pem (1078 bytes)
	I0310 20:57:59.748183   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\cert.pem (1123 bytes)
	I0310 20:57:59.749181   18752 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\key.pem (1679 bytes)
	I0310 20:57:59.767801   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\newest-cni-20210310205436-6496\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0310 20:57:59.948123   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\newest-cni-20210310205436-6496\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0310 20:58:00.175907   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\newest-cni-20210310205436-6496\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0310 20:58:00.340694   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\newest-cni-20210310205436-6496\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0310 20:58:00.549283   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0310 20:58:00.690843   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0310 20:58:00.983947   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0310 20:58:01.213901   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0310 20:58:01.432113   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8464.pem --> /usr/share/ca-certificates/8464.pem (1338 bytes)
	I0310 20:58:01.611787   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3516.pem --> /usr/share/ca-certificates/3516.pem (1338 bytes)
	I0310 20:58:01.862031   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9088.pem --> /usr/share/ca-certificates/9088.pem (1338 bytes)
	I0310 20:58:02.455380   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\10992.pem --> /usr/share/ca-certificates/10992.pem (1338 bytes)
	I0310 20:58:03.267564   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1156.pem --> /usr/share/ca-certificates/1156.pem (1338 bytes)
	I0310 20:58:04.217532   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\2512.pem --> /usr/share/ca-certificates/2512.pem (1338 bytes)
	I0310 20:58:04.999899   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4452.pem --> /usr/share/ca-certificates/4452.pem (1338 bytes)
	I0310 20:58:05.314673   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4944.pem --> /usr/share/ca-certificates/4944.pem (1338 bytes)
	I0310 20:58:05.709980   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4588.pem --> /usr/share/ca-certificates/4588.pem (1338 bytes)
	I0310 20:58:05.962751   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8748.pem --> /usr/share/ca-certificates/8748.pem (1338 bytes)
	I0310 20:58:06.207538   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7160.pem --> /usr/share/ca-certificates/7160.pem (1338 bytes)
	I0310 20:58:06.456963   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5172.pem --> /usr/share/ca-certificates/5172.pem (1338 bytes)
	I0310 20:58:06.703055   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5396.pem --> /usr/share/ca-certificates/5396.pem (1338 bytes)
	I0310 20:58:06.904826   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6692.pem --> /usr/share/ca-certificates/6692.pem (1338 bytes)
	I0310 20:58:07.054901   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1140.pem --> /usr/share/ca-certificates/1140.pem (1338 bytes)
	I0310 20:58:07.388289   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5736.pem --> /usr/share/ca-certificates/5736.pem (1338 bytes)
	I0310 20:58:07.684924   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5040.pem --> /usr/share/ca-certificates/5040.pem (1338 bytes)
	I0310 20:58:07.996087   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5700.pem --> /usr/share/ca-certificates/5700.pem (1338 bytes)
	I0310 20:58:08.173334   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\800.pem --> /usr/share/ca-certificates/800.pem (1338 bytes)
	I0310 20:58:08.446017   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\12056.pem --> /usr/share/ca-certificates/12056.pem (1338 bytes)
	I0310 20:58:08.756907   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4052.pem --> /usr/share/ca-certificates/4052.pem (1338 bytes)
	I0310 20:58:09.098452   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\232.pem --> /usr/share/ca-certificates/232.pem (1338 bytes)
	I0310 20:58:09.420857   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3920.pem --> /usr/share/ca-certificates/3920.pem (1338 bytes)
	I0310 20:58:09.652774   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\352.pem --> /usr/share/ca-certificates/352.pem (1338 bytes)
	I0310 20:58:09.877020   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6552.pem --> /usr/share/ca-certificates/6552.pem (1338 bytes)
	I0310 20:58:10.380328   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6496.pem --> /usr/share/ca-certificates/6496.pem (1338 bytes)
	I0310 20:58:10.654136   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3056.pem --> /usr/share/ca-certificates/3056.pem (1338 bytes)
	I0310 20:58:10.857227   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6492.pem --> /usr/share/ca-certificates/6492.pem (1338 bytes)
	I0310 20:58:11.119488   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7432.pem --> /usr/share/ca-certificates/7432.pem (1338 bytes)
	I0310 20:58:11.356529   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	I0310 20:58:11.568349   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1984.pem --> /usr/share/ca-certificates/1984.pem (1338 bytes)
	I0310 20:58:11.774501   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9520.pem --> /usr/share/ca-certificates/9520.pem (1338 bytes)
	I0310 20:58:11.922571   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6856.pem --> /usr/share/ca-certificates/6856.pem (1338 bytes)
	I0310 20:58:12.289720   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7024.pem --> /usr/share/ca-certificates/7024.pem (1338 bytes)
	I0310 20:58:12.867406   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6368.pem --> /usr/share/ca-certificates/6368.pem (1338 bytes)
	I0310 20:58:13.538066   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7440.pem --> /usr/share/ca-certificates/7440.pem (1338 bytes)
	I0310 20:58:14.171598   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1476.pem --> /usr/share/ca-certificates/1476.pem (1338 bytes)
	I0310 20:58:14.676425   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5372.pem --> /usr/share/ca-certificates/5372.pem (1338 bytes)
	I0310 20:58:15.061349   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0310 20:58:15.437356   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1728.pem --> /usr/share/ca-certificates/1728.pem (1338 bytes)
	I0310 20:58:15.678352   18752 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0310 20:58:15.954143   18752 ssh_runner.go:149] Run: openssl version
	I0310 20:58:16.027535   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6496.pem && ln -fs /usr/share/ca-certificates/6496.pem /etc/ssl/certs/6496.pem"
	I0310 20:58:16.100811   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6496.pem
	I0310 20:58:16.139336   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 19:16 /usr/share/ca-certificates/6496.pem
	I0310 20:58:16.150506   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6496.pem
	I0310 20:58:16.280114   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6496.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:16.423895   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3056.pem && ln -fs /usr/share/ca-certificates/3056.pem /etc/ssl/certs/3056.pem"
	I0310 20:58:16.509168   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3056.pem
	I0310 20:58:16.540011   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:11 /usr/share/ca-certificates/3056.pem
	I0310 20:58:16.550258   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3056.pem
	I0310 20:58:16.598423   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3056.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:16.711544   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6492.pem && ln -fs /usr/share/ca-certificates/6492.pem /etc/ssl/certs/6492.pem"
	I0310 20:58:16.781663   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6492.pem
	I0310 20:58:16.812453   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 01:11 /usr/share/ca-certificates/6492.pem
	I0310 20:58:16.823130   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6492.pem
	I0310 20:58:16.876598   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6492.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:16.932262   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7432.pem && ln -fs /usr/share/ca-certificates/7432.pem /etc/ssl/certs/7432.pem"
	I0310 20:58:17.008628   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7432.pem
	I0310 20:58:17.059359   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 17:58 /usr/share/ca-certificates/7432.pem
	I0310 20:58:17.076776   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7432.pem
	I0310 20:58:17.146819   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7432.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:17.329746   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7452.pem && ln -fs /usr/share/ca-certificates/7452.pem /etc/ssl/certs/7452.pem"
	I0310 20:58:17.443119   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7452.pem
	I0310 20:58:17.501198   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 20 00:41 /usr/share/ca-certificates/7452.pem
	I0310 20:58:17.519127   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7452.pem
	I0310 20:58:17.580421   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7452.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:17.671658   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1984.pem && ln -fs /usr/share/ca-certificates/1984.pem /etc/ssl/certs/1984.pem"
	I0310 20:58:17.753518   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1984.pem
	I0310 20:58:17.829925   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 21:55 /usr/share/ca-certificates/1984.pem
	I0310 20:58:17.843206   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1984.pem
	I0310 20:58:17.900271   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1984.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:18.048486   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9520.pem && ln -fs /usr/share/ca-certificates/9520.pem /etc/ssl/certs/9520.pem"
	I0310 20:58:18.194224   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9520.pem
	I0310 20:58:18.374094   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 14:54 /usr/share/ca-certificates/9520.pem
	I0310 20:58:18.384017   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9520.pem
	I0310 20:58:18.435728   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9520.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:18.525301   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6856.pem && ln -fs /usr/share/ca-certificates/6856.pem /etc/ssl/certs/6856.pem"
	I0310 20:58:18.607194   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6856.pem
	I0310 20:58:18.660468   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 00:21 /usr/share/ca-certificates/6856.pem
	I0310 20:58:18.672885   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6856.pem
	I0310 20:58:18.760653   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6856.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:18.866731   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7024.pem && ln -fs /usr/share/ca-certificates/7024.pem /etc/ssl/certs/7024.pem"
	I0310 20:58:19.009272   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7024.pem
	I0310 20:58:19.099643   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 23:11 /usr/share/ca-certificates/7024.pem
	I0310 20:58:19.113570   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7024.pem
	I0310 20:58:19.202077   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7024.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:19.302024   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6368.pem && ln -fs /usr/share/ca-certificates/6368.pem /etc/ssl/certs/6368.pem"
	I0310 20:58:19.455161   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6368.pem
	I0310 20:58:19.515711   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 19:11 /usr/share/ca-certificates/6368.pem
	I0310 20:58:19.531734   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6368.pem
	I0310 20:58:19.618571   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6368.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:19.734757   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7440.pem && ln -fs /usr/share/ca-certificates/7440.pem /etc/ssl/certs/7440.pem"
	I0310 20:58:19.836003   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7440.pem
	I0310 20:58:19.886970   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 13 14:39 /usr/share/ca-certificates/7440.pem
	I0310 20:58:19.897290   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7440.pem
	I0310 20:58:19.960955   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7440.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:20.042543   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1476.pem && ln -fs /usr/share/ca-certificates/1476.pem /etc/ssl/certs/1476.pem"
	I0310 20:58:20.129813   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1476.pem
	I0310 20:58:20.180900   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:50 /usr/share/ca-certificates/1476.pem
	I0310 20:58:20.192669   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1476.pem
	I0310 20:58:20.244368   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1476.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:20.310130   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5372.pem && ln -fs /usr/share/ca-certificates/5372.pem /etc/ssl/certs/5372.pem"
	I0310 20:58:20.416774   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5372.pem
	I0310 20:58:20.450892   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 23 00:40 /usr/share/ca-certificates/5372.pem
	I0310 20:58:20.469156   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5372.pem
	I0310 20:58:20.517501   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5372.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:20.577100   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0310 20:58:20.648023   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0310 20:58:20.673056   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Jan  5 23:33 /usr/share/ca-certificates/minikubeCA.pem
	I0310 20:58:20.685424   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0310 20:58:20.797160   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0310 20:58:20.856689   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1728.pem && ln -fs /usr/share/ca-certificates/1728.pem /etc/ssl/certs/1728.pem"
	I0310 20:58:20.945358   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1728.pem
	I0310 20:58:20.979827   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 20:57 /usr/share/ca-certificates/1728.pem
	I0310 20:58:20.990248   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1728.pem
	I0310 20:58:21.055462   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:21.140311   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3516.pem && ln -fs /usr/share/ca-certificates/3516.pem /etc/ssl/certs/3516.pem"
	I0310 20:58:21.216304   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3516.pem
	I0310 20:58:21.250567   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 19:10 /usr/share/ca-certificates/3516.pem
	I0310 20:58:21.267710   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3516.pem
	I0310 20:58:21.350005   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3516.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:21.409757   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9088.pem && ln -fs /usr/share/ca-certificates/9088.pem /etc/ssl/certs/9088.pem"
	I0310 20:58:21.538985   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9088.pem
	I0310 20:58:21.572124   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 00:22 /usr/share/ca-certificates/9088.pem
	I0310 20:58:21.586486   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9088.pem
	I0310 20:58:21.680349   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9088.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:21.754520   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10992.pem && ln -fs /usr/share/ca-certificates/10992.pem /etc/ssl/certs/10992.pem"
	I0310 20:58:21.821477   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/10992.pem
	I0310 20:58:21.877110   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 21:44 /usr/share/ca-certificates/10992.pem
	I0310 20:58:21.888231   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10992.pem
	I0310 20:58:21.960856   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10992.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:22.063290   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1156.pem && ln -fs /usr/share/ca-certificates/1156.pem /etc/ssl/certs/1156.pem"
	I0310 20:58:22.134104   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1156.pem
	I0310 20:58:22.164006   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 00:26 /usr/share/ca-certificates/1156.pem
	I0310 20:58:22.175162   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1156.pem
	I0310 20:58:22.230824   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1156.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:22.318471   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2512.pem && ln -fs /usr/share/ca-certificates/2512.pem /etc/ssl/certs/2512.pem"
	I0310 20:58:22.429070   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2512.pem
	I0310 20:58:22.474351   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:32 /usr/share/ca-certificates/2512.pem
	I0310 20:58:22.485227   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2512.pem
	I0310 20:58:22.548171   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2512.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:22.611638   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4452.pem && ln -fs /usr/share/ca-certificates/4452.pem /etc/ssl/certs/4452.pem"
	I0310 20:58:22.703766   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4452.pem
	I0310 20:58:22.746073   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 22 23:48 /usr/share/ca-certificates/4452.pem
	I0310 20:58:22.759455   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4452.pem
	I0310 20:58:22.809528   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4452.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:22.889625   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4944.pem && ln -fs /usr/share/ca-certificates/4944.pem /etc/ssl/certs/4944.pem"
	I0310 20:58:22.961849   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4944.pem
	I0310 20:58:22.987465   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  9 23:40 /usr/share/ca-certificates/4944.pem
	I0310 20:58:23.002792   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4944.pem
	I0310 20:58:23.051089   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4944.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:23.101051   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8464.pem && ln -fs /usr/share/ca-certificates/8464.pem /etc/ssl/certs/8464.pem"
	I0310 20:58:23.182975   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8464.pem
	I0310 20:58:23.224377   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:32 /usr/share/ca-certificates/8464.pem
	I0310 20:58:23.234278   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8464.pem
	I0310 20:58:23.281531   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8464.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:23.341907   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4588.pem && ln -fs /usr/share/ca-certificates/4588.pem /etc/ssl/certs/4588.pem"
	I0310 20:58:23.430336   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4588.pem
	I0310 20:58:23.468519   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  3 21:41 /usr/share/ca-certificates/4588.pem
	I0310 20:58:23.488461   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4588.pem
	I0310 20:58:23.546130   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4588.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:23.759057   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8748.pem && ln -fs /usr/share/ca-certificates/8748.pem /etc/ssl/certs/8748.pem"
	I0310 20:58:23.862647   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8748.pem
	I0310 20:58:23.938421   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 19:09 /usr/share/ca-certificates/8748.pem
	I0310 20:58:23.949795   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8748.pem
	I0310 20:58:24.041732   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8748.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:24.152388   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7160.pem && ln -fs /usr/share/ca-certificates/7160.pem /etc/ssl/certs/7160.pem"
	I0310 20:58:24.272573   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7160.pem
	I0310 20:58:24.347323   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 12 04:51 /usr/share/ca-certificates/7160.pem
	I0310 20:58:24.357699   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7160.pem
	I0310 20:58:24.424622   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7160.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:24.524972   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5172.pem && ln -fs /usr/share/ca-certificates/5172.pem /etc/ssl/certs/5172.pem"
	I0310 20:58:24.699850   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5172.pem
	I0310 20:58:24.734180   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 26 21:25 /usr/share/ca-certificates/5172.pem
	I0310 20:58:24.752660   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5172.pem
	I0310 20:58:24.807301   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5172.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:24.866237   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5396.pem && ln -fs /usr/share/ca-certificates/5396.pem /etc/ssl/certs/5396.pem"
	I0310 20:58:24.940972   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5396.pem
	I0310 20:58:24.995516   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  8 23:38 /usr/share/ca-certificates/5396.pem
	I0310 20:58:25.007698   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5396.pem
	I0310 20:58:25.250165   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5396.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:25.318754   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1140.pem && ln -fs /usr/share/ca-certificates/1140.pem /etc/ssl/certs/1140.pem"
	I0310 20:58:25.400187   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1140.pem
	I0310 20:58:25.427485   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 02:25 /usr/share/ca-certificates/1140.pem
	I0310 20:58:25.447185   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1140.pem
	I0310 20:58:25.509550   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1140.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:25.592240   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5736.pem && ln -fs /usr/share/ca-certificates/5736.pem /etc/ssl/certs/5736.pem"
	I0310 20:58:25.646840   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5736.pem
	I0310 20:58:25.682763   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 25 23:18 /usr/share/ca-certificates/5736.pem
	I0310 20:58:25.699118   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5736.pem
	I0310 20:58:25.760642   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5736.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:25.846573   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5040.pem && ln -fs /usr/share/ca-certificates/5040.pem /etc/ssl/certs/5040.pem"
	I0310 20:58:25.924539   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5040.pem
	I0310 20:58:25.958850   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 08:36 /usr/share/ca-certificates/5040.pem
	I0310 20:58:25.968873   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5040.pem
	I0310 20:58:26.013942   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5040.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:26.084190   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5700.pem && ln -fs /usr/share/ca-certificates/5700.pem /etc/ssl/certs/5700.pem"
	I0310 20:58:26.145563   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5700.pem
	I0310 20:58:26.166862   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  1 19:58 /usr/share/ca-certificates/5700.pem
	I0310 20:58:26.176915   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5700.pem
	I0310 20:58:26.230569   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5700.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:26.289358   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/800.pem && ln -fs /usr/share/ca-certificates/800.pem /etc/ssl/certs/800.pem"
	I0310 20:58:26.375754   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/800.pem
	I0310 20:58:26.399861   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 24 01:48 /usr/share/ca-certificates/800.pem
	I0310 20:58:26.410592   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/800.pem
	I0310 20:58:26.470562   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/800.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:26.564575   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12056.pem && ln -fs /usr/share/ca-certificates/12056.pem /etc/ssl/certs/12056.pem"
	I0310 20:58:26.670798   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/12056.pem
	I0310 20:58:26.721583   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  6 07:21 /usr/share/ca-certificates/12056.pem
	I0310 20:58:26.733487   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12056.pem
	I0310 20:58:26.867768   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12056.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:26.946683   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4052.pem && ln -fs /usr/share/ca-certificates/4052.pem /etc/ssl/certs/4052.pem"
	I0310 20:58:27.029670   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4052.pem
	I0310 20:58:27.073815   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 18:40 /usr/share/ca-certificates/4052.pem
	I0310 20:58:27.095547   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4052.pem
	I0310 20:58:27.154881   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4052.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:27.216556   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6692.pem && ln -fs /usr/share/ca-certificates/6692.pem /etc/ssl/certs/6692.pem"
	I0310 20:58:27.294925   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6692.pem
	I0310 20:58:27.330284   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 20:42 /usr/share/ca-certificates/6692.pem
	I0310 20:58:27.349490   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6692.pem
	I0310 20:58:27.442629   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6692.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:27.513762   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/232.pem && ln -fs /usr/share/ca-certificates/232.pem /etc/ssl/certs/232.pem"
	I0310 20:58:27.607791   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/232.pem
	I0310 20:58:27.661357   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 28 02:13 /usr/share/ca-certificates/232.pem
	I0310 20:58:27.673519   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/232.pem
	I0310 20:58:27.744545   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/232.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:27.812185   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3920.pem && ln -fs /usr/share/ca-certificates/3920.pem /etc/ssl/certs/3920.pem"
	I0310 20:58:27.894281   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3920.pem
	I0310 20:58:27.924834   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 22:06 /usr/share/ca-certificates/3920.pem
	I0310 20:58:27.942064   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3920.pem
	I0310 20:58:27.998810   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3920.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:28.081988   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/352.pem && ln -fs /usr/share/ca-certificates/352.pem /etc/ssl/certs/352.pem"
	I0310 20:58:28.168483   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/352.pem
	I0310 20:58:28.206189   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 12 14:51 /usr/share/ca-certificates/352.pem
	I0310 20:58:28.217745   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/352.pem
	I0310 20:58:28.271114   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/352.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:28.342463   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6552.pem && ln -fs /usr/share/ca-certificates/6552.pem /etc/ssl/certs/6552.pem"
	I0310 20:58:28.441653   18752 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6552.pem
	I0310 20:58:28.483692   18752 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 19 22:08 /usr/share/ca-certificates/6552.pem
	I0310 20:58:28.499645   18752 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6552.pem
	I0310 20:58:28.576661   18752 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6552.pem /etc/ssl/certs/51391683.0"
	I0310 20:58:28.622272   18752 kubeadm.go:385] StartCluster: {Name:newest-cni-20210310205436-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.5-rc.0 ClusterName:newest-cni-20210310205436-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APISer
verIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.4 Port:8443 KubernetesVersion:v1.20.5-rc.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 20:58:28.631148   18752 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0310 20:58:29.061129   18752 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0310 20:58:29.136266   18752 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0310 20:58:29.200996   18752 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	I0310 20:58:29.211316   18752 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0310 20:58:29.383299   18752 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0310 20:58:29.383623   18752 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0310 21:03:12.521509   18752 out.go:150]   - Generating certificates and keys ...
	I0310 21:03:12.526671   18752 out.go:150]   - Booting up control plane ...
	W0310 21:03:12.558343   18752 out.go:191] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.5-rc.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-20210310205436-6496] and IPs [172.17.0.4 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-20210310205436-6496] and IPs [172.17.0.4 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0310 21:03:12.559023   18752 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0310 21:05:44.634974   18752 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force": (2m32.0763159s)
	I0310 21:05:44.654620   18752 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0310 21:05:45.943735   18752 ssh_runner.go:189] Completed: sudo systemctl stop -f kubelet: (1.288244s)
	I0310 21:05:45.949987   18752 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0310 21:05:48.792639   18752 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}: (2.8422942s)
	I0310 21:05:48.792639   18752 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	I0310 21:05:48.813156   18752 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0310 21:05:51.645949   18752 ssh_runner.go:189] Completed: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: (2.8327986s)
	I0310 21:05:51.646660   18752 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0310 21:05:51.646660   18752 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0310 21:10:41.801488   18752 out.go:150]   - Generating certificates and keys ...
	I0310 21:10:41.807624   18752 out.go:150]   - Booting up control plane ...
	I0310 21:10:41.813620   18752 out.go:150]   - Configuring RBAC rules ...
	I0310 21:10:41.817609   18752 cni.go:74] Creating CNI manager for ""
	I0310 21:10:41.817609   18752 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	I0310 21:10:41.817609   18752 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0310 21:10:41.829202   18752 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl label nodes minikube.k8s.io/version=v1.18.1 minikube.k8s.io/commit=4d52d607da107e3e4541e40b81376520ee87d4c2 minikube.k8s.io/name=newest-cni-20210310205436-6496 minikube.k8s.io/updated_at=2021_03_10T21_10_41_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0310 21:10:41.830088   18752 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0310 21:10:45.542527   18752 ssh_runner.go:189] Completed: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": (3.7249239s)
	I0310 21:10:45.542527   18752 ops.go:34] apiserver oom_adj: -16
	I0310 21:11:26.077018   18752 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl label nodes minikube.k8s.io/version=v1.18.1 minikube.k8s.io/commit=4d52d607da107e3e4541e40b81376520ee87d4c2 minikube.k8s.io/name=newest-cni-20210310205436-6496 minikube.k8s.io/updated_at=2021_03_10T21_10_41_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (44.2478827s)
	I0310 21:11:26.078034   18752 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (44.2480125s)
	I0310 21:11:26.096220   18752 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0310 21:11:54.253063   18752 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (28.156885s)
	I0310 21:11:54.778083   18752 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0310 21:12:18.063365   18752 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (23.2853151s)
	I0310 21:12:18.063365   18752 kubeadm.go:995] duration metric: took 1m36.2458987s to wait for elevateKubeSystemPrivileges.
	I0310 21:12:18.063365   18752 kubeadm.go:387] StartCluster complete in 13m49.4432602s
	I0310 21:12:18.063751   18752 settings.go:142] acquiring lock: {Name:mk153ab5d002fd4991700e22f3eda9a43ee295f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:12:18.064297   18752 settings.go:150] Updating kubeconfig:  C:\Users\jenkins/.kube/config
	I0310 21:12:18.070615   18752 lock.go:36] WriteFile acquiring C:\Users\jenkins/.kube/config: {Name:mk815881434fcb75cb1a8bc7f2c24cf067660bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:12:18.789118   18752 kapi.go:233] deployment "coredns" in namespace "kube-system" and context "newest-cni-20210310205436-6496" rescaled to 1
	I0310 21:12:18.789118   18752 start.go:203] Will wait 6m0s for node up to 
	I0310 21:12:18.789118   18752 addons.go:381] enableAddons start: toEnable=map[], additional=[]
	I0310 21:12:18.790703   18752 addons.go:58] Setting storage-provisioner=true in profile "newest-cni-20210310205436-6496"
	I0310 21:12:18.790703   18752 addons.go:134] Setting addon storage-provisioner=true in "newest-cni-20210310205436-6496"
	W0310 21:12:18.790703   18752 addons.go:143] addon storage-provisioner should already be in state true
	I0310 21:12:18.800360   18752 out.go:129] * Verifying Kubernetes components...
	I0310 21:12:18.795505   18752 addons.go:58] Setting default-storageclass=true in profile "newest-cni-20210310205436-6496"
	I0310 21:12:18.802833   18752 addons.go:284] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20210310205436-6496"
	I0310 21:12:18.792566   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106002159-6856 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856
	I0310 21:12:18.792566   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210119220838-6552 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552
	I0310 21:12:18.792566   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210128021318-232 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232
	I0310 21:12:18.792566   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304184021-4052 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052
	I0310 21:12:18.792566   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310083645-5040 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040
	I0310 21:12:18.792713   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310191609-6496 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496
	I0310 21:12:18.792713   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210301195830-5700 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700
	I0310 21:12:18.792713   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120214442-10992 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992
	I0310 21:12:18.792713   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219145454-9520 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520
	I0310 21:12:18.792713   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210303214129-4588 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588
	I0310 21:12:18.792713   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210105233232-2512 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512
	I0310 21:12:18.792713   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210212145109-352 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352
	I0310 21:12:18.792713   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210213143925-7440 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440
	I0310 21:12:18.792713   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106215525-1984 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984
	I0310 21:12:18.792713   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219220622-3920 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920
	I0310 21:12:18.792713   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210224014800-800 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800
	I0310 21:12:18.792713   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115191024-3516 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516
	I0310 21:12:18.792713   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210126212539-5172 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172
	I0310 21:12:18.792713   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210225231842-5736 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736
	I0310 21:12:18.792713   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210112045103-7160 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160
	I0310 21:12:18.792713   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210114204234-6692 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692
	I0310 21:12:18.792713   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210306072141-12056 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056
	I0310 21:12:18.792713   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210220004129-7452 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452
	I0310 21:12:18.792713   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210309234032-4944 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944
	I0310 21:12:18.792713   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115023213-8464 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464
	I0310 21:12:18.792713   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120022529-1140 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140
	I0310 21:12:18.792713   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120231122-7024 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024
	I0310 21:12:18.792713   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304002630-1156 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156
	I0310 21:12:18.792713   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107190945-8748 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748
	I0310 21:12:18.792713   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120175851-7432 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432
	I0310 21:12:18.792713   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210123004019-5372 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372
	I0310 21:12:18.792713   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106011107-6492 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492
	I0310 21:12:18.792713   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210308233820-5396 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396
	I0310 21:12:18.792713   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107002220-9088 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088
	I0310 21:12:18.792221   18752 host.go:66] Checking if "newest-cni-20210310205436-6496" exists ...
	I0310 21:12:18.917667   18752 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0310 21:12:19.440475   18752 cli_runner.go:115] Run: docker container inspect newest-cni-20210310205436-6496 --format={{.State.Status}}
	I0310 21:12:19.488864   18752 cli_runner.go:115] Run: docker container inspect newest-cni-20210310205436-6496 --format={{.State.Status}}
	I0310 21:12:19.847247   18752 cache.go:93] acquiring lock: {Name:mk74beba772a17b6c0792b37e1f3c84b8ae19a48 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:12:19.847808   18752 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 exists
	I0310 21:12:19.848755   18752 cache.go:82] cache image "minikube-local-cache-test:functional-20210119220838-6552" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210119220838-6552" took 1.0459236s
	I0310 21:12:19.848755   18752 cache.go:66] save to tar file minikube-local-cache-test:functional-20210119220838-6552 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 succeeded
	I0310 21:12:19.883573   18752 cache.go:93] acquiring lock: {Name:mk413751f23d1919a2f2162501025c6af3a2ad81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:12:19.884244   18752 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 exists
	I0310 21:12:19.885371   18752 cache.go:82] cache image "minikube-local-cache-test:functional-20210106002159-6856" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106002159-6856" took 1.0825394s
	I0310 21:12:19.888345   18752 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106002159-6856 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 succeeded
	I0310 21:12:19.888345   18752 cache.go:93] acquiring lock: {Name:mkbc5485bf0e792523a58cf470a7622695547966 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:12:19.888859   18752 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 exists
	I0310 21:12:19.889859   18752 cache.go:82] cache image "minikube-local-cache-test:functional-20210304184021-4052" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210304184021-4052" took 1.0870269s
	I0310 21:12:19.890263   18752 cache.go:66] save to tar file minikube-local-cache-test:functional-20210304184021-4052 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 succeeded
	I0310 21:12:19.931368   18752 cache.go:93] acquiring lock: {Name:mk0c64ba734a0cdbeae55b08bb0b1b6723a680c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:12:19.932181   18752 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 exists
	I0310 21:12:19.933426   18752 cache.go:82] cache image "minikube-local-cache-test:functional-20210310083645-5040" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210310083645-5040" took 1.1305947s
	I0310 21:12:19.933703   18752 cache.go:66] save to tar file minikube-local-cache-test:functional-20210310083645-5040 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 succeeded
	I0310 21:12:20.035692   18752 cache.go:93] acquiring lock: {Name:mkfe8ccab311cf6d2666a7508a8e979857b9770b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:12:20.035692   18752 cache.go:93] acquiring lock: {Name:mk634154e9c95d6e5b156154f097cbabdedf9f3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:12:20.036682   18752 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 exists
	I0310 21:12:20.038443   18752 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 exists
	I0310 21:12:20.038903   18752 cache.go:82] cache image "minikube-local-cache-test:functional-20210219145454-9520" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210219145454-9520" took 1.2299676s
	I0310 21:12:20.038903   18752 cache.go:66] save to tar file minikube-local-cache-test:functional-20210219145454-9520 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 succeeded
	I0310 21:12:20.039251   18752 cache.go:82] cache image "minikube-local-cache-test:functional-20210301195830-5700" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210301195830-5700" took 1.2338486s
	I0310 21:12:20.039251   18752 cache.go:66] save to tar file minikube-local-cache-test:functional-20210301195830-5700 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 succeeded
	I0310 21:12:20.126741   18752 ssh_runner.go:189] Completed: sudo systemctl is-active --quiet service kubelet: (1.2090759s)
	I0310 21:12:20.135690   18752 cache.go:93] acquiring lock: {Name:mkfbc537176e4a7054a8ff78a35c4c45ad4889d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:12:20.135690   18752 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 exists
	I0310 21:12:20.137697   18752 cache.go:82] cache image "minikube-local-cache-test:functional-20210310191609-6496" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210310191609-6496" took 1.3340087s
	I0310 21:12:20.137697   18752 cache.go:66] save to tar file minikube-local-cache-test:functional-20210310191609-6496 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 succeeded
	I0310 21:12:20.192218   18752 cache.go:93] acquiring lock: {Name:mka2d29141752ca0c15ce625b99d3e259a454634 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:12:20.192565   18752 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 exists
	I0310 21:12:20.193397   18752 cache.go:82] cache image "minikube-local-cache-test:functional-20210105233232-2512" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210105233232-2512" took 1.3830292s
	I0310 21:12:20.193397   18752 cache.go:66] save to tar file minikube-local-cache-test:functional-20210105233232-2512 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 succeeded
	I0310 21:12:20.205801   18752 cache.go:93] acquiring lock: {Name:mk17b3617b8bc7c68f0fe3347037485ee44000e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:12:20.205801   18752 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 exists
	I0310 21:12:20.206798   18752 cache.go:82] cache image "minikube-local-cache-test:functional-20210225231842-5736" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210225231842-5736" took 1.3896086s
	I0310 21:12:20.206798   18752 cache.go:66] save to tar file minikube-local-cache-test:functional-20210225231842-5736 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 succeeded
	I0310 21:12:20.230724   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:12:20.250904   18752 cache.go:93] acquiring lock: {Name:mkd8dd26dee4471c50a16459e3e56a843fbe7183 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:12:20.251025   18752 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 exists
	I0310 21:12:20.253958   18752 cache.go:82] cache image "minikube-local-cache-test:functional-20210120231122-7024" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120231122-7024" took 1.4211873s
	I0310 21:12:20.253958   18752 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120231122-7024 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 succeeded
	I0310 21:12:20.325978   18752 cache.go:93] acquiring lock: {Name:mk6cdb668632330066d74bea74662e26e6c7633f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:12:20.326402   18752 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 exists
	I0310 21:12:20.328453   18752 cache.go:82] cache image "minikube-local-cache-test:functional-20210106215525-1984" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106215525-1984" took 1.5156537s
	I0310 21:12:20.328453   18752 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106215525-1984 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 succeeded
	I0310 21:12:20.347211   18752 cache.go:93] acquiring lock: {Name:mkcc9db267470950a8bd1fd66660e4d7ce7fb11a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:12:20.348004   18752 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 exists
	I0310 21:12:20.348004   18752 cache.go:82] cache image "minikube-local-cache-test:functional-20210120175851-7432" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120175851-7432" took 1.5132163s
	I0310 21:12:20.348004   18752 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120175851-7432 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 succeeded
	I0310 21:12:20.349141   18752 cache.go:93] acquiring lock: {Name:mk5d79a216b121a22277fa476959e69d0268a006 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:12:20.349924   18752 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 exists
	I0310 21:12:20.350270   18752 cache.go:82] cache image "minikube-local-cache-test:functional-20210224014800-800" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210224014800-800" took 1.5361634s
	I0310 21:12:20.350270   18752 cache.go:66] save to tar file minikube-local-cache-test:functional-20210224014800-800 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 succeeded
	I0310 21:12:20.382516   18752 cache.go:93] acquiring lock: {Name:mk67b81c694fa10d152b7bddece57d430edf9ebf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:12:20.383006   18752 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 exists
	I0310 21:12:20.383532   18752 cache.go:82] cache image "minikube-local-cache-test:functional-20210308233820-5396" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210308233820-5396" took 1.5463497s
	I0310 21:12:20.383532   18752 cache.go:66] save to tar file minikube-local-cache-test:functional-20210308233820-5396 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 succeeded
	I0310 21:12:20.394984   18752 cache.go:93] acquiring lock: {Name:mk5795abf13cc8b7192a417aee0e32dee2b0467c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:12:20.395602   18752 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 exists
	I0310 21:12:20.396340   18752 cache.go:82] cache image "minikube-local-cache-test:functional-20210126212539-5172" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210126212539-5172" took 1.5796504s
	I0310 21:12:20.396340   18752 cache.go:66] save to tar file minikube-local-cache-test:functional-20210126212539-5172 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 succeeded
	I0310 21:12:20.427185   18752 cache.go:93] acquiring lock: {Name:mkf6f90f079186654799fde8101b48612aa6f339 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:12:20.428365   18752 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 exists
	I0310 21:12:20.428832   18752 cache.go:82] cache image "minikube-local-cache-test:functional-20210212145109-352" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210212145109-352" took 1.6179153s
	I0310 21:12:20.429097   18752 cache.go:66] save to tar file minikube-local-cache-test:functional-20210212145109-352 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 succeeded
	I0310 21:12:20.497154   18752 cache.go:93] acquiring lock: {Name:mk9829358ec5b615719a34ef2b4c8c5314131bbf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:12:20.498854   18752 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 exists
	I0310 21:12:20.499560   18752 cache.go:82] cache image "minikube-local-cache-test:functional-20210309234032-4944" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210309234032-4944" took 1.6694549s
	I0310 21:12:20.499560   18752 cache.go:66] save to tar file minikube-local-cache-test:functional-20210309234032-4944 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 succeeded
	I0310 21:12:20.511268   18752 cache.go:93] acquiring lock: {Name:mkd8c6f272dd5cb91af2d272705820baa75c5410 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:12:20.511268   18752 cache.go:93] acquiring lock: {Name:mk3b31b5d9c66e58bae5a84d594af5a71c06fef6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:12:20.511648   18752 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 exists
	I0310 21:12:20.512162   18752 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 exists
	I0310 21:12:20.512162   18752 cache.go:82] cache image "minikube-local-cache-test:functional-20210120214442-10992" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120214442-10992" took 1.7060272s
	I0310 21:12:20.512374   18752 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120214442-10992 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 succeeded
	I0310 21:12:20.512374   18752 cache.go:82] cache image "minikube-local-cache-test:functional-20210114204234-6692" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210114204234-6692" took 1.6894056s
	I0310 21:12:20.512741   18752 cache.go:66] save to tar file minikube-local-cache-test:functional-20210114204234-6692 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 succeeded
	I0310 21:12:20.541742   18752 cache.go:93] acquiring lock: {Name:mk84b2a6095b735cf889c519b5874f080b2e195a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:12:20.541742   18752 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 exists
	I0310 21:12:20.542736   18752 cache.go:82] cache image "minikube-local-cache-test:functional-20210219220622-3920" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210219220622-3920" took 1.7297608s
	I0310 21:12:20.542736   18752 cache.go:66] save to tar file minikube-local-cache-test:functional-20210219220622-3920 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 succeeded
	I0310 21:12:20.557766   18752 cache.go:93] acquiring lock: {Name:mkb0cb73f942a657cd3f168830d30cb3598567a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:12:20.558395   18752 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 exists
	I0310 21:12:20.558652   18752 cache.go:82] cache image "minikube-local-cache-test:functional-20210306072141-12056" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210306072141-12056" took 1.7301322s
	I0310 21:12:20.558652   18752 cache.go:66] save to tar file minikube-local-cache-test:functional-20210306072141-12056 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 succeeded
	I0310 21:12:20.568339   18752 cache.go:93] acquiring lock: {Name:mk1b277a131d0149dc1f34c6a5df09591c284c3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:12:20.569083   18752 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 exists
	I0310 21:12:20.569415   18752 cache.go:82] cache image "minikube-local-cache-test:functional-20210128021318-232" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210128021318-232" took 1.7665845s
	I0310 21:12:20.569415   18752 cache.go:66] save to tar file minikube-local-cache-test:functional-20210128021318-232 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 succeeded
	I0310 21:12:20.577739   18752 cache.go:93] acquiring lock: {Name:mk5de4935501776b790bd29801e913c817cce9cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:12:20.578501   18752 cache.go:93] acquiring lock: {Name:mk5aaf725ee95074b60d5acdb56999da11d0d967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:12:20.578936   18752 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 exists
	I0310 21:12:20.578936   18752 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 exists
	I0310 21:12:20.579292   18752 cache.go:82] cache image "minikube-local-cache-test:functional-20210123004019-5372" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210123004019-5372" took 1.7435683s
	I0310 21:12:20.579292   18752 cache.go:82] cache image "minikube-local-cache-test:functional-20210213143925-7440" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210213143925-7440" took 1.7677266s
	I0310 21:12:20.579292   18752 cache.go:66] save to tar file minikube-local-cache-test:functional-20210123004019-5372 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 succeeded
	I0310 21:12:20.579292   18752 cache.go:66] save to tar file minikube-local-cache-test:functional-20210213143925-7440 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 succeeded
	I0310 21:12:20.588761   18752 cache.go:93] acquiring lock: {Name:mkad0f7b57f74c6c730129cb06800211b2e1dbab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:12:20.588761   18752 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 exists
	I0310 21:12:20.589919   18752 cache.go:82] cache image "minikube-local-cache-test:functional-20210120022529-1140" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120022529-1140" took 1.7587043s
	I0310 21:12:20.589919   18752 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120022529-1140 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 succeeded
	I0310 21:12:20.596943   18752 cache.go:93] acquiring lock: {Name:mkf74fc1bdd437dc31195924ffc024252ed6282c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:12:20.597367   18752 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 exists
	I0310 21:12:20.597901   18752 cache.go:82] cache image "minikube-local-cache-test:functional-20210304002630-1156" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210304002630-1156" took 1.7642931s
	I0310 21:12:20.597901   18752 cache.go:66] save to tar file minikube-local-cache-test:functional-20210304002630-1156 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 succeeded
	I0310 21:12:20.621501   18752 cache.go:93] acquiring lock: {Name:mkb552f0ca2d9ea9965feba56885295e4020632a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:12:20.621501   18752 cache.go:93] acquiring lock: {Name:mkc9a1c11079e53fedb3439203deb8305be63b2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:12:20.621501   18752 cache.go:93] acquiring lock: {Name:mk6e311fb193a5d30b249afa7255673dd7fc56b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:12:20.622489   18752 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 exists
	I0310 21:12:20.622489   18752 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 exists
	I0310 21:12:20.622489   18752 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 exists
	I0310 21:12:20.622489   18752 cache.go:82] cache image "minikube-local-cache-test:functional-20210107002220-9088" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210107002220-9088" took 1.7848785s
	I0310 21:12:20.622489   18752 cache.go:66] save to tar file minikube-local-cache-test:functional-20210107002220-9088 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 succeeded
	I0310 21:12:20.622489   18752 cache.go:82] cache image "minikube-local-cache-test:functional-20210303214129-4588" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210303214129-4588" took 1.8125695s
	I0310 21:12:20.622489   18752 cache.go:66] save to tar file minikube-local-cache-test:functional-20210303214129-4588 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 succeeded
	I0310 21:12:20.622489   18752 cache.go:82] cache image "minikube-local-cache-test:functional-20210106011107-6492" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106011107-6492" took 1.7860556s
	I0310 21:12:20.622489   18752 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106011107-6492 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 succeeded
	I0310 21:12:20.629500   18752 cache.go:93] acquiring lock: {Name:mk30e0addf8d941e729fce2e9e6e58f4831fa9bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:12:20.629500   18752 cache.go:93] acquiring lock: {Name:mkf96894dc732adcd1c856f98a56d65b2646f03e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:12:20.629500   18752 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 exists
	I0310 21:12:20.630508   18752 cache.go:82] cache image "minikube-local-cache-test:functional-20210115191024-3516" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210115191024-3516" took 1.8155855s
	I0310 21:12:20.630508   18752 cache.go:66] save to tar file minikube-local-cache-test:functional-20210115191024-3516 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 succeeded
	I0310 21:12:20.629500   18752 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 exists
	I0310 21:12:20.630508   18752 cache.go:82] cache image "minikube-local-cache-test:functional-20210115023213-8464" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210115023213-8464" took 1.7996295s
	I0310 21:12:20.630508   18752 cache.go:66] save to tar file minikube-local-cache-test:functional-20210115023213-8464 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 succeeded
	I0310 21:12:20.639749   18752 cache.go:93] acquiring lock: {Name:mk6a939d4adc5b1a82c643cd3a34748a52c3e47d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:12:20.640027   18752 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 exists
	I0310 21:12:20.640288   18752 cache.go:82] cache image "minikube-local-cache-test:functional-20210112045103-7160" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210112045103-7160" took 1.822154s
	I0310 21:12:20.640288   18752 cache.go:66] save to tar file minikube-local-cache-test:functional-20210112045103-7160 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 succeeded
	I0310 21:12:20.653500   18752 cache.go:93] acquiring lock: {Name:mkab31196e3bf71b9c1e6a1e38e57ec6fb030bbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:12:20.653500   18752 cache.go:93] acquiring lock: {Name:mk3f9eb5a6922e3da2b5e642fe1460b5c7a33453 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:12:20.654589   18752 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 exists
	I0310 21:12:20.654589   18752 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 exists
	I0310 21:12:20.654589   18752 cache.go:82] cache image "minikube-local-cache-test:functional-20210107190945-8748" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210107190945-8748" took 1.8198016s
	I0310 21:12:20.654589   18752 cache.go:66] save to tar file minikube-local-cache-test:functional-20210107190945-8748 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 succeeded
	I0310 21:12:20.655273   18752 cache.go:82] cache image "minikube-local-cache-test:functional-20210220004129-7452" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210220004129-7452" took 1.8260698s
	I0310 21:12:20.656093   18752 cache.go:66] save to tar file minikube-local-cache-test:functional-20210220004129-7452 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 succeeded
	I0310 21:12:20.656093   18752 cache.go:73] Successfully saved all images to host disk.
	I0310 21:12:20.691134   18752 cli_runner.go:115] Run: docker container inspect newest-cni-20210310205436-6496 --format={{.State.Status}}
	I0310 21:12:20.980910   18752 cli_runner.go:168] Completed: docker container inspect newest-cni-20210310205436-6496 --format={{.State.Status}}: (1.5400142s)
	I0310 21:12:21.185686   18752 cli_runner.go:168] Completed: docker container inspect newest-cni-20210310205436-6496 --format={{.State.Status}}: (1.6968243s)
	I0310 21:12:21.189467   18752 out.go:129]   - Using image gcr.io/k8s-minikube/storage-provisioner:v4
	I0310 21:12:21.190077   18752 addons.go:253] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0310 21:12:21.190077   18752 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0310 21:12:21.190077   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:12:21.328685   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0933132s)
	I0310 21:12:21.359084   18752 api_server.go:48] waiting for apiserver process to appear ...
	I0310 21:12:21.369092   18752 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 21:12:21.532575   18752 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 21:12:21.540310   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:12:21.915570   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:12:22.000138   18752 addons.go:134] Setting addon default-storageclass=true in "newest-cni-20210310205436-6496"
	W0310 21:12:22.000138   18752 addons.go:143] addon default-storageclass should already be in state true
	I0310 21:12:22.000138   18752 host.go:66] Checking if "newest-cni-20210310205436-6496" exists ...
	I0310 21:12:22.020212   18752 cli_runner.go:115] Run: docker container inspect newest-cni-20210310205436-6496 --format={{.State.Status}}
	I0310 21:12:22.263586   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:12:22.704059   18752 addons.go:253] installing /etc/kubernetes/addons/storageclass.yaml
	I0310 21:12:22.704059   18752 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0310 21:12:22.713759   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:12:23.343008   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:12:26.757346   18752 ssh_runner.go:189] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.3882621s)
	I0310 21:12:26.758291   18752 api_server.go:68] duration metric: took 7.969184s to wait for apiserver process to appear ...
	I0310 21:12:26.758291   18752 api_server.go:84] waiting for apiserver healthz status ...
	I0310 21:12:26.758291   18752 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55158/healthz ...
	I0310 21:12:29.054784   18752 api_server.go:241] https://127.0.0.1:55158/healthz returned 200:
	ok
	I0310 21:12:29.122749   18752 api_server.go:137] control plane version: v1.20.5-rc.0
	I0310 21:12:29.122954   18752 api_server.go:127] duration metric: took 2.364667s to wait for apiserver health ...
	I0310 21:12:29.123059   18752 system_pods.go:41] waiting for kube-system pods to appear ...
	I0310 21:12:29.449869   18752 system_pods.go:57] 6 kube-system pods found
	I0310 21:12:29.449869   18752 system_pods.go:59] "coredns-74ff55c5b-vhpfw" [3c7d875a-5a68-4ecb-99ef-aae1e85751fb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0310 21:12:29.449869   18752 system_pods.go:59] "etcd-newest-cni-20210310205436-6496" [5f925457-3d0e-4e29-9af9-457da5812eb6] Running
	I0310 21:12:29.449869   18752 system_pods.go:59] "kube-apiserver-newest-cni-20210310205436-6496" [01147280-cc7f-4f43-95ef-e3822d5f4fea] Running
	I0310 21:12:29.449869   18752 system_pods.go:59] "kube-controller-manager-newest-cni-20210310205436-6496" [07995052-37ec-48e2-846c-09ab5be5107d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0310 21:12:29.449869   18752 system_pods.go:59] "kube-proxy-lkn7l" [99ebbca4-78bb-4a8f-bef8-737d1f54f6b2] Pending
	I0310 21:12:29.449869   18752 system_pods.go:59] "kube-scheduler-newest-cni-20210310205436-6496" [1f525b74-3961-4e2b-aa2a-1696c3af3d4f] Pending
	I0310 21:12:29.449869   18752 system_pods.go:72] duration metric: took 326.8096ms to wait for pod list to return data ...
	I0310 21:12:29.449869   18752 default_sa.go:33] waiting for default service account to be created ...
	I0310 21:12:29.585168   18752 default_sa.go:44] found service account: "default"
	I0310 21:12:29.585168   18752 default_sa.go:54] duration metric: took 135.2999ms for default service account to be created ...
	I0310 21:12:29.585168   18752 kubeadm.go:541] duration metric: took 10.7960656s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0310 21:12:29.585168   18752 node_conditions.go:101] verifying NodePressure condition ...
	I0310 21:12:29.839054   18752 node_conditions.go:121] node storage ephemeral capacity is 65792556Ki
	I0310 21:12:29.839437   18752 node_conditions.go:122] node cpu capacity is 4
	I0310 21:12:29.839622   18752 node_conditions.go:104] duration metric: took 254.4541ms to run NodePressure ...
	I0310 21:12:29.839622   18752 start.go:208] waiting for startup goroutines ...
	I0310 21:12:29.919698   18752 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0310 21:12:33.112855   18752 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0310 21:12:47.356289   18752 ssh_runner.go:189] Completed: docker images --format {{.Repository}}:{{.Tag}}: (25.8237496s)
	I0310 21:12:47.357507   18752 docker.go:423] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.20.5-rc.0
	k8s.gcr.io/kube-controller-manager:v1.20.5-rc.0
	k8s.gcr.io/kube-proxy:v1.20.5-rc.0
	k8s.gcr.io/kube-scheduler:v1.20.5-rc.0
	kubernetesui/dashboard:v2.1.0
	gcr.io/k8s-minikube/storage-provisioner:v4
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0310 21:12:47.357507   18752 docker.go:429] minikube-local-cache-test:functional-20210119220838-6552 wasn't preloaded
	I0310 21:12:47.357507   18752 cache_images.go:76] LoadImages start: [minikube-local-cache-test:functional-20210119220838-6552 minikube-local-cache-test:functional-20210128021318-232 minikube-local-cache-test:functional-20210304184021-4052 minikube-local-cache-test:functional-20210310083645-5040 minikube-local-cache-test:functional-20210310191609-6496 minikube-local-cache-test:functional-20210106002159-6856 minikube-local-cache-test:functional-20210120214442-10992 minikube-local-cache-test:functional-20210219145454-9520 minikube-local-cache-test:functional-20210303214129-4588 minikube-local-cache-test:functional-20210105233232-2512 minikube-local-cache-test:functional-20210212145109-352 minikube-local-cache-test:functional-20210213143925-7440 minikube-local-cache-test:functional-20210106215525-1984 minikube-local-cache-test:functional-20210219220622-3920 minikube-local-cache-test:functional-20210301195830-5700 minikube-local-cache-test:functional-20210115191024-3516 minikube-local-cache-test:functional-20210126212539-5172 minikube-local-cache-test:functional-20210225231842-5736 minikube-local-cache-test:functional-20210112045103-7160 minikube-local-cache-test:functional-20210114204234-6692 minikube-local-cache-test:functional-20210224014800-800 minikube-local-cache-test:functional-20210120231122-7024 minikube-local-cache-test:functional-20210220004129-7452 minikube-local-cache-test:functional-20210306072141-12056 minikube-local-cache-test:functional-20210309234032-4944 minikube-local-cache-test:functional-20210115023213-8464 minikube-local-cache-test:functional-20210120022529-1140 minikube-local-cache-test:functional-20210120175851-7432 minikube-local-cache-test:functional-20210123004019-5372 minikube-local-cache-test:functional-20210304002630-1156 minikube-local-cache-test:functional-20210308233820-5396 minikube-local-cache-test:functional-20210106011107-6492 minikube-local-cache-test:functional-20210107002220-9088 minikube-local-cache-test:functional-20210107190945-8748]
	I0310 21:12:47.560647   18752 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210304002630-1156
	I0310 21:12:47.573296   18752 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210309234032-4944
	I0310 21:12:47.591639   18752 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120175851-7432
	I0310 21:12:47.593746   18752 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210304184021-4052
	I0310 21:12:47.623345   18752 image.go:168] retrieving image: minikube-local-cache-test:functional-20210106215525-1984
	I0310 21:12:47.639327   18752 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210308233820-5396
	I0310 21:12:47.639327   18752 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210303214129-4588
	I0310 21:12:47.657974   18752 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210119220838-6552
	I0310 21:12:47.674059   18752 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210306072141-12056
	I0310 21:12:47.697841   18752 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210213143925-7440
	I0310 21:12:47.728111   18752 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210220004129-7452
	I0310 21:12:47.747321   18752 image.go:168] retrieving image: minikube-local-cache-test:functional-20210106011107-6492
	I0310 21:12:47.784534   18752 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210224014800-800
	I0310 21:12:47.805452   18752 image.go:168] retrieving image: minikube-local-cache-test:functional-20210107190945-8748
	I0310 21:12:47.848672   18752 image.go:168] retrieving image: minikube-local-cache-test:functional-20210106002159-6856
	I0310 21:12:47.868695   18752 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120214442-10992
	I0310 21:12:47.870760   18752 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210126212539-5172
	I0310 21:12:47.902259   18752 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210115191024-3516
	I0310 21:12:47.906814   18752 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210106002159-6856: Error response from daemon: reference does not exist
	I0310 21:12:47.923463   18752 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210123004019-5372
	I0310 21:12:47.928202   18752 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120022529-1140
	I0310 21:12:47.956178   18752 image.go:168] retrieving image: minikube-local-cache-test:functional-20210105233232-2512
	I0310 21:12:47.963166   18752 image.go:168] retrieving image: minikube-local-cache-test:functional-20210112045103-7160
	I0310 21:12:47.998199   18752 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210112045103-7160: Error response from daemon: reference does not exist
	I0310 21:12:48.064967   18752 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210219145454-9520
	I0310 21:12:48.074644   18752 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210225231842-5736
	I0310 21:12:48.088452   18752 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210106215525-1984: Error response from daemon: reference does not exist
	W0310 21:12:48.092246   18752 image.go:185] authn lookup for minikube-local-cache-test:functional-20210106002159-6856 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 21:12:48.108649   18752 image.go:168] retrieving image: minikube-local-cache-test:functional-20210107002220-9088
	I0310 21:12:48.112658   18752 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120231122-7024
	I0310 21:12:48.135760   18752 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210310083645-5040
	I0310 21:12:48.151248   18752 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210114204234-6692
	W0310 21:12:48.174652   18752 image.go:185] authn lookup for minikube-local-cache-test:functional-20210112045103-7160 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 21:12:48.195793   18752 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210128021318-232
	I0310 21:12:48.226996   18752 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210212145109-352
	I0310 21:12:48.245445   18752 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210310191609-6496
	I0310 21:12:48.257949   18752 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210115023213-8464
	I0310 21:12:48.267306   18752 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210105233232-2512: Error response from daemon: reference does not exist
	W0310 21:12:48.277733   18752 image.go:185] authn lookup for minikube-local-cache-test:functional-20210106215525-1984 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 21:12:48.351732   18752 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210106011107-6492: Error response from daemon: reference does not exist
	I0310 21:12:48.370641   18752 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210219220622-3920
	I0310 21:12:48.388773   18752 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210107190945-8748: Error response from daemon: reference does not exist
	I0310 21:12:48.388773   18752 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210107002220-9088: Error response from daemon: reference does not exist
	W0310 21:12:48.443070   18752 image.go:185] authn lookup for minikube-local-cache-test:functional-20210105233232-2512 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 21:12:48.451259   18752 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210112045103-7160 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210112045103-7160: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 21:12:48.451715   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210112045103-7160" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210112045103-7160
	I0310 21:12:48.451913   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210112045103-7160 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160
	I0310 21:12:48.451913   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160
	I0310 21:12:48.465810   18752 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210106002159-6856 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210106002159-6856: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 21:12:48.465810   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210106002159-6856" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210106002159-6856
	I0310 21:12:48.465810   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106002159-6856 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856
	I0310 21:12:48.465810   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856
	I0310 21:12:48.473883   18752 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210301195830-5700
	I0310 21:12:48.480057   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856
	I0310 21:12:48.480057   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160
	I0310 21:12:48.491637   18752 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210106215525-1984 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210106215525-1984: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 21:12:48.492632   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210106215525-1984" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210106215525-1984
	I0310 21:12:48.492632   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106215525-1984 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984
	I0310 21:12:48.492632   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984
	I0310 21:12:48.519108   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984
	W0310 21:12:48.538349   18752 image.go:185] authn lookup for minikube-local-cache-test:functional-20210106011107-6492 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0310 21:12:48.555402   18752 image.go:185] authn lookup for minikube-local-cache-test:functional-20210107002220-9088 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0310 21:12:48.580997   18752 image.go:185] authn lookup for minikube-local-cache-test:functional-20210107190945-8748 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 21:12:48.610702   18752 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210105233232-2512 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210105233232-2512: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 21:12:48.610702   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210105233232-2512" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210105233232-2512
	I0310 21:12:48.611057   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210105233232-2512 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512
	I0310 21:12:48.611338   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512
	I0310 21:12:48.631430   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512
	I0310 21:12:48.660144   18752 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210106011107-6492 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210106011107-6492: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 21:12:48.660348   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210106011107-6492" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210106011107-6492
	I0310 21:12:48.660668   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106011107-6492 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492
	I0310 21:12:48.660668   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492
	I0310 21:12:48.672287   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492
	I0310 21:12:48.681108   18752 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210107002220-9088 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210107002220-9088: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 21:12:48.682325   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210107002220-9088" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210107002220-9088
	I0310 21:12:48.682769   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107002220-9088 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088
	I0310 21:12:48.682769   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088
	I0310 21:12:48.689153   18752 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210107190945-8748 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210107190945-8748: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 21:12:48.689153   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210107190945-8748" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210107190945-8748
	I0310 21:12:48.689574   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107190945-8748 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748
	I0310 21:12:48.689574   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748
	I0310 21:12:48.701837   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748
	I0310 21:12:48.702845   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088
	W0310 21:12:51.262622   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:12:51.262622   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:12:51.262622   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210120214442-10992" needs transfer: "minikube-local-cache-test:functional-20210120214442-10992" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:12:51.262622   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120214442-10992 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992
	I0310 21:12:51.263118   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992
	W0310 21:12:51.263118   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:12:51.263118   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:12:51.263118   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210123004019-5372" needs transfer: "minikube-local-cache-test:functional-20210123004019-5372" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:12:51.263118   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210120231122-7024" needs transfer: "minikube-local-cache-test:functional-20210120231122-7024" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:12:51.263118   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210123004019-5372 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372
	I0310 21:12:51.263118   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120231122-7024 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024
	I0310 21:12:51.263118   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372
	I0310 21:12:51.263118   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024
	W0310 21:12:51.263344   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:12:51.263344   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210224014800-800" needs transfer: "minikube-local-cache-test:functional-20210224014800-800" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:12:51.263344   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210224014800-800 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800
	I0310 21:12:51.263608   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800
	W0310 21:12:51.263608   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:12:51.263773   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088: NewSession: ssh: rejected: connect failed (open failed)
	W0310 21:12:51.263118   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:12:51.264568   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210126212539-5172" needs transfer: "minikube-local-cache-test:functional-20210126212539-5172" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	W0310 21:12:51.263118   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:12:51.263118   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:12:51.263118   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:12:51.263118   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:12:51.263118   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:12:51.262622   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:12:51.263773   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:12:51.263773   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:12:51.263773   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:12:51.263773   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:12:51.263773   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:12:51.263773   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:12:51.263773   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:12:51.264031   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:12:51.264031   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:12:51.263118   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210115023213-8464" needs transfer: "minikube-local-cache-test:functional-20210115023213-8464" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	W0310 21:12:51.263118   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:12:51.264031   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:12:51.264395   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088 (4096 bytes)
	I0310 21:12:51.265396   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856: NewSession: ssh: rejected: connect failed (open failed)
	I0310 21:12:51.265396   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210219220622-3920" needs transfer: "minikube-local-cache-test:functional-20210219220622-3920" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:12:51.265396   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219220622-3920 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920
	I0310 21:12:51.265396   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920
	I0310 21:12:51.265396   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856 (4096 bytes)
	I0310 21:12:51.265689   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210301195830-5700" needs transfer: "minikube-local-cache-test:functional-20210301195830-5700" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:12:51.265800   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210301195830-5700 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700
	I0310 21:12:51.265800   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700
	I0310 21:12:51.265800   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115023213-8464 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464
	I0310 21:12:51.266048   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464
	I0310 21:12:51.265396   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984: NewSession: ssh: rejected: connect failed (open failed)
	I0310 21:12:51.265103   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210126212539-5172 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172
	I0310 21:12:51.266741   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984 (4096 bytes)
	I0310 21:12:51.265103   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210212145109-352" needs transfer: "minikube-local-cache-test:functional-20210212145109-352" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:12:51.265103   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210128021318-232" needs transfer: "minikube-local-cache-test:functional-20210128021318-232" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:12:51.265103   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210310191609-6496" needs transfer: "minikube-local-cache-test:functional-20210310191609-6496" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:12:51.265103   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210225231842-5736" needs transfer: "minikube-local-cache-test:functional-20210225231842-5736" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:12:51.265103   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210120022529-1140" needs transfer: "minikube-local-cache-test:functional-20210120022529-1140" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:12:51.265396   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210114204234-6692" needs transfer: "minikube-local-cache-test:functional-20210114204234-6692" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:12:51.265396   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210115191024-3516" needs transfer: "minikube-local-cache-test:functional-20210115191024-3516" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:12:51.265396   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210310083645-5040" needs transfer: "minikube-local-cache-test:functional-20210310083645-5040" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:12:51.265396   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160: NewSession: ssh: rejected: connect failed (open failed)
	I0310 21:12:51.265103   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512: NewSession: ssh: rejected: connect failed (open failed)
	I0310 21:12:51.267060   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172
	I0310 21:12:51.267060   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512 (4096 bytes)
	I0310 21:12:51.267060   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492: NewSession: ssh: rejected: connect failed (open failed)
	I0310 21:12:51.267060   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210114204234-6692 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692
	I0310 21:12:51.278344   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692
	I0310 21:12:51.267060   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115191024-3516 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516
	I0310 21:12:51.279628   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210219145454-9520" needs transfer: "minikube-local-cache-test:functional-20210219145454-9520" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:12:51.279941   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219145454-9520 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520
	I0310 21:12:51.279941   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520
	I0310 21:12:51.267060   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310083645-5040 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040
	I0310 21:12:51.280346   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040
	I0310 21:12:51.280554   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160 (4096 bytes)
	I0310 21:12:51.282162   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120022529-1140 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140
	I0310 21:12:51.282162   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140
	I0310 21:12:51.266741   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748: NewSession: ssh: rejected: connect failed (open failed)
	I0310 21:12:51.282481   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210225231842-5736 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736
	I0310 21:12:51.282481   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736
	I0310 21:12:51.282680   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748 (4096 bytes)
	I0310 21:12:51.282680   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492 (4096 bytes)
	I0310 21:12:51.283252   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516
	I0310 21:12:51.282162   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210212145109-352 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352
	I0310 21:12:51.284541   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352
	I0310 21:12:51.282481   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210128021318-232 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232
	I0310 21:12:51.284736   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232
	I0310 21:12:51.282481   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310191609-6496 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496
	I0310 21:12:51.285901   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496
	I0310 21:12:51.469515   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024
	I0310 21:12:51.484589   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372
	I0310 21:12:51.533483   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992
	I0310 21:12:51.629820   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692
	I0310 21:12:51.706342   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800
	I0310 21:12:51.707710   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:12:51.708397   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:12:51.708896   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:12:51.732967   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:12:51.733363   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:12:51.733561   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:12:51.743681   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700
	I0310 21:12:51.745735   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:12:51.753348   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172
	I0310 21:12:51.755571   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736
	I0310 21:12:51.756103   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040
	I0310 21:12:51.762547   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232
	I0310 21:12:51.775210   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920
	I0310 21:12:51.775919   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496
	I0310 21:12:51.800100   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464
	I0310 21:12:51.810091   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516
	I0310 21:12:51.813706   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352
	I0310 21:12:51.813706   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:12:51.813706   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520
	I0310 21:12:51.813706   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140
	I0310 21:12:51.820771   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:12:51.827809   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:12:51.860588   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:12:51.868441   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:12:51.886285   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:12:51.908114   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:12:51.919653   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:12:51.930053   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:12:51.930053   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:12:51.945930   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:12:51.949437   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:12:51.952434   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:12:51.948834   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:12:51.953370   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:12:51.975870   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:12:51.975870   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:12:53.395822   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.662745s)
	I0310 21:12:53.396719   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:12:53.485017   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.6713142s)
	I0310 21:12:53.485539   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:12:53.496519   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.7629602s)
	I0310 21:12:53.496793   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:12:53.512111   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.6511935s)
	I0310 21:12:53.512111   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:12:53.541713   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.7209444s)
	I0310 21:12:53.541952   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:12:53.590524   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8569656s)
	I0310 21:12:53.590700   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:12:53.621272   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.7934655s)
	I0310 21:12:53.621687   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:12:53.642607   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.664234s)
	I0310 21:12:53.642607   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:12:53.668750   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8003122s)
	I0310 21:12:53.668963   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:12:53.688746   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.9810395s)
	I0310 21:12:53.689097   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:12:53.758147   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8047789s)
	I0310 21:12:53.758745   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:12:53.759520   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8398693s)
	I0310 21:12:53.760251   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:12:53.770222   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8401715s)
	I0310 21:12:53.770667   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:12:53.785284   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8990014s)
	I0310 21:12:53.785776   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:12:53.817210   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.8868726s)
	I0310 21:12:53.817447   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:12:53.818574   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0728419s)
	I0310 21:12:53.818919   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:12:53.843754   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.13536s)
	I0310 21:12:53.844110   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:12:53.883965   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.1750717s)
	I0310 21:12:53.884398   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:12:53.958209   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0084306s)
	I0310 21:12:53.959831   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0089094s)
	I0310 21:12:53.959831   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:12:53.960273   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:12:53.966133   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0137022s)
	I0310 21:12:53.968446   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:12:53.994078   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0407109s)
	I0310 21:12:53.994078   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:12:54.009966   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.0316693s)
	I0310 21:12:54.009966   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (2.1018557s)
	I0310 21:12:54.010638   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:12:54.010638   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:12:57.361413   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210120175851-7432" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:12:57.361612   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210303214129-4588" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:12:57.361612   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210303214129-4588 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588
	I0310 21:12:57.361612   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588
	I0310 21:12:57.361612   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210308233820-5396" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:12:57.361612   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210304002630-1156" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:12:57.361808   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210119220838-6552" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:12:57.361808   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210119220838-6552 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552
	I0310 21:12:57.361612   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210308233820-5396 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396
	I0310 21:12:57.361808   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396
	I0310 21:12:57.361808   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210309234032-4944" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:12:57.361808   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210309234032-4944 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944
	I0310 21:12:57.361808   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944
	I0310 21:12:57.361808   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210306072141-12056" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:12:57.362006   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210306072141-12056 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056
	I0310 21:12:57.362006   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056
	I0310 21:12:57.361612   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120175851-7432 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432
	I0310 21:12:57.362270   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432
	I0310 21:12:57.361808   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304002630-1156 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156
	I0310 21:12:57.362625   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156
	I0310 21:12:57.361612   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210220004129-7452" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:12:57.363064   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210220004129-7452 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452
	I0310 21:12:57.363064   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452
	I0310 21:12:57.361413   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210213143925-7440" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:12:57.363337   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210213143925-7440 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440
	I0310 21:12:57.361612   18752 cache_images.go:104] "minikube-local-cache-test:functional-20210304184021-4052" needs transfer: needs transfer timed out in 10.000000 seconds
	I0310 21:12:57.361808   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552
	I0310 21:12:57.364314   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440
	I0310 21:12:57.364462   18752 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304184021-4052 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052
	I0310 21:12:57.367708   18752 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052
	I0310 21:12:57.466731   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396
	I0310 21:12:57.476748   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:12:57.519791   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588
	I0310 21:12:57.540705   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:12:57.552653   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056
	I0310 21:12:57.560773   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452
	I0310 21:12:57.562999   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:12:57.563922   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156
	I0310 21:12:57.566901   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944
	I0310 21:12:57.579023   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:12:57.579023   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432
	I0310 21:12:57.590099   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:12:57.591785   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440
	I0310 21:12:57.592789   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552
	I0310 21:12:57.596786   18752 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052
	I0310 21:12:57.597202   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:12:57.601772   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:12:57.608041   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:12:57.617147   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:12:57.625602   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:12:58.454521   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:12:58.501263   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0245158s)
	I0310 21:12:58.501555   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:12:58.513022   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:12:58.563355   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0226521s)
	I0310 21:12:58.563355   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:12:58.637870   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0294542s)
	I0310 21:12:58.638141   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:12:58.653923   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0281021s)
	I0310 21:12:58.653923   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:12:58.679155   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0890575s)
	I0310 21:12:58.679155   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:12:58.686201   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.0690557s)
	I0310 21:12:58.687282   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:12:58.753102   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.1513309s)
	I0310 21:12:58.753102   18752 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496: (1.1559017s)
	I0310 21:12:58.753709   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:12:58.753709   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	W0310 21:13:04.744994   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:13:04.744994   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:13:04.745224   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736 (4096 bytes)
	I0310 21:13:04.758785   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	W0310 21:13:05.269849   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:13:05.269849   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:13:05.269849   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496 (4096 bytes)
	I0310 21:13:05.285833   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:13:05.377486   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:13:05.926683   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	W0310 21:13:10.795233   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	W0310 21:13:15.085213   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:13:15.085485   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:13:15.085915   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232 (4096 bytes)
	I0310 21:13:15.101929   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:13:15.738383   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	W0310 21:13:18.491146   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:13:18.491490   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:13:18.493152   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140 (4096 bytes)
	I0310 21:13:18.512811   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:13:19.150277   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	W0310 21:13:19.460446   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:13:19.460446   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:13:19.461481   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516 (4096 bytes)
	I0310 21:13:19.466948   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:13:20.136532   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	W0310 21:13:22.320407   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:13:22.320807   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:13:22.321052   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172 (4096 bytes)
	I0310 21:13:22.321052   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	W0310 21:13:22.804709   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:13:22.804709   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:13:22.804709   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056 (4096 bytes)
	I0310 21:13:22.814336   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:13:23.012763   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:13:23.412759   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	W0310 21:13:25.084062   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:13:25.084219   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:13:25.084994   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156 (4096 bytes)
	I0310 21:13:25.102916   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	W0310 21:13:25.205188   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:13:25.205188   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:13:25.205631   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452 (4096 bytes)
	W0310 21:13:25.207930   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:13:25.207930   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:13:25.207930   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432 (4096 bytes)
	I0310 21:13:25.214321   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:13:25.217320   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:13:25.840746   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:13:25.953687   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:13:25.974626   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	W0310 21:13:26.192805   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	W0310 21:13:26.864421   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	W0310 21:13:30.436709   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	W0310 21:13:30.612292   18752 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:16:06.576487   18752 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492
	I0310 21:16:06.594836   18752 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492
	I0310 21:19:02.316430   18752 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6m29.2041194s)
	I0310 21:19:02.316430   18752 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6m32.3972814s)
	I0310 21:19:02.316430   18752 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210308233820-5396: (6m14.6776274s)
	I0310 21:19:02.316813   18752 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210309234032-4944: (6m14.7440415s)
	I0310 21:19:02.316813   18752 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210306072141-12056: (6m14.6432792s)
	I0310 21:19:02.316430   18752 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210304002630-1156: (6m14.756307s)
	I0310 21:19:02.316813   18752 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210304184021-4052: (6m14.723592s)
	I0310 21:19:02.317365   18752 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210119220838-6552: (6m14.6599159s)
	I0310 21:19:02.317365   18752 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210213143925-7440: (6m14.6197936s)
	I0310 21:19:02.517550   18752 out.go:129] * Enabled addons: default-storageclass, storage-provisioner
	I0310 21:19:02.316813   18752 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120175851-7432: (6m14.7256994s)
	I0310 21:19:02.517550   18752 addons.go:383] enableAddons completed in 6m43.7289972s
	I0310 21:19:02.317365   18752 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210303214129-4588: (6m14.6785629s)
	I0310 21:19:02.317812   18752 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210220004129-7452: (6m14.590226s)
	I0310 21:19:02.317812   18752 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372: (6m10.833743s)
	I0310 21:19:02.318241   18752 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692: (6m10.6889397s)
	I0310 21:19:02.318241   18752 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520: (6m10.5050539s)
	I0310 21:19:02.318241   18752 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040: (6m10.5626569s)
	I0310 21:19:02.318241   18752 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920: (6m10.5435495s)
	I0310 21:19:02.318241   18752 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992: (6m10.784735s)
	I0310 21:19:02.318670   18752 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352: (6m10.5054829s)
	I0310 21:19:02.318670   18752 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024: (6m10.8496742s)
	I0310 21:19:02.318670   18752 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800: (6m10.6128462s)
	I0310 21:19:02.318670   18752 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700: (6m10.5755077s)
	I0310 21:19:02.318670   18752 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464: (6m10.5188578s)
	I0310 21:19:02.318670   18752 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588: (6m4.7950892s)
	I0310 21:19:02.319060   18752 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944: (6m4.7526703s)
	I0310 21:19:02.319060   18752 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440: (6m4.7277856s)
	I0310 21:19:02.319060   18752 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052: (6m4.7226448s)
	I0310 21:19:02.319060   18752 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396: (6m4.8528403s)
	I0310 21:19:02.319060   18752 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552: (6m4.7267817s)
	I0310 21:19:02.319473   18752 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492: (2m55.7248827s)
	I0310 21:19:02.518819   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372': No such file or directory
	I0310 21:19:02.518819   18752 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 from cache
	I0310 21:19:02.519167   18752 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856
	I0310 21:19:02.519167   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800': No such file or directory
	I0310 21:19:02.519167   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372 (4096 bytes)
	I0310 21:19:02.519167   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944': No such file or directory
	I0310 21:19:02.519578   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944 (4096 bytes)
	I0310 21:19:02.519578   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052': No such file or directory
	I0310 21:19:02.519578   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440': No such file or directory
	I0310 21:19:02.519578   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800 (4096 bytes)
	I0310 21:19:02.519578   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464': No such file or directory
	I0310 21:19:02.520101   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052 (4096 bytes)
	I0310 21:19:02.519578   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700': No such file or directory
	I0310 21:19:02.520101   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464 (4096 bytes)
	I0310 21:19:02.520101   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700 (4096 bytes)
	I0310 21:19:02.519578   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552': No such file or directory
	I0310 21:19:02.519578   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692': No such file or directory
	I0310 21:19:02.520782   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552 (4096 bytes)
	I0310 21:19:02.519578   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920': No such file or directory
	I0310 21:19:02.519578   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396': No such file or directory
	I0310 21:19:02.519578   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992': No such file or directory
	I0310 21:19:02.519578   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040': No such file or directory
	I0310 21:19:02.519578   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520': No such file or directory
	I0310 21:19:02.519578   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352': No such file or directory
	I0310 21:19:02.519578   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588': No such file or directory
	I0310 21:19:02.519578   18752 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024': No such file or directory
	I0310 21:19:02.520101   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440 (4096 bytes)
	I0310 21:19:02.520959   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396 (4096 bytes)
	I0310 21:19:02.520959   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352 (4096 bytes)
	I0310 21:19:02.520959   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520 (4096 bytes)
	I0310 21:19:02.520959   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992 (4096 bytes)
	I0310 21:19:02.520959   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588 (4096 bytes)
	I0310 21:19:02.521722   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920 (4096 bytes)
	I0310 21:19:02.520782   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692 (4096 bytes)
	I0310 21:19:02.520959   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024 (4096 bytes)
	I0310 21:19:02.521722   18752 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040 (4096 bytes)
	I0310 21:19:02.531992   18752 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856
	W0310 21:19:02.708609   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:19:02.708609   18752 retry.go:31] will retry after 276.165072ms: ssh: rejected: connect failed (open failed)
	W0310 21:19:02.708609   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:19:02.708609   18752 retry.go:31] will retry after 360.127272ms: ssh: rejected: connect failed (open failed)
	W0310 21:19:02.708609   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:19:02.708609   18752 retry.go:31] will retry after 291.140013ms: ssh: rejected: connect failed (open failed)
	W0310 21:19:02.708609   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:19:02.709185   18752 retry.go:31] will retry after 234.428547ms: ssh: rejected: connect failed (open failed)
	W0310 21:19:02.709185   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:19:02.709185   18752 retry.go:31] will retry after 231.159374ms: ssh: rejected: connect failed (open failed)
	W0310 21:19:02.709185   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:19:02.709185   18752 retry.go:31] will retry after 296.705768ms: ssh: rejected: connect failed (open failed)
	W0310 21:19:02.709185   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:19:02.709185   18752 retry.go:31] will retry after 141.409254ms: ssh: rejected: connect failed (open failed)
	W0310 21:19:02.709185   18752 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:19:02.709436   18752 retry.go:31] will retry after 164.129813ms: ssh: rejected: connect failed (open failed)
	I0310 21:19:02.862093   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:19:02.881244   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:19:02.948464   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:19:02.948464   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:19:03.016586   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:19:03.021769   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:19:03.024425   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:19:03.083898   18752 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20210310205436-6496
	I0310 21:19:03.803895   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:19:03.832271   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:19:03.860832   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:19:03.916312   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:19:03.918858   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:19:03.941149   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:19:03.982095   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:19:03.994302   18752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55161 SSHKeyPath:C:\Users\jenkins\.minikube\machines\newest-cni-20210310205436-6496\id_rsa Username:docker}
	I0310 21:19:46.600720   18752 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856: (44.0680779s)
	I0310 21:19:46.600720   18752 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 from cache
	I0310 21:19:46.600955   18752 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748
	I0310 21:19:46.616637   18752 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748
	I0310 21:20:07.091206   18752 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748: (20.4745956s)
	I0310 21:20:07.092206   18752 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 from cache
	I0310 21:20:07.092412   18752 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512
	I0310 21:20:07.108291   18752 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512
	I0310 21:20:30.266769   18752 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512: (23.1585086s)
	I0310 21:20:30.267299   18752 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 from cache
	I0310 21:20:30.267299   18752 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984
	I0310 21:20:30.277367   18752 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984
	I0310 21:21:20.057282   18752 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984: (49.7799794s)
	I0310 21:21:20.057282   18752 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 from cache
	I0310 21:21:20.057639   18752 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088
	I0310 21:21:20.066298   18752 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088
	I0310 21:21:43.260843   18752 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088: (23.1943659s)
	I0310 21:21:43.261127   18752 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 from cache
	I0310 21:21:43.261127   18752 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432
	I0310 21:21:43.270290   18752 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432
	I0310 21:22:06.633173   18752 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432: (23.3629134s)
	I0310 21:22:06.633404   18752 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 from cache
	I0310 21:22:06.633404   18752 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232
	I0310 21:22:06.641408   18752 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232
	I0310 21:22:20.793892   18752 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232: (14.1522317s)
	I0310 21:22:20.794091   18752 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 from cache
	I0310 21:22:20.794091   18752 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056
	I0310 21:22:20.810528   18752 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056
	I0310 21:22:59.326311   18752 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056: (38.5158326s)
	I0310 21:22:59.326311   18752 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 from cache
	I0310 21:22:59.326311   18752 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452
	I0310 21:22:59.333220   18752 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452
	I0310 21:23:22.968815   18752 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452: (23.6351909s)
	I0310 21:23:22.968815   18752 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 from cache
	I0310 21:23:22.968815   18752 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156
	I0310 21:23:22.976991   18752 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156
	I0310 21:23:47.823942   18752 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156: (24.8469828s)
	I0310 21:23:47.823942   18752 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 from cache
	I0310 21:23:47.823942   18752 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172
	I0310 21:23:47.834557   18752 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172
	I0310 21:24:02.097968   18752 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172: (14.2634299s)
	I0310 21:24:02.098723   18752 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 from cache
	I0310 21:24:02.098723   18752 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700
	I0310 21:24:02.113147   18752 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700
	I0310 21:24:18.608238   18752 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700: (16.4945819s)
	I0310 21:24:18.608521   18752 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 from cache
	I0310 21:24:18.608677   18752 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992
	I0310 21:24:18.619219   18752 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992
	I0310 21:24:33.697645   18752 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992: (15.0785415s)
	I0310 21:24:33.697938   18752 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 from cache
	I0310 21:24:33.698225   18752 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800
	I0310 21:24:33.719923   18752 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800
** /stderr **
start_stop_delete_test.go:157: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p newest-cni-20210310205436-6496 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.20.5-rc.0": exit status 1
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:226: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:227: (dbg) Run:  docker inspect newest-cni-20210310205436-6496
helpers_test.go:231: (dbg) docker inspect newest-cni-20210310205436-6496:
-- stdout --
	[
	    {
	        "Id": "e430f48f8334e2f39cce27878f96427388bf7a8d81e500242401e12e2e0754e7",
	        "Created": "2021-03-10T20:55:00.0211299Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 238058,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-03-10T20:55:04.274266Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a776c544501ab7f8d55c0f9d8df39bc284df5e744ef1ab4fa59bbd753c98d5f6",
	        "ResolvConfPath": "/var/lib/docker/containers/e430f48f8334e2f39cce27878f96427388bf7a8d81e500242401e12e2e0754e7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e430f48f8334e2f39cce27878f96427388bf7a8d81e500242401e12e2e0754e7/hostname",
	        "HostsPath": "/var/lib/docker/containers/e430f48f8334e2f39cce27878f96427388bf7a8d81e500242401e12e2e0754e7/hosts",
	        "LogPath": "/var/lib/docker/containers/e430f48f8334e2f39cce27878f96427388bf7a8d81e500242401e12e2e0754e7/e430f48f8334e2f39cce27878f96427388bf7a8d81e500242401e12e2e0754e7-json.log",
	        "Name": "/newest-cni-20210310205436-6496",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-20210310205436-6496:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d081a34acaceef908092ff53cbd483d0359cbc9c3a8c8eae07001c2f5cd781b2-init/diff:/var/lib/docker/overlay2/e5312d15a8a915e93a04215cc746ab0f3ee777399eec34d39c62e1159ded6475/diff:/var/lib/docker/overlay2/7a0a882052451678339ecb9d7440ec6d5af2189761df37cbde268f01c77df460/diff:/var/lib/docker/overlay2/746bde091f748e661ba6c95c88447941058b3ed556ae8093cbbb4e946b8cc8fc/diff:/var/lib/docker/overlay2/fc816087ea38e2594891ac87d2dada91401ed5005ffde006568049c9be7a9cb3/diff:/var/lib/docker/overlay2/c938fe5d14accfe7fb1cfd038f5f0c36e7c4e36df3f270be6928a759dfc0318c/diff:/var/lib/docker/overlay2/79f1d61f3af3e525b3f9a25c7bdaddec8275b7d0abab23c045e084119ae755dc/diff:/var/lib/docker/overlay2/90cf1530750ebebdcb04a136dc1eff0ae9c5f7d0f826d2942e3bc67cc0d42206/diff:/var/lib/docker/overlay2/c4edb0a62bb52174a4a17530aa58139aaca1b2f67bef36b9ac59af93c93e2c94/diff:/var/lib/docker/overlay2/684c18bb05910b09352e8708238b2fa6512de91e323e59bd78eacb12954baeb1/diff:/var/lib/docker/overlay2/2e3b944d36fea0e106f5f8885b981d040914c224a656b6480e4ccf00c624527c/diff:/var/lib/docker/overlay2/87489103e2d5dffa2fc32cae802418fe57adcbe87d86e2828e51be8620ef72d6/diff:/var/lib/docker/overlay2/1ceb557df677600f55a42e739f85c2aaa6ce2a1311fb93d5cc655d3e5c25ad62/diff:/var/lib/docker/overlay2/a49e38adf04466e822fa1b2fef7a5de41bb43b706f64d6d440f7e9f95f86634d/diff:/var/lib/docker/overlay2/9dc51530b3628075c33d53a96d8db831d206f65190ca62a4730d525c9d2433bc/diff:/var/lib/docker/overlay2/53f43d443cc953bcbb3b9fe305be45d40de2233dd6a1e94a6e0120c143df01b9/diff:/var/lib/docker/overlay2/5449e801ebacbe613538db4de658f2419f742a0327f7c0f5079ee525f64c2f45/diff:/var/lib/docker/overlay2/ea06a62429770eade2e9df8b04495201586fe6a867b2d3f827db06031f237171/diff:/var/lib/docker/overlay2/d6c5e6e0bb04112b08a45e5a42f364eaa6a1d4d0384b8833c6f268a32b51f9fd/diff:/var/lib/docker/overlay2/6ccb699e5aa7954f17536e35273361a64a06f22e3951b5adad1acd3645302d27/diff:/var/lib/docker/overlay2/004fc7885cd3ddf4f532a6047a393b87947996f738f36ee3e048130b82676264/diff:/var/lib/docker/overlay2/28ff0102771d8fb3c2957c482cc5dde1a6d041144f1be78d385fd4a305b89048/diff:/var/lib/docker/overlay2/7cc8b00a8717390d3cf217bc23ebf0a54386c4dda751f8efe67a113c5f724000/diff:/var/lib/docker/overlay2/4f20f5888dc26fc4b177fad4ffbb2c9e5094bbe36dffc15530b13ee98b5ee0de/diff:/var/lib/docker/overlay2/5070ba0e8a3df0c0cb4c34bb528c4e1aa4cf13d689f2bbbb0a4033d021c3be55/diff:/var/lib/docker/overlay2/85fb29f9d3603bf7f2eb144a096a9fa432e57ecf575b0c26cc8e06a79e791202/diff:/var/lib/docker/overlay2/27786674974d44dc21a18719c8c334da44a78f825923f5e86233f5acb0fb9a6b/diff:/var/lib/docker/overlay2/6161fd89f27732c46f547c0e38ff9f16ab0dded81cef66938df3935f21afad40/diff:/var/lib/docker/overlay2/222f85b8439a41d62b9939a4060303543930a89e81bc2fbb3f5211b3bcc73501/diff:/var/lib/docker/overlay2/0ab769dc143c949b9367c48721351c8571d87199847b23acedce0134a8cf4d0d/diff:/var/lib/docker/overlay2/a3d1b5f3c1ccf55415e08e306d69739f2d6a4346a4b20ff9273ac1417c146a72/diff:/var/lib/docker/overlay2/a6111408a4deb25c6259bdabf49b0b77a4ae38a1c9c76c9dd238ab00e4377edd/diff:/var/lib/docker/overlay2/beb22abaee6a1cc2ce6935aa31a8ea6aa14ba428d35b5c2423d1d9e4b5b1e88e/diff:/var/lib/docker/overlay2/0c228b1d0cd4cc1d3df10a7397af2a91846bfbad745d623822dc200ff7ddd4f7/diff:/var/lib/docker/overlay2/3c7f265116e8b14ac4c171583da8e68ad42c63e9faa735c10d5b01c2889c03d8/diff:/var/lib/docker/overlay2/85c9b77072d5575bcf4bc334d0e85cccec247e2ff21bc8a181ee7c1411481a92/diff:/var/lib/docker/overlay2/1b4797f63f1bf0acad8cd18715c737239cbce844810baf1378b65224c01957ff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d081a34acaceef908092ff53cbd483d0359cbc9c3a8c8eae07001c2f5cd781b2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d081a34acaceef908092ff53cbd483d0359cbc9c3a8c8eae07001c2f5cd781b2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d081a34acaceef908092ff53cbd483d0359cbc9c3a8c8eae07001c2f5cd781b2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20210310205436-6496",
	                "Source": "/var/lib/docker/volumes/newest-cni-20210310205436-6496/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20210310205436-6496",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20210310205436-6496",
	                "name.minikube.sigs.k8s.io": "newest-cni-20210310205436-6496",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2c089c3a3d75f725280b7fae7215a963bd51456490b30b192b64a494643c40c1",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55161"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55160"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55157"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55159"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55158"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/2c089c3a3d75",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "14ca06490fb6f7ac3bdafdaa45a51a559dcf4bc3840e9b498dfc2d435b1a5835",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.4",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:04",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "08a9fabe5ecafdd8ee96dd0a14d1906037e82623ac7365c62552213da5f59cf7",
	                    "EndpointID": "14ca06490fb6f7ac3bdafdaa45a51a559dcf4bc3840e9b498dfc2d435b1a5835",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.4",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:04",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20210310205436-6496 -n newest-cni-20210310205436-6496
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
helpers_test.go:235: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20210310205436-6496 -n newest-cni-20210310205436-6496: (19.3511613s)
helpers_test.go:240: <<< TestStartStop/group/newest-cni/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:241: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: minikube logs <======
helpers_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-20210310205436-6496 logs -n 25
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
helpers_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-20210310205436-6496 logs -n 25: (2m41.3098062s)
helpers_test.go:248: TestStartStop/group/newest-cni/serial/FirstStart logs: 
-- stdout --
	* ==> Docker <==
	* -- Logs begin at Wed 2021-03-10 20:55:14 UTC, end at Wed 2021-03-10 21:25:37 UTC. --
	* Mar 10 20:57:55 newest-cni-20210310205436-6496 dockerd[750]: time="2021-03-10T20:57:55.036006000Z" level=info msg="Loading containers: done."
	* Mar 10 20:57:55 newest-cni-20210310205436-6496 dockerd[750]: time="2021-03-10T20:57:55.271618400Z" level=info msg="Docker daemon" commit=46229ca graphdriver(s)=overlay2 version=20.10.3
	* Mar 10 20:57:55 newest-cni-20210310205436-6496 dockerd[750]: time="2021-03-10T20:57:55.271756200Z" level=info msg="Daemon has completed initialization"
	* Mar 10 20:57:55 newest-cni-20210310205436-6496 systemd[1]: Started Docker Application Container Engine.
	* Mar 10 20:57:55 newest-cni-20210310205436-6496 dockerd[750]: time="2021-03-10T20:57:55.911083300Z" level=info msg="API listen on [::]:2376"
	* Mar 10 20:57:56 newest-cni-20210310205436-6496 dockerd[750]: time="2021-03-10T20:57:56.001642900Z" level=info msg="API listen on /var/run/docker.sock"
	* Mar 10 21:00:52 newest-cni-20210310205436-6496 dockerd[750]: time="2021-03-10T21:00:52.805369200Z" level=info msg="ignoring event" container=3b96ace15e2c60f108475d86e8ffcea1eb5760bfb93ac18fb603f0526d6a4207 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:01:05 newest-cni-20210310205436-6496 dockerd[750]: time="2021-03-10T21:01:05.488808800Z" level=info msg="ignoring event" container=a1901a7de095837430b9d888f27f9cd5c520deda6f7c0909ae218e60397842b5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:03:20 newest-cni-20210310205436-6496 dockerd[750]: time="2021-03-10T21:03:20.881677800Z" level=info msg="ignoring event" container=5f3b63e36a30640535d24ac4501f54232454c3d7c28a6884606eb0173fb456bb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:04:44 newest-cni-20210310205436-6496 dockerd[750]: time="2021-03-10T21:04:44.882340600Z" level=info msg="ignoring event" container=ebec12ad3a55b57b586b25a8de7b1637101750483d57e637b2a881a4c97d9115 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:04:54 newest-cni-20210310205436-6496 dockerd[750]: time="2021-03-10T21:04:54.836644400Z" level=info msg="ignoring event" container=0982946e689230debefe9a6257d1586afd5cdba91a8180011acc40121290e74f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:05:07 newest-cni-20210310205436-6496 dockerd[750]: time="2021-03-10T21:05:07.029735000Z" level=info msg="ignoring event" container=cea65ec27d3cbe01f1a56241ff4ed8dc07a6cee80e9d6e259d3423bf33d103bf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:05:19 newest-cni-20210310205436-6496 dockerd[750]: time="2021-03-10T21:05:19.216497900Z" level=info msg="ignoring event" container=45cb081960421b293e755b961e999219470826e722b2c1cb886814228b487d47 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:05:28 newest-cni-20210310205436-6496 dockerd[750]: time="2021-03-10T21:05:28.659474300Z" level=info msg="ignoring event" container=224ee69ca5eb0ef72fee94d9a85d9ae2d5cf36ab1955154203f95a98d092fd80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:05:36 newest-cni-20210310205436-6496 dockerd[750]: time="2021-03-10T21:05:36.522358900Z" level=info msg="ignoring event" container=1966e07945ad50525dc23b95e4147161dbd9add6b397fc6c71de6b51a19d051d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:05:42 newest-cni-20210310205436-6496 dockerd[750]: time="2021-03-10T21:05:42.110116100Z" level=info msg="ignoring event" container=04d2f5dfb0dda8076212035dca5977937199280ae01d2e7b6cf0513ab32f4655 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:09:53 newest-cni-20210310205436-6496 dockerd[750]: time="2021-03-10T21:09:53.186124300Z" level=info msg="ignoring event" container=c35b00ce50f5b61c4244f47fb90f203de13f86d458d8982a596081b77afba854 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:17:55 newest-cni-20210310205436-6496 dockerd[750]: time="2021-03-10T21:17:55.459250800Z" level=info msg="ignoring event" container=0a2545834c57a9de57ee07634294cf5b3e2e4a59322ecc4297d6a2a663137494 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:21:04 newest-cni-20210310205436-6496 dockerd[750]: time="2021-03-10T21:21:04.241027800Z" level=info msg="ignoring event" container=a1e87d12944f6790f0aedcce43b060aeb942360e00a104a41ea7b0619aa63464 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:21:21 newest-cni-20210310205436-6496 dockerd[750]: time="2021-03-10T21:21:21.545487800Z" level=info msg="ignoring event" container=c82bc6e984bc86225842b39262095be456b011c3d255efa1414a9b5601d427b4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:21:22 newest-cni-20210310205436-6496 dockerd[750]: time="2021-03-10T21:21:22.302963100Z" level=info msg="Container c82bc6e984bc86225842b39262095be456b011c3d255efa1414a9b5601d427b4 failed to exit within 10 seconds of signal 15 - using the force"
	* Mar 10 21:21:29 newest-cni-20210310205436-6496 dockerd[750]: time="2021-03-10T21:21:29.204755600Z" level=warning msg="Container c82bc6e984bc86225842b39262095be456b011c3d255efa1414a9b5601d427b4 is not running"
	* Mar 10 21:23:09 newest-cni-20210310205436-6496 dockerd[750]: time="2021-03-10T21:23:09.025128900Z" level=info msg="ignoring event" container=ea08ce5b55479e52a4a3fca9b7e1d692103b141691c105891c2d40500b7417f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:23:43 newest-cni-20210310205436-6496 dockerd[750]: time="2021-03-10T21:23:43.192513400Z" level=info msg="ignoring event" container=4699b3c19ed8a3e028f67bbbcdf76ff565e076719fcdcfa9e4ff15af35259479 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:25:20 newest-cni-20210310205436-6496 dockerd[750]: time="2021-03-10T21:25:20.015042100Z" level=info msg="ignoring event" container=ae1304e0a077bdc05c70c7d3f6e683346eb1e4c4087308f8a4a7f6e753d104fe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	* 9f1d686b291b8       85069258b98ac       About a minute ago   Running             storage-provisioner       2                   abefbc3b04058
	* ea08ce5b55479       85069258b98ac       4 minutes ago        Exited              storage-provisioner       1                   abefbc3b04058
	* 679d93c56f805       455e87ddc0114       10 minutes ago       Running             kube-proxy                0                   96d79b8af9247
	* be1eb624d1fce       a95b4e4b41d89       15 minutes ago       Running             kube-controller-manager   1                   a1a53b50e13e4
	* 6c1339c9f05a4       4968524da7559       16 minutes ago       Running             kube-scheduler            0                   dd243c1f9e16c
	* c35b00ce50f5b       a95b4e4b41d89       17 minutes ago       Exited              kube-controller-manager   0                   a1a53b50e13e4
	* 25752d3a91a3d       17a1e6e90a9b4       17 minutes ago       Running             kube-apiserver            0                   3dc27735893bc
	* 9c9bce165565e       0369cf4303ffd       17 minutes ago       Running             etcd                      0                   2b5ab5deb9192
	* 
	* ==> describe nodes <==
	* Name:               newest-cni-20210310205436-6496
	* Roles:              control-plane,master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=newest-cni-20210310205436-6496
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=4d52d607da107e3e4541e40b81376520ee87d4c2
	*                     minikube.k8s.io/name=newest-cni-20210310205436-6496
	*                     minikube.k8s.io/updated_at=2021_03_10T21_10_41_0700
	*                     minikube.k8s.io/version=v1.18.1
	*                     node-role.kubernetes.io/control-plane=
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Wed, 10 Mar 2021 21:09:13 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  newest-cni-20210310205436-6496
	*   AcquireTime:     <unset>
	*   RenewTime:       Wed, 10 Mar 2021 21:26:00 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Wed, 10 Mar 2021 21:21:59 +0000   Wed, 10 Mar 2021 21:09:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Wed, 10 Mar 2021 21:21:59 +0000   Wed, 10 Mar 2021 21:09:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Wed, 10 Mar 2021 21:21:59 +0000   Wed, 10 Mar 2021 21:09:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Wed, 10 Mar 2021 21:21:59 +0000   Wed, 10 Mar 2021 21:14:06 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  172.17.0.4
	*   Hostname:    newest-cni-20210310205436-6496
	* Capacity:
	*   cpu:                4
	*   ephemeral-storage:  65792556Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             20481980Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                4
	*   ephemeral-storage:  65792556Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             20481980Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 84fb46bd39d2483a97ab4430ee4a5e3a
	*   System UUID:                717e1315-7c67-4da6-8203-b8cc8005bf39
	*   Boot ID:                    1e43cb90-c73a-415b-9855-33dabbdc5a83
	*   Kernel Version:             4.19.121-linuxkit
	*   OS Image:                   Ubuntu 20.04.1 LTS
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://20.10.3
	*   Kubelet Version:            v1.20.5-rc.0
	*   Kube-Proxy Version:         v1.20.5-rc.0
	* PodCIDR:                      192.168.0.0/24
	* PodCIDRs:                     192.168.0.0/24
	* Non-terminated Pods:          (7 in total)
	*   Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	*   kube-system                 coredns-74ff55c5b-vhpfw                                   100m (2%)     0 (0%)      70Mi (0%)        170Mi (0%)     14m
	*   kube-system                 etcd-newest-cni-20210310205436-6496                       100m (2%)     0 (0%)      100Mi (0%)       0 (0%)         16m
	*   kube-system                 kube-apiserver-newest-cni-20210310205436-6496             250m (6%)     0 (0%)      0 (0%)           0 (0%)         16m
	*   kube-system                 kube-controller-manager-newest-cni-20210310205436-6496    200m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	*   kube-system                 kube-proxy-lkn7l                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	*   kube-system                 kube-scheduler-newest-cni-20210310205436-6496             100m (2%)     0 (0%)      0 (0%)           0 (0%)         15m
	*   kube-system                 storage-provisioner                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m7s
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests    Limits
	*   --------           --------    ------
	*   cpu                750m (18%)  0 (0%)
	*   memory             170Mi (0%)  170Mi (0%)
	*   ephemeral-storage  100Mi (0%)  0 (0%)
	*   hugepages-1Gi      0 (0%)      0 (0%)
	*   hugepages-2Mi      0 (0%)      0 (0%)
	* Events:
	*   Type    Reason                   Age    From        Message
	*   ----    ------                   ----   ----        -------
	*   Normal  Starting                 14m    kubelet     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  14m    kubelet     Node newest-cni-20210310205436-6496 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    14m    kubelet     Node newest-cni-20210310205436-6496 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     14m    kubelet     Node newest-cni-20210310205436-6496 status is now: NodeHasSufficientPID
	*   Normal  NodeNotReady             14m    kubelet     Node newest-cni-20210310205436-6496 status is now: NodeNotReady
	*   Normal  NodeAllocatableEnforced  12m    kubelet     Updated Node Allocatable limit across pods
	*   Normal  NodeReady                12m    kubelet     Node newest-cni-20210310205436-6496 status is now: NodeReady
	*   Normal  Starting                 9m26s  kube-proxy  Starting kube-proxy.
	* 
	* ==> dmesg <==
	* [  +0.000006]  __hrtimer_run_queues+0x117/0x1c4
	* [  +0.000004]  ? ktime_get_update_offsets_now+0x36/0x95
	* [  +0.000002]  hrtimer_interrupt+0x92/0x165
	* [  +0.000004]  hv_stimer0_isr+0x20/0x2d
	* [  +0.000008]  hv_stimer0_vector_handler+0x3b/0x57
	* [  +0.000010]  hv_stimer0_callback_vector+0xf/0x20
	* [  +0.000001]  </IRQ>
	* [  +0.000002] RIP: 0010:native_safe_halt+0x7/0x8
	* [  +0.000002] Code: 60 02 df f0 83 44 24 fc 00 48 8b 00 a8 08 74 0b 65 81 25 dd ce 6f 71 ff ff ff 7f c3 e8 ce e6 72 ff f4 c3 e8 c7 e6 72 ff fb f4 <c3> 0f 1f 44 00 00 53 e8 69 0e 82 ff 65 8b 35 83 64 6f 71 31 ff e8
	* [  +0.000001] RSP: 0018:ffffffff8f203eb0 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff12
	* [  +0.000002] RAX: ffffffff8e918b30 RBX: 0000000000000000 RCX: ffffffff8f253150
	* [  +0.000001] RDX: 000000000012167e RSI: 0000000000000000 RDI: 0000000000000001
	* [  +0.000001] RBP: 0000000000000000 R08: 00000066a1710248 R09: 0000006be2541d3e
	* [  +0.000001] R10: ffff9130ad802288 R11: 0000000000000000 R12: 0000000000000000
	* [  +0.000001] R13: ffffffff8f215780 R14: 00000000f6d76244 R15: 0000000000000000
	* [  +0.000002]  ? __sched_text_end+0x1/0x1
	* [  +0.000011]  default_idle+0x1b/0x2c
	* [  +0.000001]  do_idle+0xe5/0x216
	* [  +0.000003]  cpu_startup_entry+0x6f/0x71
	* [  +0.000003]  start_kernel+0x4f6/0x514
	* [  +0.000006]  secondary_startup_64+0xa4/0xb0
	* [  +0.000006] ---[ end trace 8aa9ce4b885e8e86 ]---
	* [ +25.977799] hrtimer: interrupt took 3356400 ns
	* [Mar10 19:08] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	* [Mar10 19:49] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	* 
	* ==> etcd [9c9bce165565] <==
	* 2021-03-10 21:23:44.895525 W | etcdserver: read-only range request "key:\"/registry/apiregistration.k8s.io/apiservices/\" range_end:\"/registry/apiregistration.k8s.io/apiservices0\" count_only:true " with result "range_response_count:0 size:7" took too long (257.4813ms) to execute
	* 2021-03-10 21:23:47.588174 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:23:57.227402 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:23:57.817695 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/coredns-74ff55c5b-vhpfw.166b17e19e5e5f88\" " with result "range_response_count:1 size:794" took too long (197.6862ms) to execute
	* 2021-03-10 21:24:06.629474 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:24:16.709985 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:24:27.583820 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:24:36.640249 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:24:48.487544 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:24:57.690072 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:24:58.303111 W | etcdserver: read-only range request "key:\"/registry/roles/\" range_end:\"/registry/roles0\" count_only:true " with result "range_response_count:0 size:7" took too long (397.3128ms) to execute
	* 2021-03-10 21:24:58.312719 W | etcdserver: read-only range request "key:\"/registry/csidrivers/\" range_end:\"/registry/csidrivers0\" count_only:true " with result "range_response_count:0 size:5" took too long (128.9229ms) to execute
	* 2021-03-10 21:25:06.642037 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:25:11.703218 W | etcdserver: read-only range request "key:\"/registry/masterleases/172.17.0.4\" " with result "range_response_count:1 size:129" took too long (266.021ms) to execute
	* 2021-03-10 21:25:17.542836 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:25:26.346622 W | etcdserver: read-only range request "key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" count_only:true " with result "range_response_count:0 size:5" took too long (106.7556ms) to execute
	* 2021-03-10 21:25:27.205049 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:25:37.379226 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:25:46.906693 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:26:01.623866 W | etcdserver: request "header:<ID:912955419576509837 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:0cab781df7a0258c>" with result "size:40" took too long (133.6854ms) to execute
	* 2021-03-10 21:26:01.649720 W | etcdserver: read-only range request "key:\"/registry/controllers/\" range_end:\"/registry/controllers0\" count_only:true " with result "range_response_count:0 size:5" took too long (157.1883ms) to execute
	* 2021-03-10 21:26:02.791400 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:26:06.606990 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:26:11.928947 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:418" took too long (318.1221ms) to execute
	* 2021-03-10 21:26:17.470088 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 
	* ==> kernel <==
	*  21:26:23 up  2:26,  0 users,  load average: 144.22, 134.58, 138.73
	* Linux newest-cni-20210310205436-6496 4.19.121-linuxkit #1 SMP Tue Dec 1 17:50:32 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [25752d3a91a3] <==
	* I0310 21:25:31.529476       1 client.go:360] parsed scheme: "passthrough"
	* I0310 21:25:31.529624       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 21:25:31.529645       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* I0310 21:26:00.838673       1 trace.go:205] Trace[119873502]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/newest-cni-20210310205436-6496,user-agent:kubelet/v1.20.5 (linux/amd64) kubernetes/9fdbacd,client:172.17.0.4 (10-Mar-2021 21:26:00.271) (total time: 564ms):
	* Trace[119873502]: ---"About to convert to expected version" 439ms (21:26:00.711)
	* Trace[119873502]: ---"Object stored in database" 109ms (21:26:00.821)
	* Trace[119873502]: [564.1272ms] [564.1272ms] END
	* I0310 21:26:09.108303       1 client.go:360] parsed scheme: "passthrough"
	* I0310 21:26:09.108444       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 21:26:09.108499       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* I0310 21:26:21.108627       1 trace.go:205] Trace[1343797535]: "GuaranteedUpdate etcd3" type:*core.Endpoints (10-Mar-2021 21:26:20.293) (total time: 815ms):
	* Trace[1343797535]: ---"Transaction committed" 809ms (21:26:00.108)
	* Trace[1343797535]: [815.3331ms] [815.3331ms] END
	* I0310 21:26:21.109185       1 trace.go:205] Trace[325325840]: "Update" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,client:172.17.0.4 (10-Mar-2021 21:26:20.245) (total time: 864ms):
	* Trace[325325840]: ---"Object stored in database" 816ms (21:26:00.108)
	* Trace[325325840]: [864.053ms] [864.053ms] END
	* I0310 21:26:25.882161       1 trace.go:205] Trace[109830964]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (10-Mar-2021 21:26:25.344) (total time: 537ms):
	* Trace[109830964]: ---"initial value restored" 209ms (21:26:00.553)
	* Trace[109830964]: ---"Transaction committed" 295ms (21:26:00.882)
	* Trace[109830964]: [537.7197ms] [537.7197ms] END
	* I0310 21:26:32.998276       1 trace.go:205] Trace[142901603]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (10-Mar-2021 21:26:31.644) (total time: 1353ms):
	* Trace[142901603]: ---"initial value restored" 410ms (21:26:00.055)
	* Trace[142901603]: ---"Transaction prepared" 726ms (21:26:00.781)
	* Trace[142901603]: ---"Transaction committed" 216ms (21:26:00.997)
	* Trace[142901603]: [1.3532349s] [1.3532349s] END
	* 
	* ==> kube-controller-manager [be1eb624d1fc] <==
	* I0310 21:11:54.638877       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	* I0310 21:11:54.899058       1 shared_informer.go:247] Caches are synced for attach detach 
	* I0310 21:11:54.919167       1 shared_informer.go:247] Caches are synced for crt configmap 
	* I0310 21:11:54.942162       1 shared_informer.go:247] Caches are synced for disruption 
	* I0310 21:11:54.942195       1 disruption.go:339] Sending events to api server.
	* I0310 21:11:54.942292       1 shared_informer.go:247] Caches are synced for deployment 
	* I0310 21:11:55.046656       1 event.go:291] "Event occurred" object="newest-cni-20210310205436-6496" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-20210310205436-6496 event: Registered Node newest-cni-20210310205436-6496 in Controller"
	* I0310 21:11:55.097244       1 shared_informer.go:247] Caches are synced for TTL 
	* I0310 21:11:55.164589       1 shared_informer.go:247] Caches are synced for resource quota 
	* I0310 21:11:55.170975       1 shared_informer.go:247] Caches are synced for resource quota 
	* I0310 21:11:57.780098       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	* I0310 21:11:59.547568       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	* I0310 21:11:59.656279       1 range_allocator.go:373] Set node newest-cni-20210310205436-6496 PodCIDR to [192.168.0.0/24]
	* I0310 21:12:00.697588       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I0310 21:12:00.793971       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I0310 21:12:00.794014       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* I0310 21:12:02.886149       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	* E0310 21:12:06.350258       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	* I0310 21:12:07.084068       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-vhpfw"
	* I0310 21:12:07.085833       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-lkn7l"
	* I0310 21:12:10.435774       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-tk6t8"
	* I0310 21:12:19.193087       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	* I0310 21:12:20.087151       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-tk6t8"
	* I0310 21:14:25.235459       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	* http2: server: error reading preface from client 127.0.0.1:43186: read tcp 127.0.0.1:10257->127.0.0.1:43186: read: connection reset by peer
	* 
	* ==> kube-controller-manager [c35b00ce50f5] <==
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
	* created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:171 +0x28b
	* 
	* goroutine 112 [select]:
	* k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc0003cc260, 0xc0003587d0, 0xc0006a02a0, 0x0, 0x0)
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:539 +0x11d
	* k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollUntil(0xdf8475800, 0xc0003587d0, 0xc0001120c0, 0x0, 0x0)
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:492 +0xc5
	* k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xdf8475800, 0xc0003587d0, 0xc0001120c0, 0x0, 0x47b2620)
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:511 +0xb3
	* created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:174 +0x2f9
	* 
	* goroutine 145 [select]:
	* k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1(0xc0001120c0, 0xc000358850, 0x4e640e0, 0xc00069e180)
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:279 +0xbd
	* created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.contextForChannel
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:278 +0x8c
	* 
	* goroutine 146 [select]:
	* k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poller.func1.1(0xc0006a0360, 0xdf8475800, 0x0, 0xc0006a0300)
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:588 +0x17b
	* created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poller.func1
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:571 +0x8c
	* 
	* ==> kube-proxy [679d93c56f80] <==
	* I0310 21:16:39.965487       1 node.go:172] Successfully retrieved node IP: 172.17.0.4
	* I0310 21:16:39.991000       1 server_others.go:142] kube-proxy node IP is an IPv4 address (172.17.0.4), assume IPv4 operation
	* W0310 21:16:42.445856       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	* I0310 21:16:42.446149       1 server_others.go:185] Using iptables Proxier.
	* I0310 21:16:42.483731       1 server.go:650] Version: v1.20.5-rc.0
	* I0310 21:16:42.485317       1 conntrack.go:52] Setting nf_conntrack_max to 131072
	* I0310 21:16:42.493765       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I0310 21:16:42.493840       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I0310 21:16:42.549948       1 config.go:315] Starting service config controller
	* I0310 21:16:42.550007       1 shared_informer.go:240] Waiting for caches to sync for service config
	* I0310 21:16:42.550087       1 config.go:224] Starting endpoint slice config controller
	* I0310 21:16:42.550100       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	* I0310 21:16:43.939625       1 shared_informer.go:247] Caches are synced for service config 
	* I0310 21:16:44.350283       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	* I0310 21:16:58.793005       1 trace.go:205] Trace[851071683]: "iptables restore" (10-Mar-2021 21:16:55.761) (total time: 3031ms):
	* Trace[851071683]: [3.031754s] [3.031754s] END
	* I0310 21:20:53.785573       1 trace.go:205] Trace[508372399]: "iptables Monitor CANARY check" (10-Mar-2021 21:20:48.238) (total time: 5425ms):
	* Trace[508372399]: [5.4252769s] [5.4252769s] END
	* I0310 21:21:20.286110       1 trace.go:205] Trace[1759794379]: "iptables Monitor CANARY check" (10-Mar-2021 21:21:18.235) (total time: 2050ms):
	* Trace[1759794379]: [2.0506458s] [2.0506458s] END
	* I0310 21:26:23.457325       1 trace.go:205] Trace[1118793638]: "iptables Monitor CANARY check" (10-Mar-2021 21:26:18.352) (total time: 4971ms):
	* Trace[1118793638]: [4.9711749s] [4.9711749s] END
	* 
	* ==> kube-scheduler [6c1339c9f05a] <==
	* I0310 21:10:25.967940       1 serving.go:331] Generated self-signed cert in-memory
	* I0310 21:10:37.673570       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	* I0310 21:10:37.673794       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	* I0310 21:10:37.673829       1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
	* I0310 21:10:37.673867       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	* I0310 21:10:37.691784       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I0310 21:10:37.691808       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I0310 21:10:37.691840       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	* I0310 21:10:37.691847       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	* I0310 21:10:37.979798       1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController 
	* I0310 21:10:37.980023       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* I0310 21:10:38.094218       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
	* http2: server: error reading preface from client 127.0.0.1:34704: read tcp 127.0.0.1:10259->127.0.0.1:34704: read: connection reset by peer
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-03-10 20:55:14 UTC, end at Wed 2021-03-10 21:27:28 UTC. --
	* Mar 10 21:23:51 newest-cni-20210310205436-6496 kubelet[6163]: W0310 21:23:51.843479    6163 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-74ff55c5b-vhpfw_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "4699b3c19ed8a3e028f67bbbcdf76ff565e076719fcdcfa9e4ff15af35259479"
	* Mar 10 21:23:54 newest-cni-20210310205436-6496 kubelet[6163]: W0310 21:23:54.382817    6163 pod_container_deletor.go:79] Container "4699b3c19ed8a3e028f67bbbcdf76ff565e076719fcdcfa9e4ff15af35259479" not found in pod's containers
	* Mar 10 21:23:57 newest-cni-20210310205436-6496 kubelet[6163]: W0310 21:23:57.944394    6163 cni.go:333] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "4699b3c19ed8a3e028f67bbbcdf76ff565e076719fcdcfa9e4ff15af35259479"
	* Mar 10 21:24:05 newest-cni-20210310205436-6496 kubelet[6163]: E0310 21:24:05.216604    6163 cadvisor_stats_provider.go:401] Partial failure issuing cadvisor.ContainerInfoV2: partial failures: ["/kubepods/besteffort/podc15c256a-ab16-472a-b7ad-74a292ee95da/9f1d686b291b851e79d11c76ddae50ec9eccb0dfc362c67f6dc73b6c30e8b30e": RecentStats: unable to find data in memory cache]
	* Mar 10 21:24:31 newest-cni-20210310205436-6496 kubelet[6163]: E0310 21:24:31.613985    6163 fsHandler.go:114] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/docker/overlay2/16da216f767ef0c1cfb07db0cbed9c9e0585655b59518ecf8f380b17d1ff88ed/diff" to get inode usage: stat /var/lib/docker/overlay2/16da216f767ef0c1cfb07db0cbed9c9e0585655b59518ecf8f380b17d1ff88ed/diff: no such file or directory, extraDiskErr: could not stat "/var/lib/docker/containers/4699b3c19ed8a3e028f67bbbcdf76ff565e076719fcdcfa9e4ff15af35259479" to get inode usage: stat /var/lib/docker/containers/4699b3c19ed8a3e028f67bbbcdf76ff565e076719fcdcfa9e4ff15af35259479: no such file or directory
	* Mar 10 21:24:50 newest-cni-20210310205436-6496 kubelet[6163]: E0310 21:24:50.093970    6163 kuberuntime_manager.go:965] PodSandboxStatus of sandbox "4699b3c19ed8a3e028f67bbbcdf76ff565e076719fcdcfa9e4ff15af35259479" for pod "coredns-74ff55c5b-vhpfw_kube-system(3c7d875a-5a68-4ecb-99ef-aae1e85751fb)" error: rpc error: code = Unknown desc = Error: No such container: 4699b3c19ed8a3e028f67bbbcdf76ff565e076719fcdcfa9e4ff15af35259479
	* Mar 10 21:25:00 newest-cni-20210310205436-6496 kubelet[6163]: E0310 21:25:00.832846    6163 cni.go:366] Error adding kube-system_coredns-74ff55c5b-vhpfw/ae1304e0a077bdc05c70c7d3f6e683346eb1e4c4087308f8a4a7f6e753d104fe to network bridge/crio: failed to set bridge addr: could not add IP address to "cni0": permission denied
	* Mar 10 21:25:14 newest-cni-20210310205436-6496 kubelet[6163]: E0310 21:25:14.803307    6163 cni.go:387] Error deleting kube-system_coredns-74ff55c5b-vhpfw/ae1304e0a077bdc05c70c7d3f6e683346eb1e4c4087308f8a4a7f6e753d104fe from network bridge/crio: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.5 -j CNI-0608d4555585de4ed8ba4ed1 -m comment --comment name: "crio" id: "ae1304e0a077bdc05c70c7d3f6e683346eb1e4c4087308f8a4a7f6e753d104fe" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-0608d4555585de4ed8ba4ed1':No such file or directory
	* Mar 10 21:25:14 newest-cni-20210310205436-6496 kubelet[6163]: Try `iptables -h' or 'iptables --help' for more information.
	* Mar 10 21:25:21 newest-cni-20210310205436-6496 kubelet[6163]: E0310 21:25:21.575776    6163 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = [failed to set up sandbox container "ae1304e0a077bdc05c70c7d3f6e683346eb1e4c4087308f8a4a7f6e753d104fe" network for pod "coredns-74ff55c5b-vhpfw": networkPlugin cni failed to set up pod "coredns-74ff55c5b-vhpfw_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "ae1304e0a077bdc05c70c7d3f6e683346eb1e4c4087308f8a4a7f6e753d104fe" network for pod "coredns-74ff55c5b-vhpfw": networkPlugin cni failed to teardown pod "coredns-74ff55c5b-vhpfw_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.5 -j CNI-0608d4555585de4ed8ba4ed1 -m comment --comment name: "crio" id: "ae1304e0a077bdc05c70c7d3f6e683346eb1e4c4087308f8a4a7f6e753d104fe" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-0608d4555585de4ed8ba4ed1':No such file or directory
	* Mar 10 21:25:21 newest-cni-20210310205436-6496 kubelet[6163]: Try `iptables -h' or 'iptables --help' for more information.
	* Mar 10 21:25:21 newest-cni-20210310205436-6496 kubelet[6163]: ]
	* Mar 10 21:25:21 newest-cni-20210310205436-6496 kubelet[6163]: E0310 21:25:21.577653    6163 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "coredns-74ff55c5b-vhpfw_kube-system(3c7d875a-5a68-4ecb-99ef-aae1e85751fb)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "ae1304e0a077bdc05c70c7d3f6e683346eb1e4c4087308f8a4a7f6e753d104fe" network for pod "coredns-74ff55c5b-vhpfw": networkPlugin cni failed to set up pod "coredns-74ff55c5b-vhpfw_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "ae1304e0a077bdc05c70c7d3f6e683346eb1e4c4087308f8a4a7f6e753d104fe" network for pod "coredns-74ff55c5b-vhpfw": networkPlugin cni failed to teardown pod "coredns-74ff55c5b-vhpfw_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.5 -j CNI-0608d4555585de4ed8ba4ed1 -m comment --comment name: "crio" id: "ae1304e0a077bdc05c70c7d3f6e683346eb1e4c4087308f8a4a7f6e753d104fe" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-0608d4555585de4ed8ba4ed1':No such file or directory
	* Mar 10 21:25:21 newest-cni-20210310205436-6496 kubelet[6163]: Try `iptables -h' or 'iptables --help' for more information.
	* Mar 10 21:25:21 newest-cni-20210310205436-6496 kubelet[6163]: ]
	* Mar 10 21:25:21 newest-cni-20210310205436-6496 kubelet[6163]: E0310 21:25:21.590989    6163 kuberuntime_manager.go:755] createPodSandbox for pod "coredns-74ff55c5b-vhpfw_kube-system(3c7d875a-5a68-4ecb-99ef-aae1e85751fb)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "ae1304e0a077bdc05c70c7d3f6e683346eb1e4c4087308f8a4a7f6e753d104fe" network for pod "coredns-74ff55c5b-vhpfw": networkPlugin cni failed to set up pod "coredns-74ff55c5b-vhpfw_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "ae1304e0a077bdc05c70c7d3f6e683346eb1e4c4087308f8a4a7f6e753d104fe" network for pod "coredns-74ff55c5b-vhpfw": networkPlugin cni failed to teardown pod "coredns-74ff55c5b-vhpfw_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.5 -j CNI-0608d4555585de4ed8ba4ed1 -m comment --comment name: "crio" id: "ae1304e0a077bdc05c70c7d3f6e683346eb1e4c4087308f8a4a7f6e753d104fe" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-0608d4555585de4ed8ba4ed1':No such file or directory
	* Mar 10 21:25:21 newest-cni-20210310205436-6496 kubelet[6163]: Try `iptables -h' or 'iptables --help' for more information.
	* Mar 10 21:25:21 newest-cni-20210310205436-6496 kubelet[6163]: ]
	* Mar 10 21:25:21 newest-cni-20210310205436-6496 kubelet[6163]: E0310 21:25:21.591685    6163 pod_workers.go:191] Error syncing pod 3c7d875a-5a68-4ecb-99ef-aae1e85751fb ("coredns-74ff55c5b-vhpfw_kube-system(3c7d875a-5a68-4ecb-99ef-aae1e85751fb)"), skipping: failed to "CreatePodSandbox" for "coredns-74ff55c5b-vhpfw_kube-system(3c7d875a-5a68-4ecb-99ef-aae1e85751fb)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-74ff55c5b-vhpfw_kube-system(3c7d875a-5a68-4ecb-99ef-aae1e85751fb)\" failed: rpc error: code = Unknown desc = [failed to set up sandbox container \"ae1304e0a077bdc05c70c7d3f6e683346eb1e4c4087308f8a4a7f6e753d104fe\" network for pod \"coredns-74ff55c5b-vhpfw\": networkPlugin cni failed to set up pod \"coredns-74ff55c5b-vhpfw_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"ae1304e0a077bdc05c70c7d3f6e683346eb1e4c4087308f8a4a7f6e753d104fe\" network for pod \"coredns-74ff55c5b-vhpfw\": networkPlugin cni failed to teardown pod \"coredns-74ff55c5b-vhpfw_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.5 -j CNI-0608d4555585de4ed8ba4ed1 -m comment --comment name: \"crio\" id: \"ae1304e0a077bdc05c70c7d3f6e683346eb1e4c4087308f8a4a7f6e753d104fe\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-0608d4555585de4ed8ba4ed1':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
	* Mar 10 21:25:23 newest-cni-20210310205436-6496 kubelet[6163]: W0310 21:25:23.634055    6163 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-74ff55c5b-vhpfw_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "ae1304e0a077bdc05c70c7d3f6e683346eb1e4c4087308f8a4a7f6e753d104fe"
	* Mar 10 21:25:28 newest-cni-20210310205436-6496 kubelet[6163]: W0310 21:25:28.958556    6163 cni.go:333] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "ae1304e0a077bdc05c70c7d3f6e683346eb1e4c4087308f8a4a7f6e753d104fe"
	* Mar 10 21:25:29 newest-cni-20210310205436-6496 kubelet[6163]: W0310 21:25:29.825904    6163 pod_container_deletor.go:79] Container "ae1304e0a077bdc05c70c7d3f6e683346eb1e4c4087308f8a4a7f6e753d104fe" not found in pod's containers
	* Mar 10 21:25:39 newest-cni-20210310205436-6496 kubelet[6163]: E0310 21:25:39.765479    6163 cadvisor_stats_provider.go:401] Partial failure issuing cadvisor.ContainerInfoV2: partial failures: ["/docker/e430f48f8334e2f39cce27878f96427388bf7a8d81e500242401e12e2e0754e7/kubepods/burstable/pod3c7d875a-5a68-4ecb-99ef-aae1e85751fb/ae1304e0a077bdc05c70c7d3f6e683346eb1e4c4087308f8a4a7f6e753d104fe": RecentStats: unable to find data in memory cache]
	* Mar 10 21:26:32 newest-cni-20210310205436-6496 kubelet[6163]: I0310 21:26:32.236339    6163 trace.go:205] Trace[1812183098]: "iptables Monitor CANARY check" (10-Mar-2021 21:26:29.780) (total time: 2456ms):
	* Mar 10 21:26:32 newest-cni-20210310205436-6496 kubelet[6163]: Trace[1812183098]: [2.4561234s] [2.4561234s] END
	* 
	* ==> storage-provisioner [9f1d686b291b] <==
	* I0310 21:24:00.892938       1 storage_provisioner.go:115] Initializing the minikube storage provisioner...
	* I0310 21:24:02.952577       1 storage_provisioner.go:140] Storage provisioner initialized, now starting service!
	* I0310 21:24:02.952727       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
	* I0310 21:24:20.579741       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	* I0310 21:24:20.656629       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-20210310205436-6496_4616cf89-735f-4de8-a3b8-0cd6772d9968!
	* I0310 21:24:20.658283       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1892a849-e057-479b-87d5-bb537968385d", APIVersion:"v1", ResourceVersion:"682", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-20210310205436-6496_4616cf89-735f-4de8-a3b8-0cd6772d9968 became leader
	* I0310 21:24:27.061564       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_newest-cni-20210310205436-6496_4616cf89-735f-4de8-a3b8-0cd6772d9968!
	* 
	* ==> storage-provisioner [ea08ce5b5547] <==
	* I0310 21:21:57.048741       1 storage_provisioner.go:115] Initializing the minikube storage provisioner...
	* I0310 21:21:58.072138       1 storage_provisioner.go:140] Storage provisioner initialized, now starting service!
	* I0310 21:21:58.072294       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
	* I0310 21:22:15.570365       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	* I0310 21:22:15.621400       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1892a849-e057-479b-87d5-bb537968385d", APIVersion:"v1", ResourceVersion:"637", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-20210310205436-6496_6e67437c-64a0-44b0-8805-a22f68eaaea8 became leader
	* I0310 21:22:15.621630       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-20210310205436-6496_6e67437c-64a0-44b0-8805-a22f68eaaea8!
	* I0310 21:22:16.272563       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_newest-cni-20210310205436-6496_6e67437c-64a0-44b0-8805-a22f68eaaea8!
	* I0310 21:23:03.227699       1 leaderelection.go:288] failed to renew lease kube-system/k8s.io-minikube-hostpath: failed to tryAcquireOrRenew context deadline exceeded
	* F0310 21:23:03.227783       1 controller.go:877] leaderelection lost
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------|------------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| Command |                      Args                      |                    Profile                     |          User           | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------|------------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| -p      | cert-options-20210310203249-6496               | cert-options-20210310203249-6496               | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:50:36 GMT | Wed, 10 Mar 2021 20:50:43 GMT |
	|         | ssh openssl x509 -text -noout -in              |                                                |                         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt          |                                                |                         |         |                               |                               |
	| delete  | -p                                             | cert-options-20210310203249-6496               | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:51:10 GMT | Wed, 10 Mar 2021 20:51:56 GMT |
	|         | cert-options-20210310203249-6496               |                                                |                         |         |                               |                               |
	| delete  | -p                                             | disable-driver-mounts-20210310205156-6496      | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:51:57 GMT | Wed, 10 Mar 2021 20:52:02 GMT |
	|         | disable-driver-mounts-20210310205156-6496      |                                                |                         |         |                               |                               |
	| -p      | force-systemd-flag-20210310203447-6496         | force-systemd-flag-20210310203447-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:53:03 GMT | Wed, 10 Mar 2021 20:53:44 GMT |
	|         | ssh docker info --format                       |                                                |                         |         |                               |                               |
	|         |                               |                                                |                         |         |                               |                               |
	| delete  | -p                                             | force-systemd-flag-20210310203447-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:54:07 GMT | Wed, 10 Mar 2021 20:54:36 GMT |
	|         | force-systemd-flag-20210310203447-6496         |                                                |                         |         |                               |                               |
	| stop    | -p                                             | old-k8s-version-20210310204459-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:02:19 GMT | Wed, 10 Mar 2021 21:02:40 GMT |
	|         | old-k8s-version-20210310204459-6496            |                                                |                         |         |                               |                               |
	|         | --alsologtostderr -v=3                         |                                                |                         |         |                               |                               |
	| addons  | enable dashboard -p                            | old-k8s-version-20210310204459-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:02:42 GMT | Wed, 10 Mar 2021 21:02:42 GMT |
	|         | old-k8s-version-20210310204459-6496            |                                                |                         |         |                               |                               |
	| -p      | embed-certs-20210310205017-6496                | embed-certs-20210310205017-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:07:05 GMT | Wed, 10 Mar 2021 21:08:33 GMT |
	|         | logs -n 25                                     |                                                |                         |         |                               |                               |
	| start   | -p                                             | stopped-upgrade-20210310201637-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:52:21 GMT | Wed, 10 Mar 2021 21:09:23 GMT |
	|         | stopped-upgrade-20210310201637-6496            |                                                |                         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                |                                                |                         |         |                               |                               |
	|         | -v=1 --driver=docker                           |                                                |                         |         |                               |                               |
	| logs    | -p                                             | stopped-upgrade-20210310201637-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:09:23 GMT | Wed, 10 Mar 2021 21:10:51 GMT |
	|         | stopped-upgrade-20210310201637-6496            |                                                |                         |         |                               |                               |
	| delete  | -p                                             | stopped-upgrade-20210310201637-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:10:52 GMT | Wed, 10 Mar 2021 21:11:13 GMT |
	|         | stopped-upgrade-20210310201637-6496            |                                                |                         |         |                               |                               |
	| delete  | -p                                             | running-upgrade-20210310201637-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:11:45 GMT | Wed, 10 Mar 2021 21:12:11 GMT |
	|         | running-upgrade-20210310201637-6496            |                                                |                         |         |                               |                               |
	| stop    | -p                                             | embed-certs-20210310205017-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:12:03 GMT | Wed, 10 Mar 2021 21:12:38 GMT |
	|         | embed-certs-20210310205017-6496                |                                                |                         |         |                               |                               |
	|         | --alsologtostderr -v=3                         |                                                |                         |         |                               |                               |
	| addons  | enable dashboard -p                            | embed-certs-20210310205017-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:12:40 GMT | Wed, 10 Mar 2021 21:12:41 GMT |
	|         | embed-certs-20210310205017-6496                |                                                |                         |         |                               |                               |
	| -p      | kubernetes-upgrade-20210310201637-6496         | kubernetes-upgrade-20210310201637-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:11:50 GMT | Wed, 10 Mar 2021 21:15:02 GMT |
	|         | logs -n 25                                     |                                                |                         |         |                               |                               |
	| delete  | -p                                             | kubernetes-upgrade-20210310201637-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:15:15 GMT | Wed, 10 Mar 2021 21:15:46 GMT |
	|         | kubernetes-upgrade-20210310201637-6496         |                                                |                         |         |                               |                               |
	| delete  | -p                                             | missing-upgrade-20210310201637-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:15:38 GMT | Wed, 10 Mar 2021 21:16:03 GMT |
	|         | missing-upgrade-20210310201637-6496            |                                                |                         |         |                               |                               |
	| -p      | default-k8s-different-port-20210310205202-6496 | default-k8s-different-port-20210310205202-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:12:03 GMT | Wed, 10 Mar 2021 21:16:15 GMT |
	|         | logs -n 25                                     |                                                |                         |         |                               |                               |
	| stop    | -p                                             | no-preload-20210310204947-6496                 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:15:57 GMT | Wed, 10 Mar 2021 21:16:31 GMT |
	|         | no-preload-20210310204947-6496                 |                                                |                         |         |                               |                               |
	|         | --alsologtostderr -v=3                         |                                                |                         |         |                               |                               |
	| addons  | enable dashboard -p                            | no-preload-20210310204947-6496                 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:16:33 GMT | Wed, 10 Mar 2021 21:16:34 GMT |
	|         | no-preload-20210310204947-6496                 |                                                |                         |         |                               |                               |
	| delete  | -p                                             | old-k8s-version-20210310204459-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:18:53 GMT | Wed, 10 Mar 2021 21:19:16 GMT |
	|         | old-k8s-version-20210310204459-6496            |                                                |                         |         |                               |                               |
	| delete  | -p                                             | no-preload-20210310204947-6496                 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:20:59 GMT | Wed, 10 Mar 2021 21:21:26 GMT |
	|         | no-preload-20210310204947-6496                 |                                                |                         |         |                               |                               |
	| -p      | embed-certs-20210310205017-6496                | embed-certs-20210310205017-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:20:59 GMT | Wed, 10 Mar 2021 21:24:36 GMT |
	|         | logs -n 25                                     |                                                |                         |         |                               |                               |
	| delete  | -p                                             | embed-certs-20210310205017-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:24:57 GMT | Wed, 10 Mar 2021 21:25:18 GMT |
	|         | embed-certs-20210310205017-6496                |                                                |                         |         |                               |                               |
	| -p      | default-k8s-different-port-20210310205202-6496 | default-k8s-different-port-20210310205202-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:23:55 GMT | Wed, 10 Mar 2021 21:25:58 GMT |
	|         | logs -n 25                                     |                                                |                         |         |                               |                               |
	|---------|------------------------------------------------|------------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/03/10 21:25:19
	* Running on machine: windows-server-1
	* Binary: Built with gc go1.16 for windows/amd64
	* Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	* I0310 21:25:19.133084    9020 out.go:239] Setting OutFile to fd 1756 ...
	* I0310 21:25:19.135222    9020 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 21:25:19.135222    9020 out.go:252] Setting ErrFile to fd 1852...
	* I0310 21:25:19.135222    9020 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 21:25:19.152022    9020 out.go:246] Setting JSON to false
	* I0310 21:25:19.156890    9020 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":36985,"bootTime":1615374534,"procs":116,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	* W0310 21:25:19.157501    9020 start.go:116] gopshost.Virtualization returned error: not implemented yet
	* I0310 21:25:19.172510    9020 out.go:129] * [kindnet-20210310212518-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	* I0310 21:25:19.175856    9020 out.go:129]   - MINIKUBE_LOCATION=10722
	* I0310 21:25:19.182537    9020 driver.go:323] Setting default libvirt URI to qemu:///system
	* I0310 21:25:19.743858    9020 docker.go:119] docker version: linux-20.10.2
	* I0310 21:25:19.752225    9020 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 21:25:20.728372    9020 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:90 OomKillDisable:true NGoroutines:71 SystemTime:2021-03-10 21:25:20.2965852 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 21:25:20.737118    9020 out.go:129] * Using the docker driver based on user configuration
	* I0310 21:25:20.737414    9020 start.go:276] selected driver: docker
	* I0310 21:25:20.737414    9020 start.go:718] validating driver "docker" against <nil>
	* I0310 21:25:20.737414    9020 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	* I0310 21:25:21.867711    9020 out.go:129] 
	* W0310 21:25:21.867711    9020 out.go:191] X Docker Desktop only has 20001MiB available, you may encounter application deployment failures.
	* W0310 21:25:21.868710    9020 out.go:191] * Suggestion: 
	* 
	*     1. Open the "Docker Desktop" menu by clicking the Docker icon in the system tray
	*     2. Click "Settings"
	*     3. Click "Resources"
	*     4. Increase "Memory" slider bar to 2.25 GB or higher
	*     5. Click "Apply & Restart"
	* W0310 21:25:21.868710    9020 out.go:191] * Documentation: https://docs.docker.com/docker-for-windows/#resources
	* I0310 21:25:21.872840    9020 out.go:129] 
	* I0310 21:25:21.896212    9020 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 21:25:22.825433    9020 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:90 OomKillDisable:true NGoroutines:71 SystemTime:2021-03-10 21:25:22.4517093 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://i
ndex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:
[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 21:25:22.826095    9020 start_flags.go:253] no existing cluster config was found, will generate one from the flags 
	* I0310 21:25:22.827624    9020 start_flags.go:717] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	* I0310 21:25:22.831822    9020 cni.go:74] Creating CNI manager for "kindnet"
	* I0310 21:25:22.831822    9020 start_flags.go:393] Found "CNI" CNI - setting NetworkPlugin=cni
	* I0310 21:25:22.832091    9020 start_flags.go:398] config:
	* {Name:kindnet-20210310212518-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:kindnet-20210310212518-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket:
NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 21:25:22.835545    9020 out.go:129] * Starting control plane node kindnet-20210310212518-6496 in cluster kindnet-20210310212518-6496
	* I0310 21:25:23.464130    9020 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	* I0310 21:25:23.464623    9020 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	* I0310 21:25:23.464984    9020 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 21:25:23.465690    9020 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 21:25:23.466115    9020 cache.go:54] Caching tarball of preloaded images
	* I0310 21:25:23.466480    9020 preload.go:131] Found C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	* I0310 21:25:23.466693    9020 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on docker
	* I0310 21:25:23.467119    9020 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\kindnet-20210310212518-6496\config.json ...
	* I0310 21:25:23.467579    9020 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\kindnet-20210310212518-6496\config.json: {Name:mk7fa7cd396dbde3f1faddb1c3bfe41f85b8368d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 21:25:23.484672    9020 cache.go:185] Successfully downloaded all kic artifacts
	* I0310 21:25:23.485524    9020 start.go:313] acquiring machines lock for kindnet-20210310212518-6496: {Name:mkbdbdc880a7102685c8f1577e1afbb64ac3b053 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:25:23.485953    9020 start.go:317] acquired machines lock for "kindnet-20210310212518-6496" in 429.1µs
	* I0310 21:25:23.486191    9020 start.go:89] Provisioning new machine with config: &{Name:kindnet-20210310212518-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:kindnet-20210310212518-6496 Namespace:default APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
	* I0310 21:25:23.486492    9020 start.go:126] createHost starting for "" (driver="docker")
	* I0310 21:25:23.490814    9020 out.go:150] * Creating docker container (CPUs=2, Memory=1800MB) ...
	* I0310 21:25:23.491566    9020 start.go:160] libmachine.API.Create for "kindnet-20210310212518-6496" (driver="docker")
	* I0310 21:25:23.497266    9020 client.go:168] LocalClient.Create starting
	* I0310 21:25:23.497266    9020 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\ca.pem
	* I0310 21:25:23.497266    9020 main.go:121] libmachine: Decoding PEM data...
	* I0310 21:25:23.498271    9020 main.go:121] libmachine: Parsing certificate...
	* I0310 21:25:23.498271    9020 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\cert.pem
	* I0310 21:25:23.498271    9020 main.go:121] libmachine: Decoding PEM data...
	* I0310 21:25:23.498271    9020 main.go:121] libmachine: Parsing certificate...
	* I0310 21:25:23.531644    9020 cli_runner.go:115] Run: docker network inspect kindnet-20210310212518-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	* W0310 21:25:24.155279    9020 cli_runner.go:162] docker network inspect kindnet-20210310212518-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	* I0310 21:25:24.171193    9020 network_create.go:240] running [docker network inspect kindnet-20210310212518-6496] to gather additional debugging logs...
	* I0310 21:25:24.171193    9020 cli_runner.go:115] Run: docker network inspect kindnet-20210310212518-6496
	* W0310 21:25:24.771642    9020 cli_runner.go:162] docker network inspect kindnet-20210310212518-6496 returned with exit code 1
	* I0310 21:25:24.771642    9020 network_create.go:243] error running [docker network inspect kindnet-20210310212518-6496]: docker network inspect kindnet-20210310212518-6496: exit status 1
	* stdout:
	* []
	* 
	* stderr:
	* Error: No such network: kindnet-20210310212518-6496
	* I0310 21:25:24.771642    9020 network_create.go:245] output of [docker network inspect kindnet-20210310212518-6496]: -- stdout --
	* []
	* 
	* -- /stdout --
	* ** stderr ** 
	* Error: No such network: kindnet-20210310212518-6496
	* 
	* ** /stderr **
	* I0310 21:25:24.783530    9020 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	* I0310 21:25:25.405735    9020 network.go:193] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	* I0310 21:25:25.405735    9020 network_create.go:91] attempt to create network 192.168.49.0/24 with subnet: kindnet-20210310212518-6496 and gateway 192.168.49.1 and MTU of 1500 ...
	* I0310 21:25:25.413187    9020 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20210310212518-6496
	* I0310 21:25:26.454282    9020 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20210310212518-6496: (1.0407404s)
	* I0310 21:25:26.455324    9020 kic.go:102] calculated static IP "192.168.49.97" for the "kindnet-20210310212518-6496" container
	* I0310 21:25:26.478701    9020 cli_runner.go:115] Run: docker ps -a --format 
	* I0310 21:25:27.093079    9020 cli_runner.go:115] Run: docker volume create kindnet-20210310212518-6496 --label name.minikube.sigs.k8s.io=kindnet-20210310212518-6496 --label created_by.minikube.sigs.k8s.io=true
	* I0310 21:25:27.778804    9020 oci.go:102] Successfully created a docker volume kindnet-20210310212518-6496
	* I0310 21:25:27.781396    9020 cli_runner.go:115] Run: docker run --rm --name kindnet-20210310212518-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20210310212518-6496 --entrypoint /usr/bin/test -v kindnet-20210310212518-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib
	* I0310 21:25:32.576233    9020 cli_runner.go:168] Completed: docker run --rm --name kindnet-20210310212518-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20210310212518-6496 --entrypoint /usr/bin/test -v kindnet-20210310212518-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib: (4.7948676s)
	* I0310 21:25:32.576802    9020 oci.go:106] Successfully prepared a docker volume kindnet-20210310212518-6496
	* I0310 21:25:32.576802    9020 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 21:25:32.577293    9020 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 21:25:32.577293    9020 kic.go:175] Starting extracting preloaded images to volume ...
	* I0310 21:25:32.584923    9020 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20210310212518-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir
	* I0310 21:25:32.584923    9020 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* W0310 21:25:33.297424    9020 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20210310212518-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	* I0310 21:25:33.297424    9020 kic.go:182] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20210310212518-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	* stdout:
	* 
	* stderr:
	* docker: Error response from daemon: status code not OK but 500: [binary-serialized System.Exception payload]
	* System.Exception: The notification platform is unavailable.
	* 
	* The notification platform is unavailable.
	* 
	*    at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)
	*    at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.WPF\PromptShareDirectory.cs:line 53
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__8.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 95
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 55
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.HttpApi\Controllers\FilesharingController.cs:line 21
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()
	* [binary-serialized exception metadata; recoverable fields follow]
	* CreateToastNotifier
	* Windows.UI, Version=255.255.255.255, Culture=neutral, PublicKeyToken=null, ContentType=WindowsRuntime
	* Windows.UI.Notifications.ToastNotificationManager
	* Windows.UI.Notifications.ToastNotifier CreateToastNotifier(System.String)
	* RestrictedDescription: The notification platform is unavailable.
	* RestrictedErrorReference / RestrictedCapabilitySid / __RestrictedErrorObject / __HasRestrictedLanguageErrorObject
	* See 'docker run --help'.
	* I0310 21:25:33.678447    9020 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0935307s)
	* I0310 21:25:33.679066    9020 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:91 OomKillDisable:true NGoroutines:71 SystemTime:2021-03-10 21:25:33.1546865 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://i
ndex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:
[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 21:25:33.693718    9020 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	* I0310 21:25:34.850661    9020 cli_runner.go:168] Completed: docker info --format "'{{json .SecurityOptions}}'": (1.1569497s)
	* I0310 21:25:34.863151    9020 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-20210310212518-6496 --name kindnet-20210310212518-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20210310212518-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-20210310212518-6496 --network kindnet-20210310212518-6496 --ip 192.168.49.97 --volume kindnet-20210310212518-6496:/var --security-opt apparmor=unconfined --memory=1800mb --memory-swap=1800mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
	* I0310 21:25:38.433874    9020 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-20210310212518-6496 --name kindnet-20210310212518-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20210310212518-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-20210310212518-6496 --network kindnet-20210310212518-6496 --ip 192.168.49.97 --volume kindnet-20210310212518-6496:/var --security-opt apparmor=unconfined --memory=1800mb --memory-swap=1800mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e: (3.570038s)
	* I0310 21:25:38.445063    9020 cli_runner.go:115] Run: docker container inspect kindnet-20210310212518-6496 --format=
	* I0310 21:25:39.083330    9020 cli_runner.go:115] Run: docker container inspect kindnet-20210310212518-6496 --format=
	* I0310 21:25:39.699176    9020 cli_runner.go:115] Run: docker exec kindnet-20210310212518-6496 stat /var/lib/dpkg/alternatives/iptables
	* I0310 21:25:40.825971    9020 cli_runner.go:168] Completed: docker exec kindnet-20210310212518-6496 stat /var/lib/dpkg/alternatives/iptables: (1.1263832s)
	* I0310 21:25:40.825971    9020 oci.go:278] the created container "kindnet-20210310212518-6496" has a running status.
	* I0310 21:25:40.825971    9020 kic.go:206] Creating ssh key for kic: C:\Users\jenkins\.minikube\machines\kindnet-20210310212518-6496\id_rsa...
	* I0310 21:25:40.987468    9020 kic_runner.go:188] docker (temp): C:\Users\jenkins\.minikube\machines\kindnet-20210310212518-6496\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	* I0310 21:25:42.006890    9020 cli_runner.go:115] Run: docker container inspect kindnet-20210310212518-6496 --format=
	* I0310 21:25:42.643577    9020 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	* I0310 21:25:42.643577    9020 kic_runner.go:115] Args: [docker exec --privileged kindnet-20210310212518-6496 chown docker:docker /home/docker/.ssh/authorized_keys]
	* I0310 21:25:43.646440    9020 kic.go:240] ensuring only current user has permissions to key file located at : C:\Users\jenkins\.minikube\machines\kindnet-20210310212518-6496\id_rsa...
	* I0310 21:25:46.096020   16712 out.go:150]   - Configuring RBAC rules ...
	* I0310 21:25:44.460620    9020 cli_runner.go:115] Run: docker container inspect kindnet-20210310212518-6496 --format=
	* I0310 21:25:45.118327    9020 machine.go:88] provisioning docker machine ...
	* I0310 21:25:45.118662    9020 ubuntu.go:169] provisioning hostname "kindnet-20210310212518-6496"
	* I0310 21:25:45.127519    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	* I0310 21:25:45.733245    9020 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:25:45.739492    9020 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55213 <nil> <nil>}
	* I0310 21:25:45.739492    9020 main.go:121] libmachine: About to run SSH command:
	* sudo hostname kindnet-20210310212518-6496 && echo "kindnet-20210310212518-6496" | sudo tee /etc/hostname
	* I0310 21:25:46.892778    9020 main.go:121] libmachine: SSH cmd err, output: <nil>: kindnet-20210310212518-6496
	* 
	* I0310 21:25:46.901274    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	* I0310 21:25:47.510414    9020 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:25:47.513743    9020 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55213 <nil> <nil>}
	* I0310 21:25:47.513997    9020 main.go:121] libmachine: About to run SSH command:
	* 
	* 		if ! grep -xq '.*\skindnet-20210310212518-6496' /etc/hosts; then
	* 			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	* 				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-20210310212518-6496/g' /etc/hosts;
	* 			else 
	* 				echo '127.0.1.1 kindnet-20210310212518-6496' | sudo tee -a /etc/hosts; 
	* 			fi
	* 		fi
	* I0310 21:25:48.107941    9020 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	* I0310 21:25:48.107941    9020 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	* I0310 21:25:48.107941    9020 ubuntu.go:177] setting up certificates
	* I0310 21:25:48.107941    9020 provision.go:83] configureAuth start
	* I0310 21:25:48.116027    9020 cli_runner.go:115] Run: docker container inspect -f "" kindnet-20210310212518-6496
	* I0310 21:25:48.708691    9020 provision.go:137] copyHostCerts
	* I0310 21:25:48.709980    9020 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	* I0310 21:25:48.709980    9020 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	* I0310 21:25:48.710571    9020 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	* I0310 21:25:48.713950    9020 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	* I0310 21:25:48.713950    9020 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	* I0310 21:25:48.713950    9020 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	* I0310 21:25:48.716125    9020 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	* I0310 21:25:48.716125    9020 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	* I0310 21:25:48.717581    9020 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	* I0310 21:25:48.720461    9020 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.kindnet-20210310212518-6496 san=[192.168.49.97 127.0.0.1 localhost 127.0.0.1 minikube kindnet-20210310212518-6496]
	* I0310 21:25:49.098088    9020 provision.go:165] copyRemoteCerts
	* I0310 21:25:49.109248    9020 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	* I0310 21:25:49.116427    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	* I0310 21:25:49.653578    9020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55213 SSHKeyPath:C:\Users\jenkins\.minikube\machines\kindnet-20210310212518-6496\id_rsa Username:docker}
	* I0310 21:25:50.174980    9020 ssh_runner.go:189] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.0657385s)
	* I0310 21:25:50.176116    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	* I0310 21:25:50.666393    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	* I0310 21:25:51.152917    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1253 bytes)
	* I0310 21:25:51.524341    9020 provision.go:86] duration metric: configureAuth took 3.4164203s
	* I0310 21:25:51.524341    9020 ubuntu.go:193] setting minikube options for container-runtime
	* I0310 21:25:51.535902    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	* I0310 21:25:52.068791    9020 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:25:52.069183    9020 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55213 <nil> <nil>}
	* I0310 21:25:52.069183    9020 main.go:121] libmachine: About to run SSH command:
	* df --output=fstype / | tail -n 1
	* I0310 21:25:52.713252    9020 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	* 
	* I0310 21:25:52.713449    9020 ubuntu.go:71] root file system type: overlay
	* I0310 21:25:52.714163    9020 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	* I0310 21:25:52.714988    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	* I0310 21:25:53.237040    9020 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:25:53.237493    9020 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55213 <nil> <nil>}
	* I0310 21:25:53.238129    9020 main.go:121] libmachine: About to run SSH command:
	* sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP \$MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* " | sudo tee /lib/systemd/system/docker.service.new
	* I0310 21:25:54.130278    9020 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP $MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* 
	* I0310 21:25:54.137933    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	* I0310 21:25:54.719654    9020 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:25:54.720169    9020 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55213 <nil> <nil>}
	* I0310 21:25:54.720399    9020 main.go:121] libmachine: About to run SSH command:
	* sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	* I0310 21:25:58.900155    7648 out.go:150]   - Generating certificates and keys ...
	* I0310 21:25:58.922169    7648 out.go:150]   - Booting up control plane ...
	* I0310 21:25:58.929431    7648 out.go:150]   - Configuring RBAC rules ...
	* I0310 21:25:59.029333    7648 cni.go:74] Creating CNI manager for "cilium"
	* I0310 21:25:59.041317    7648 out.go:129] * Configuring Cilium (Container Networking Interface) ...
	* I0310 21:25:59.055985    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "grep 'bpffs /sys/fs/bpf' /proc/mounts || sudo mount bpffs -t bpf /sys/fs/bpf"
	* I0310 21:26:00.880834    7648 ssh_runner.go:189] Completed: sudo /bin/bash -c "grep 'bpffs /sys/fs/bpf' /proc/mounts || sudo mount bpffs -t bpf /sys/fs/bpf": (1.8246791s)
	* I0310 21:26:00.885066    7648 cni.go:160] applying CNI manifest using /var/lib/minikube/binaries/v1.20.2/kubectl ...
	* I0310 21:26:00.885066    7648 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (18465 bytes)
	* I0310 21:26:02.702431    7648 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	* I0310 21:26:05.662041    9020 main.go:121] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-01-29 14:31:32.000000000 +0000
	* +++ /lib/systemd/system/docker.service.new	2021-03-10 21:25:54.118481000 +0000
	* @@ -1,30 +1,32 @@
	*  [Unit]
	*  Description=Docker Application Container Engine
	*  Documentation=https://docs.docker.com
	* +BindsTo=containerd.service
	*  After=network-online.target firewalld.service containerd.service
	*  Wants=network-online.target
	* -Requires=docker.socket containerd.service
	* +Requires=docker.socket
	* +StartLimitBurst=3
	* +StartLimitIntervalSec=60
	*  
	*  [Service]
	*  Type=notify
	* -# the default is not to use systemd for cgroups because the delegate issues still
	* -# exists and systemd currently does not support the cgroup feature set required
	* -# for containers run by docker
	* -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	* -ExecReload=/bin/kill -s HUP $MAINPID
	* -TimeoutSec=0
	* -RestartSec=2
	* -Restart=always
	* -
	* -# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	* -# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	* -# to make them work for either version of systemd.
	* -StartLimitBurst=3
	* +Restart=on-failure
	*  
	* -# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	* -# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	* -# this option work for either version of systemd.
	* -StartLimitInterval=60s
	* +
	* +
	* +# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* +# The base configuration already specifies an 'ExecStart=...' command. The first directive
	* +# here is to clear out that command inherited from the base configuration. Without this,
	* +# the command from the base configuration and the command specified here are treated as
	* +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* +# will catch this invalid input and refuse to start the service with an error like:
	* +#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* +
	* +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* +# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* +ExecStart=
	* +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* +ExecReload=/bin/kill -s HUP $MAINPID
	*  
	*  # Having non-zero Limit*s causes performance problems due to accounting overhead
	*  # in the kernel. We recommend using cgroups to do container-local accounting.
	* @@ -32,16 +34,16 @@
	*  LimitNPROC=infinity
	*  LimitCORE=infinity
	*  
	* -# Comment TasksMax if your systemd version does not support it.
	* -# Only systemd 226 and above support this option.
	* +# Uncomment TasksMax if your systemd version supports it.
	* +# Only systemd 226 and above support this version.
	*  TasksMax=infinity
	* +TimeoutStartSec=0
	*  
	*  # set delegate yes so that systemd does not reset the cgroups of docker containers
	*  Delegate=yes
	*  
	*  # kill only the docker process, not all processes in the cgroup
	*  KillMode=process
	* -OOMScoreAdjust=-500
	*  
	*  [Install]
	*  WantedBy=multi-user.target
	* Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	* Executing: /lib/systemd/systemd-sysv-install enable docker
	* 
	* I0310 21:26:05.662412    9020 machine.go:91] provisioned docker machine in 20.5442044s
	* I0310 21:26:05.662412    9020 client.go:171] LocalClient.Create took 42.1653993s
	* I0310 21:26:05.662412    9020 start.go:168] duration metric: libmachine.API.Create for "kindnet-20210310212518-6496" took 42.1710991s
	* I0310 21:26:05.662730    9020 start.go:267] post-start starting for "kindnet-20210310212518-6496" (driver="docker")
	* I0310 21:26:05.662730    9020 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	* I0310 21:26:05.676598    9020 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	* I0310 21:26:05.691506    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	* I0310 21:26:06.280728    9020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55213 SSHKeyPath:C:\Users\jenkins\.minikube\machines\kindnet-20210310212518-6496\id_rsa Username:docker}
	* I0310 21:26:06.726049    9020 ssh_runner.go:189] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.0494567s)
	* I0310 21:26:06.735883    9020 ssh_runner.go:149] Run: cat /etc/os-release
	* I0310 21:26:06.807323    9020 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	* I0310 21:26:06.807323    9020 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	* I0310 21:26:06.808117    9020 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	* I0310 21:26:06.808117    9020 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	* I0310 21:26:06.808117    9020 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	* I0310 21:26:06.808617    9020 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	* I0310 21:26:06.811728    9020 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	* I0310 21:26:06.813720    9020 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	* I0310 21:26:06.831013    9020 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	* I0310 21:26:06.941863    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	* I0310 21:26:07.218906    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	* I0310 21:26:07.657114    9020 start.go:270] post-start completed in 1.9943954s
	* I0310 21:26:07.695521    9020 cli_runner.go:115] Run: docker container inspect -f "" kindnet-20210310212518-6496
	* I0310 21:26:08.340486    9020 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\kindnet-20210310212518-6496\config.json ...
	* I0310 21:26:08.388865    9020 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	* I0310 21:26:08.394033    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	* I0310 21:26:09.106917    9020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55213 SSHKeyPath:C:\Users\jenkins\.minikube\machines\kindnet-20210310212518-6496\id_rsa Username:docker}
	* I0310 21:26:09.523088    9020 ssh_runner.go:189] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.1335726s)
	* I0310 21:26:09.523088    9020 start.go:129] duration metric: createHost completed in 46.0368705s
	* I0310 21:26:09.523088    9020 start.go:80] releasing machines lock for "kindnet-20210310212518-6496", held for 46.0374102s
	* I0310 21:26:09.531213    9020 cli_runner.go:115] Run: docker container inspect -f "" kindnet-20210310212518-6496
	* I0310 21:26:10.130999    9020 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	* I0310 21:26:10.138129    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	* I0310 21:26:10.153462    9020 ssh_runner.go:149] Run: systemctl --version
	* I0310 21:26:10.170494    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	* I0310 21:26:10.723975    9020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55213 SSHKeyPath:C:\Users\jenkins\.minikube\machines\kindnet-20210310212518-6496\id_rsa Username:docker}
	* I0310 21:26:10.736675    9020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55213 SSHKeyPath:C:\Users\jenkins\.minikube\machines\kindnet-20210310212518-6496\id_rsa Username:docker}
	* I0310 21:26:11.472444    9020 ssh_runner.go:189] Completed: systemctl --version: (1.3189889s)
	* I0310 21:26:11.483544    9020 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.3525522s)
	* I0310 21:26:11.484141    9020 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	* I0310 21:26:11.610073    9020 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	* I0310 21:26:11.724755    9020 cruntime.go:206] skipping containerd shutdown because we are bound to it
	* I0310 21:26:11.756221    9020 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	* I0310 21:26:11.857511    9020 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	* image-endpoint: unix:///var/run/dockershim.sock
	* " | sudo tee /etc/crictl.yaml"
	* I0310 21:26:12.101385    9020 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	* I0310 21:26:12.215652    9020 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	* I0310 21:26:13.295072    9020 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.0794259s)
	* I0310 21:26:13.306393    9020 ssh_runner.go:149] Run: sudo systemctl start docker
	* I0310 21:26:13.407988    9020 ssh_runner.go:149] Run: docker version --format 
	* I0310 21:26:16.708840   16712 cni.go:74] Creating CNI manager for "calico"
	* I0310 21:26:16.712164   16712 out.go:129] * Configuring Calico (Container Networking Interface) ...
	* I0310 21:26:16.712493   16712 cni.go:160] applying CNI manifest using /var/lib/minikube/binaries/v1.20.2/kubectl ...
	* I0310 21:26:16.712493   16712 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (22544 bytes)
	* I0310 21:26:14.115986    9020 out.go:150] * Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	* I0310 21:26:14.123723    9020 cli_runner.go:115] Run: docker exec -t kindnet-20210310212518-6496 dig +short host.docker.internal
	* I0310 21:26:15.265920    9020 cli_runner.go:168] Completed: docker exec -t kindnet-20210310212518-6496 dig +short host.docker.internal: (1.1422031s)
	* I0310 21:26:15.266390    9020 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	* I0310 21:26:15.276010    9020 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	* I0310 21:26:15.318672    9020 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	* I0310 21:26:15.395379    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	* I0310 21:26:15.978322    9020 localpath.go:92] copying C:\Users\jenkins\.minikube\client.crt -> C:\Users\jenkins\.minikube\profiles\kindnet-20210310212518-6496\client.crt
	* I0310 21:26:15.982109    9020 localpath.go:117] copying C:\Users\jenkins\.minikube\client.key -> C:\Users\jenkins\.minikube\profiles\kindnet-20210310212518-6496\client.key
	* I0310 21:26:15.986380    9020 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 21:26:15.986917    9020 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 21:26:16.002777    9020 ssh_runner.go:149] Run: docker images --format :
	* I0310 21:26:16.552477    9020 docker.go:423] Got preloaded images: 
	* I0310 21:26:16.553151    9020 docker.go:429] k8s.gcr.io/kube-proxy:v1.20.2 wasn't preloaded
	* I0310 21:26:16.564993    9020 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	* I0310 21:26:16.654115    9020 ssh_runner.go:149] Run: which lz4
	* I0310 21:26:16.695167    9020 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	* I0310 21:26:16.732557    9020 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	* I0310 21:26:16.733554    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (515083977 bytes)
	* I0310 21:26:19.445919   16712 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	* I0310 21:27:04.839019   12868 out.go:150]   - Generating certificates and keys ...
	* I0310 21:27:04.845001   12868 out.go:150]   - Booting up control plane ...
	* I0310 21:27:04.848370   12868 kubeadm.go:387] StartCluster complete in 10m50.1895672s
	* I0310 21:27:04.872853   12868 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format=
	* I0310 21:27:19.410926   12868 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-apiserver --format=: (14.537744s)
	* I0310 21:27:19.411429   12868 logs.go:255] 1 containers: [cc2004a03eb1]
	* I0310 21:27:19.424217   12868 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format=
	* I0310 21:27:30.736484    9020 docker.go:388] Took 74.052978 seconds to copy over tarball
	* I0310 21:27:30.750697    9020 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	* I0310 21:27:34.708337   12868 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_etcd --format=: (15.2841878s)
	* I0310 21:27:34.708337   12868 logs.go:255] 1 containers: [e2b3a62f4f6c]
	* I0310 21:27:34.709872   12868 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format=

                                                
                                                
-- /stdout --
** stderr ** 
	E0310 21:26:22.838211   22012 out.go:340] unable to execute * 2021-03-10 21:26:01.623866 W | etcdserver: request "header:<ID:912955419576509837 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:0cab781df7a0258c>" with result "size:40" took too long (133.6854ms) to execute
	: html/template:* 2021-03-10 21:26:01.623866 W | etcdserver: request "header:<ID:912955419576509837 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:0cab781df7a0258c>" with result "size:40" took too long (133.6854ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 21:27:38.375703   22012 out.go:335] unable to parse "* I0310 21:25:19.752225    9020 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 21:25:19.752225    9020 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 21:27:38.454182   22012 out.go:335] unable to parse "* I0310 21:25:21.896212    9020 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 21:25:21.896212    9020 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 21:27:38.610341   22012 out.go:340] unable to execute * I0310 21:25:23.531644    9020 cli_runner.go:115] Run: docker network inspect kindnet-20210310212518-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	: template: * I0310 21:25:23.531644    9020 cli_runner.go:115] Run: docker network inspect kindnet-20210310212518-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	:1:283: executing "* I0310 21:25:23.531644    9020 cli_runner.go:115] Run: docker network inspect kindnet-20210310212518-6496 --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\"\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 21:27:38.618691   22012 out.go:340] unable to execute * W0310 21:25:24.155279    9020 cli_runner.go:162] docker network inspect kindnet-20210310212518-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	: template: * W0310 21:25:24.155279    9020 cli_runner.go:162] docker network inspect kindnet-20210310212518-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	:1:278: executing "* W0310 21:25:24.155279    9020 cli_runner.go:162] docker network inspect kindnet-20210310212518-6496 --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\" returned with exit code 1\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 21:27:38.683556   22012 out.go:340] unable to execute * I0310 21:25:24.783530    9020 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	: template: * I0310 21:25:24.783530    9020 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	:1:262: executing "* I0310 21:25:24.783530    9020 cli_runner.go:115] Run: docker network inspect bridge --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\"\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 21:27:38.757111   22012 out.go:335] unable to parse "* I0310 21:25:32.584923    9020 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 21:25:32.584923    9020 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 21:27:38.968903   22012 out.go:335] unable to parse "* I0310 21:25:33.678447    9020 cli_runner.go:168] Completed: docker system info --format \"{{json .}}\": (1.0935307s)\n": template: * I0310 21:25:33.678447    9020 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0935307s)
	:1: function "json" not defined - returning raw string.
	E0310 21:27:38.979311   22012 out.go:335] unable to parse "* I0310 21:25:33.693718    9020 cli_runner.go:115] Run: docker info --format \"'{{json .SecurityOptions}}'\"\n": template: * I0310 21:25:33.693718    9020 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	:1: function "json" not defined - returning raw string.
	E0310 21:27:38.986859   22012 out.go:335] unable to parse "* I0310 21:25:34.850661    9020 cli_runner.go:168] Completed: docker info --format \"'{{json .SecurityOptions}}'\": (1.1569497s)\n": template: * I0310 21:25:34.850661    9020 cli_runner.go:168] Completed: docker info --format "'{{json .SecurityOptions}}'": (1.1569497s)
	:1: function "json" not defined - returning raw string.
	E0310 21:27:39.052533   22012 out.go:340] unable to execute * I0310 21:25:45.127519    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	: template: * I0310 21:25:45.127519    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	:1:96: executing "* I0310 21:25:45.127519    9020 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" kindnet-20210310212518-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:27:39.066541   22012 out.go:335] unable to parse "* I0310 21:25:45.739492    9020 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55213 <nil> <nil>}\n": template: * I0310 21:25:45.739492    9020 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55213 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:27:39.086945   22012 out.go:340] unable to execute * I0310 21:25:46.901274    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	: template: * I0310 21:25:46.901274    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	:1:96: executing "* I0310 21:25:46.901274    9020 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" kindnet-20210310212518-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:27:39.091874   22012 out.go:335] unable to parse "* I0310 21:25:47.513743    9020 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55213 <nil> <nil>}\n": template: * I0310 21:25:47.513743    9020 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55213 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:27:39.206610   22012 out.go:340] unable to execute * I0310 21:25:49.116427    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	: template: * I0310 21:25:49.116427    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	:1:96: executing "* I0310 21:25:49.116427    9020 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" kindnet-20210310212518-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:27:39.248341   22012 out.go:340] unable to execute * I0310 21:25:51.535902    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	: template: * I0310 21:25:51.535902    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	:1:96: executing "* I0310 21:25:51.535902    9020 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" kindnet-20210310212518-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:27:39.258528   22012 out.go:335] unable to parse "* I0310 21:25:52.069183    9020 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55213 <nil> <nil>}\n": template: * I0310 21:25:52.069183    9020 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55213 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:27:39.280439   22012 out.go:340] unable to execute * I0310 21:25:52.714988    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	: template: * I0310 21:25:52.714988    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	:1:96: executing "* I0310 21:25:52.714988    9020 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" kindnet-20210310212518-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:27:39.294105   22012 out.go:335] unable to parse "* I0310 21:25:53.237493    9020 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55213 <nil> <nil>}\n": template: * I0310 21:25:53.237493    9020 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55213 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:27:39.631861   22012 out.go:340] unable to execute * I0310 21:25:54.137933    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	: template: * I0310 21:25:54.137933    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	:1:96: executing "* I0310 21:25:54.137933    9020 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" kindnet-20210310212518-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:27:39.641943   22012 out.go:335] unable to parse "* I0310 21:25:54.720169    9020 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55213 <nil> <nil>}\n": template: * I0310 21:25:54.720169    9020 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55213 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:27:39.965165   22012 out.go:340] unable to execute * I0310 21:26:05.691506    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	: template: * I0310 21:26:05.691506    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	:1:96: executing "* I0310 21:26:05.691506    9020 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" kindnet-20210310212518-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:27:40.050261   22012 out.go:340] unable to execute * I0310 21:26:08.394033    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	: template: * I0310 21:26:08.394033    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	:1:96: executing "* I0310 21:26:08.394033    9020 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" kindnet-20210310212518-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:27:40.079819   22012 out.go:340] unable to execute * I0310 21:26:10.138129    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	: template: * I0310 21:26:10.138129    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	:1:96: executing "* I0310 21:26:10.138129    9020 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" kindnet-20210310212518-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:27:40.097652   22012 out.go:340] unable to execute * I0310 21:26:10.170494    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	: template: * I0310 21:26:10.170494    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	:1:96: executing "* I0310 21:26:10.170494    9020 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" kindnet-20210310212518-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:27:40.196947   22012 out.go:340] unable to execute * I0310 21:26:15.395379    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	: template: * I0310 21:26:15.395379    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	:1:96: executing "* I0310 21:26:15.395379    9020 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"8443/tcp\") 0).HostPort}}'\" kindnet-20210310212518-6496\n" at <index .NetworkSettings.Ports "8443/tcp">: error calling index: index of untyped nil - returning raw string.

** /stderr **
helpers_test.go:250: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20210310205436-6496 -n newest-cni-20210310205436-6496
helpers_test.go:250: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20210310205436-6496 -n newest-cni-20210310205436-6496: (12.2112387s)
helpers_test.go:257: (dbg) Run:  kubectl --context newest-cni-20210310205436-6496 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:257: (dbg) Done: kubectl --context newest-cni-20210310205436-6496 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: (1.335679s)
helpers_test.go:263: non-running pods: coredns-74ff55c5b-vhpfw
helpers_test.go:265: ======> post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: describe non-running pods <======
helpers_test.go:268: (dbg) Run:  kubectl --context newest-cni-20210310205436-6496 describe pod coredns-74ff55c5b-vhpfw
helpers_test.go:268: (dbg) Non-zero exit: kubectl --context newest-cni-20210310205436-6496 describe pod coredns-74ff55c5b-vhpfw: exit status 1 (655.8835ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-74ff55c5b-vhpfw" not found

** /stderr **
helpers_test.go:270: kubectl --context newest-cni-20210310205436-6496 describe pod coredns-74ff55c5b-vhpfw: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (1998.21s)

TestStartStop/group/old-k8s-version/serial/DeployApp (13.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:164: (dbg) Run:  kubectl --context old-k8s-version-20210310204459-6496 create -f testdata\busybox.yaml
start_stop_delete_test.go:164: (dbg) Non-zero exit: kubectl --context old-k8s-version-20210310204459-6496 create -f testdata\busybox.yaml: exit status 1 (226.0751ms)

** stderr ** 
	error: context "old-k8s-version-20210310204459-6496" does not exist

** /stderr **
start_stop_delete_test.go:164: kubectl --context old-k8s-version-20210310204459-6496 create -f testdata\busybox.yaml failed: exit status 1
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:226: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:227: (dbg) Run:  docker inspect old-k8s-version-20210310204459-6496
helpers_test.go:231: (dbg) docker inspect old-k8s-version-20210310204459-6496:

-- stdout --
	[
	    {
	        "Id": "d7a1199287f1fdb7f8b058979a38f733bc3317511901e4ffb9c95e80b30cb419",
	        "Created": "2021-03-10T20:45:16.4180529Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 213971,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-03-10T20:45:18.8406137Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a776c544501ab7f8d55c0f9d8df39bc284df5e744ef1ab4fa59bbd753c98d5f6",
	        "ResolvConfPath": "/var/lib/docker/containers/d7a1199287f1fdb7f8b058979a38f733bc3317511901e4ffb9c95e80b30cb419/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d7a1199287f1fdb7f8b058979a38f733bc3317511901e4ffb9c95e80b30cb419/hostname",
	        "HostsPath": "/var/lib/docker/containers/d7a1199287f1fdb7f8b058979a38f733bc3317511901e4ffb9c95e80b30cb419/hosts",
	        "LogPath": "/var/lib/docker/containers/d7a1199287f1fdb7f8b058979a38f733bc3317511901e4ffb9c95e80b30cb419/d7a1199287f1fdb7f8b058979a38f733bc3317511901e4ffb9c95e80b30cb419-json.log",
	        "Name": "/old-k8s-version-20210310204459-6496",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20210310204459-6496:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0c787353156cac5cce99362e2972764f7959adcd0ea6e0691e479c3a350d5be1-init/diff:/var/lib/docker/overlay2/e5312d15a8a915e93a04215cc746ab0f3ee777399eec34d39c62e1159ded6475/diff:/var/lib/docker/overlay2/7a0a882052451678339ecb9d7440ec6d5af2189761df37cbde268f01c77df460/diff:/var/lib/docker/overlay2/746bde091f748e661ba6c95c88447941058b3ed556ae8093cbbb4e946b8cc8fc/diff:/var/lib/docker/overlay2/fc816087ea38e2594891ac87d2dada91401ed5005ffde006568049c9be7a9cb3/diff:/var/lib/docker/overlay2/c938fe5d14accfe7fb1cfd038f5f0c36e7c4e36df3f270be6928a759dfc0318c/diff:/var/lib/docker/overlay2/79f1d61f3af3e525b3f9a25c7bdaddec8275b7d0abab23c045e084119ae755dc/diff:/var/lib/docker/overlay2/90cf1530750ebebdcb04a136dc1eff0ae9c5f7d0f826d2942e3bc67cc0d42206/diff:/var/lib/docker/overlay2/c4edb0a62bb52174a4a17530aa58139aaca1b2f67bef36b9ac59af93c93e2c94/diff:/var/lib/docker/overlay2/684c18bb05910b09352e8708238b2fa6512de91e323e59bd78eacb12954baeb1/diff:/var/lib/docker/overlay2/2e3b94
4d36fea0e106f5f8885b981d040914c224a656b6480e4ccf00c624527c/diff:/var/lib/docker/overlay2/87489103e2d5dffa2fc32cae802418fe57adcbe87d86e2828e51be8620ef72d6/diff:/var/lib/docker/overlay2/1ceb557df677600f55a42e739f85c2aaa6ce2a1311fb93d5cc655d3e5c25ad62/diff:/var/lib/docker/overlay2/a49e38adf04466e822fa1b2fef7a5de41bb43b706f64d6d440f7e9f95f86634d/diff:/var/lib/docker/overlay2/9dc51530b3628075c33d53a96d8db831d206f65190ca62a4730d525c9d2433bc/diff:/var/lib/docker/overlay2/53f43d443cc953bcbb3b9fe305be45d40de2233dd6a1e94a6e0120c143df01b9/diff:/var/lib/docker/overlay2/5449e801ebacbe613538db4de658f2419f742a0327f7c0f5079ee525f64c2f45/diff:/var/lib/docker/overlay2/ea06a62429770eade2e9df8b04495201586fe6a867b2d3f827db06031f237171/diff:/var/lib/docker/overlay2/d6c5e6e0bb04112b08a45e5a42f364eaa6a1d4d0384b8833c6f268a32b51f9fd/diff:/var/lib/docker/overlay2/6ccb699e5aa7954f17536e35273361a64a06f22e3951b5adad1acd3645302d27/diff:/var/lib/docker/overlay2/004fc7885cd3ddf4f532a6047a393b87947996f738f36ee3e048130b82676264/diff:/var/lib/d
ocker/overlay2/28ff0102771d8fb3c2957c482cc5dde1a6d041144f1be78d385fd4a305b89048/diff:/var/lib/docker/overlay2/7cc8b00a8717390d3cf217bc23ebf0a54386c4dda751f8efe67a113c5f724000/diff:/var/lib/docker/overlay2/4f20f5888dc26fc4b177fad4ffbb2c9e5094bbe36dffc15530b13ee98b5ee0de/diff:/var/lib/docker/overlay2/5070ba0e8a3df0c0cb4c34bb528c4e1aa4cf13d689f2bbbb0a4033d021c3be55/diff:/var/lib/docker/overlay2/85fb29f9d3603bf7f2eb144a096a9fa432e57ecf575b0c26cc8e06a79e791202/diff:/var/lib/docker/overlay2/27786674974d44dc21a18719c8c334da44a78f825923f5e86233f5acb0fb9a6b/diff:/var/lib/docker/overlay2/6161fd89f27732c46f547c0e38ff9f16ab0dded81cef66938df3935f21afad40/diff:/var/lib/docker/overlay2/222f85b8439a41d62b9939a4060303543930a89e81bc2fbb3f5211b3bcc73501/diff:/var/lib/docker/overlay2/0ab769dc143c949b9367c48721351c8571d87199847b23acedce0134a8cf4d0d/diff:/var/lib/docker/overlay2/a3d1b5f3c1ccf55415e08e306d69739f2d6a4346a4b20ff9273ac1417c146a72/diff:/var/lib/docker/overlay2/a6111408a4deb25c6259bdabf49b0b77a4ae38a1c9c76c9dd238ab00e43
77edd/diff:/var/lib/docker/overlay2/beb22abaee6a1cc2ce6935aa31a8ea6aa14ba428d35b5c2423d1d9e4b5b1e88e/diff:/var/lib/docker/overlay2/0c228b1d0cd4cc1d3df10a7397af2a91846bfbad745d623822dc200ff7ddd4f7/diff:/var/lib/docker/overlay2/3c7f265116e8b14ac4c171583da8e68ad42c63e9faa735c10d5b01c2889c03d8/diff:/var/lib/docker/overlay2/85c9b77072d5575bcf4bc334d0e85cccec247e2ff21bc8a181ee7c1411481a92/diff:/var/lib/docker/overlay2/1b4797f63f1bf0acad8cd18715c737239cbce844810baf1378b65224c01957ff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0c787353156cac5cce99362e2972764f7959adcd0ea6e0691e479c3a350d5be1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0c787353156cac5cce99362e2972764f7959adcd0ea6e0691e479c3a350d5be1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0c787353156cac5cce99362e2972764f7959adcd0ea6e0691e479c3a350d5be1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20210310204459-6496",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20210310204459-6496/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20210310204459-6496",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20210310204459-6496",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20210310204459-6496",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "662f69f6007bf2082ebf95584d957493637e9b0c1e109934b80acf5f0ff8e63d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55137"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55134"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55136"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55135"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/662f69f6007b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "46d89bd8b457de38f652bea1f5633541acb2c2620431fe89b1f183bf349b403b",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.3",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:03",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "08a9fabe5ecafdd8ee96dd0a14d1906037e82623ac7365c62552213da5f59cf7",
	                    "EndpointID": "46d89bd8b457de38f652bea1f5633541acb2c2620431fe89b1f183bf349b403b",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.3",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:03",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20210310204459-6496 -n old-k8s-version-20210310204459-6496
helpers_test.go:235: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20210310204459-6496 -n old-k8s-version-20210310204459-6496: exit status 4 (6.8708946s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0310 21:02:11.521729   22496 status.go:396] kubeconfig endpoint: extract IP: "old-k8s-version-20210310204459-6496" does not appear in C:\Users\jenkins/.kube/config

** /stderr **
helpers_test.go:235: status error: exit status 4 (may be ok)
helpers_test.go:237: "old-k8s-version-20210310204459-6496" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:226: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:227: (dbg) Run:  docker inspect old-k8s-version-20210310204459-6496
helpers_test.go:231: (dbg) docker inspect old-k8s-version-20210310204459-6496:

-- stdout --
	[
	    {
	        "Id": "d7a1199287f1fdb7f8b058979a38f733bc3317511901e4ffb9c95e80b30cb419",
	        "Created": "2021-03-10T20:45:16.4180529Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 213971,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-03-10T20:45:18.8406137Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a776c544501ab7f8d55c0f9d8df39bc284df5e744ef1ab4fa59bbd753c98d5f6",
	        "ResolvConfPath": "/var/lib/docker/containers/d7a1199287f1fdb7f8b058979a38f733bc3317511901e4ffb9c95e80b30cb419/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d7a1199287f1fdb7f8b058979a38f733bc3317511901e4ffb9c95e80b30cb419/hostname",
	        "HostsPath": "/var/lib/docker/containers/d7a1199287f1fdb7f8b058979a38f733bc3317511901e4ffb9c95e80b30cb419/hosts",
	        "LogPath": "/var/lib/docker/containers/d7a1199287f1fdb7f8b058979a38f733bc3317511901e4ffb9c95e80b30cb419/d7a1199287f1fdb7f8b058979a38f733bc3317511901e4ffb9c95e80b30cb419-json.log",
	        "Name": "/old-k8s-version-20210310204459-6496",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20210310204459-6496:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0c787353156cac5cce99362e2972764f7959adcd0ea6e0691e479c3a350d5be1-init/diff:/var/lib/docker/overlay2/e5312d15a8a915e93a04215cc746ab0f3ee777399eec34d39c62e1159ded6475/diff:/var/lib/docker/overlay2/7a0a882052451678339ecb9d7440ec6d5af2189761df37cbde268f01c77df460/diff:/var/lib/docker/overlay2/746bde091f748e661ba6c95c88447941058b3ed556ae8093cbbb4e946b8cc8fc/diff:/var/lib/docker/overlay2/fc816087ea38e2594891ac87d2dada91401ed5005ffde006568049c9be7a9cb3/diff:/var/lib/docker/overlay2/c938fe5d14accfe7fb1cfd038f5f0c36e7c4e36df3f270be6928a759dfc0318c/diff:/var/lib/docker/overlay2/79f1d61f3af3e525b3f9a25c7bdaddec8275b7d0abab23c045e084119ae755dc/diff:/var/lib/docker/overlay2/90cf1530750ebebdcb04a136dc1eff0ae9c5f7d0f826d2942e3bc67cc0d42206/diff:/var/lib/docker/overlay2/c4edb0a62bb52174a4a17530aa58139aaca1b2f67bef36b9ac59af93c93e2c94/diff:/var/lib/docker/overlay2/684c18bb05910b09352e8708238b2fa6512de91e323e59bd78eacb12954baeb1/diff:/var/lib/docker/overlay2/2e3b94
4d36fea0e106f5f8885b981d040914c224a656b6480e4ccf00c624527c/diff:/var/lib/docker/overlay2/87489103e2d5dffa2fc32cae802418fe57adcbe87d86e2828e51be8620ef72d6/diff:/var/lib/docker/overlay2/1ceb557df677600f55a42e739f85c2aaa6ce2a1311fb93d5cc655d3e5c25ad62/diff:/var/lib/docker/overlay2/a49e38adf04466e822fa1b2fef7a5de41bb43b706f64d6d440f7e9f95f86634d/diff:/var/lib/docker/overlay2/9dc51530b3628075c33d53a96d8db831d206f65190ca62a4730d525c9d2433bc/diff:/var/lib/docker/overlay2/53f43d443cc953bcbb3b9fe305be45d40de2233dd6a1e94a6e0120c143df01b9/diff:/var/lib/docker/overlay2/5449e801ebacbe613538db4de658f2419f742a0327f7c0f5079ee525f64c2f45/diff:/var/lib/docker/overlay2/ea06a62429770eade2e9df8b04495201586fe6a867b2d3f827db06031f237171/diff:/var/lib/docker/overlay2/d6c5e6e0bb04112b08a45e5a42f364eaa6a1d4d0384b8833c6f268a32b51f9fd/diff:/var/lib/docker/overlay2/6ccb699e5aa7954f17536e35273361a64a06f22e3951b5adad1acd3645302d27/diff:/var/lib/docker/overlay2/004fc7885cd3ddf4f532a6047a393b87947996f738f36ee3e048130b82676264/diff:/var/lib/d
ocker/overlay2/28ff0102771d8fb3c2957c482cc5dde1a6d041144f1be78d385fd4a305b89048/diff:/var/lib/docker/overlay2/7cc8b00a8717390d3cf217bc23ebf0a54386c4dda751f8efe67a113c5f724000/diff:/var/lib/docker/overlay2/4f20f5888dc26fc4b177fad4ffbb2c9e5094bbe36dffc15530b13ee98b5ee0de/diff:/var/lib/docker/overlay2/5070ba0e8a3df0c0cb4c34bb528c4e1aa4cf13d689f2bbbb0a4033d021c3be55/diff:/var/lib/docker/overlay2/85fb29f9d3603bf7f2eb144a096a9fa432e57ecf575b0c26cc8e06a79e791202/diff:/var/lib/docker/overlay2/27786674974d44dc21a18719c8c334da44a78f825923f5e86233f5acb0fb9a6b/diff:/var/lib/docker/overlay2/6161fd89f27732c46f547c0e38ff9f16ab0dded81cef66938df3935f21afad40/diff:/var/lib/docker/overlay2/222f85b8439a41d62b9939a4060303543930a89e81bc2fbb3f5211b3bcc73501/diff:/var/lib/docker/overlay2/0ab769dc143c949b9367c48721351c8571d87199847b23acedce0134a8cf4d0d/diff:/var/lib/docker/overlay2/a3d1b5f3c1ccf55415e08e306d69739f2d6a4346a4b20ff9273ac1417c146a72/diff:/var/lib/docker/overlay2/a6111408a4deb25c6259bdabf49b0b77a4ae38a1c9c76c9dd238ab00e43
77edd/diff:/var/lib/docker/overlay2/beb22abaee6a1cc2ce6935aa31a8ea6aa14ba428d35b5c2423d1d9e4b5b1e88e/diff:/var/lib/docker/overlay2/0c228b1d0cd4cc1d3df10a7397af2a91846bfbad745d623822dc200ff7ddd4f7/diff:/var/lib/docker/overlay2/3c7f265116e8b14ac4c171583da8e68ad42c63e9faa735c10d5b01c2889c03d8/diff:/var/lib/docker/overlay2/85c9b77072d5575bcf4bc334d0e85cccec247e2ff21bc8a181ee7c1411481a92/diff:/var/lib/docker/overlay2/1b4797f63f1bf0acad8cd18715c737239cbce844810baf1378b65224c01957ff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0c787353156cac5cce99362e2972764f7959adcd0ea6e0691e479c3a350d5be1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0c787353156cac5cce99362e2972764f7959adcd0ea6e0691e479c3a350d5be1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0c787353156cac5cce99362e2972764f7959adcd0ea6e0691e479c3a350d5be1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20210310204459-6496",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20210310204459-6496/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20210310204459-6496",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20210310204459-6496",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20210310204459-6496",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "662f69f6007bf2082ebf95584d957493637e9b0c1e109934b80acf5f0ff8e63d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55137"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55134"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55136"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55135"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/662f69f6007b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "46d89bd8b457de38f652bea1f5633541acb2c2620431fe89b1f183bf349b403b",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.3",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:03",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "08a9fabe5ecafdd8ee96dd0a14d1906037e82623ac7365c62552213da5f59cf7",
	                    "EndpointID": "46d89bd8b457de38f652bea1f5633541acb2c2620431fe89b1f183bf349b403b",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.3",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:03",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
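The published ports in the inspect dump above can be read programmatically. A minimal sketch in Python, with a sample of the `NetworkSettings.Ports` map inlined (in practice it would come from `docker container inspect <name>`); it performs the same lookup as the Go template minikube runs later in this log:

```python
import json

# Excerpt of the "NetworkSettings.Ports" map from the docker inspect
# output above (ports published on the loopback address).
ports_json = """
{
    "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "55138"}],
    "2376/tcp": [{"HostIp": "127.0.0.1", "HostPort": "55137"}],
    "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "55135"}]
}
"""

def host_port(ports: dict, container_port: str) -> str:
    # Same lookup as the Go template in this log:
    # {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}
    return ports[container_port][0]["HostPort"]

ports = json.loads(ports_json)
print(host_port(ports, "22/tcp"))  # → 55138, the SSH port minikube dials
```

The nested indexing mirrors the inspect schema: each container port maps to a list of host bindings, and minikube takes the first one.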
helpers_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20210310204459-6496 -n old-k8s-version-20210310204459-6496
helpers_test.go:235: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20210310204459-6496 -n old-k8s-version-20210310204459-6496: exit status 4 (5.2143911s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0310 21:02:17.638444   12916 status.go:396] kubeconfig endpoint: extract IP: "old-k8s-version-20210310204459-6496" does not appear in C:\Users\jenkins/.kube/config

                                                
                                                
** /stderr **
helpers_test.go:235: status error: exit status 4 (may be ok)
helpers_test.go:237: "old-k8s-version-20210310204459-6496" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (13.61s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (970.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-20210310204459-6496 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.14.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:196: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p old-k8s-version-20210310204459-6496 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.14.0: exit status 1 (12m17.020385s)

                                                
                                                
-- stdout --
	* [old-k8s-version-20210310204459-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - MINIKUBE_LOCATION=10722
	* Kubernetes 1.20.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.20.2
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-20210310204459-6496 in cluster old-k8s-version-20210310204459-6496
	* Restarting existing docker container for "old-k8s-version-20210310204459-6496" ...
	* Preparing Kubernetes v1.14.0 on Docker 20.10.3 ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0310 21:02:43.264809   11452 out.go:239] Setting OutFile to fd 1744 ...
	I0310 21:02:43.266895   11452 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 21:02:43.266895   11452 out.go:252] Setting ErrFile to fd 2864...
	I0310 21:02:43.266895   11452 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 21:02:43.283203   11452 out.go:246] Setting JSON to false
	I0310 21:02:43.286228   11452 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":35629,"bootTime":1615374534,"procs":120,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0310 21:02:43.286542   11452 start.go:116] gopshost.Virtualization returned error: not implemented yet
	I0310 21:02:43.292668   11452 out.go:129] * [old-k8s-version-20210310204459-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0310 21:02:43.316100   11452 out.go:129]   - MINIKUBE_LOCATION=10722
	I0310 21:02:43.322638   11452 out.go:129] * Kubernetes 1.20.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.20.2
	I0310 21:02:43.322811   11452 driver.go:323] Setting default libvirt URI to qemu:///system
	I0310 21:02:43.887335   11452 docker.go:119] docker version: linux-20.10.2
	I0310 21:02:43.896939   11452 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 21:02:44.995061   11452 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0981259s)
	I0310 21:02:44.996493   11452 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:9 ContainersRunning:8 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:110 OomKillDisable:true NGoroutines:87 SystemTime:2021-03-10 21:02:44.4693397 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[
] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 21:02:45.000878   11452 out.go:129] * Using the docker driver based on existing profile
	I0310 21:02:45.001115   11452 start.go:276] selected driver: docker
	I0310 21:02:45.001115   11452 start.go:718] validating driver "docker" against &{Name:old-k8s-version-20210310204459-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210310204459-6496 Namespace:default APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.3 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 21:02:45.001649   11452 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0310 21:02:46.128750   11452 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 21:02:47.166236   11452 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0374899s)
	I0310 21:02:47.167142   11452 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:9 ContainersRunning:8 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:110 OomKillDisable:true NGoroutines:87 SystemTime:2021-03-10 21:02:46.7535197 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[
] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 21:02:47.167814   11452 start_flags.go:717] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0310 21:02:47.167814   11452 start_flags.go:398] config:
	{Name:old-k8s-version-20210310204459-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210310204459-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docke
r CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.3 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 21:02:47.171671   11452 out.go:129] * Starting control plane node old-k8s-version-20210310204459-6496 in cluster old-k8s-version-20210310204459-6496
	I0310 21:02:47.897608   11452 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	I0310 21:02:47.898273   11452 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	I0310 21:02:47.898273   11452 preload.go:97] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I0310 21:02:47.898886   11452 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4
	I0310 21:02:47.898886   11452 cache.go:54] Caching tarball of preloaded images
	I0310 21:02:47.898886   11452 preload.go:131] Found C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0310 21:02:47.899226   11452 cache.go:57] Finished verifying existence of preloaded tar for  v1.14.0 on docker
	I0310 21:02:47.899577   11452 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\config.json ...
	I0310 21:02:47.920625   11452 cache.go:185] Successfully downloaded all kic artifacts
	I0310 21:02:47.922963   11452 start.go:313] acquiring machines lock for old-k8s-version-20210310204459-6496: {Name:mk75b6b2b8c7e9551ee9b4fdfdcee0e639bfef0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:02:47.923887   11452 start.go:317] acquired machines lock for "old-k8s-version-20210310204459-6496" in 646.1µs
	I0310 21:02:47.924378   11452 start.go:93] Skipping create...Using existing machine configuration
	I0310 21:02:47.924963   11452 fix.go:55] fixHost starting: 
	I0310 21:02:48.531465   11452 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210310204459-6496 --format={{.State.Status}}
	I0310 21:02:49.887205   11452 cli_runner.go:168] Completed: docker container inspect old-k8s-version-20210310204459-6496 --format={{.State.Status}}: (1.355744s)
	I0310 21:02:49.887677   11452 fix.go:108] recreateIfNeeded on old-k8s-version-20210310204459-6496: state=Stopped err=<nil>
	W0310 21:02:49.888201   11452 fix.go:134] unexpected machine state, will restart: <nil>
	I0310 21:02:49.894218   11452 out.go:129] * Restarting existing docker container for "old-k8s-version-20210310204459-6496" ...
	I0310 21:02:49.911713   11452 cli_runner.go:115] Run: docker start old-k8s-version-20210310204459-6496
	I0310 21:02:53.827575   11452 cli_runner.go:168] Completed: docker start old-k8s-version-20210310204459-6496: (3.9158732s)
	I0310 21:02:53.829882   11452 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210310204459-6496 --format={{.State.Status}}
	I0310 21:02:54.420999   11452 kic.go:410] container "old-k8s-version-20210310204459-6496" state is running.
	I0310 21:02:54.435196   11452 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210310204459-6496
	I0310 21:02:55.129411   11452 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\config.json ...
	I0310 21:02:55.134784   11452 machine.go:88] provisioning docker machine ...
	I0310 21:02:55.135219   11452 ubuntu.go:169] provisioning hostname "old-k8s-version-20210310204459-6496"
	I0310 21:02:55.142932   11452 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	I0310 21:02:55.746356   11452 main.go:121] libmachine: Using SSH client type: native
	I0310 21:02:55.747197   11452 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55166 <nil> <nil>}
	I0310 21:02:55.747197   11452 main.go:121] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20210310204459-6496 && echo "old-k8s-version-20210310204459-6496" | sudo tee /etc/hostname
	I0310 21:02:55.756906   11452 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0310 21:02:58.769817   11452 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0310 21:03:03.521668   11452 main.go:121] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20210310204459-6496
	
	I0310 21:03:03.535938   11452 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	I0310 21:03:04.161545   11452 main.go:121] libmachine: Using SSH client type: native
	I0310 21:03:04.162394   11452 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55166 <nil> <nil>}
	I0310 21:03:04.162394   11452 main.go:121] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20210310204459-6496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20210310204459-6496/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20210310204459-6496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0310 21:03:04.873151   11452 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	I0310 21:03:04.873312   11452 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	I0310 21:03:04.873617   11452 ubuntu.go:177] setting up certificates
	I0310 21:03:04.873617   11452 provision.go:83] configureAuth start
	I0310 21:03:04.883045   11452 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210310204459-6496
	I0310 21:03:05.472121   11452 provision.go:137] copyHostCerts
	I0310 21:03:05.473737   11452 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	I0310 21:03:05.473886   11452 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	I0310 21:03:05.474772   11452 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	I0310 21:03:05.481829   11452 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	I0310 21:03:05.482064   11452 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	I0310 21:03:05.485743   11452 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	I0310 21:03:05.488203   11452 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	I0310 21:03:05.488203   11452 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	I0310 21:03:05.489574   11452 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	I0310 21:03:05.503792   11452 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.old-k8s-version-20210310204459-6496 san=[172.17.0.3 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20210310204459-6496]
	I0310 21:03:05.705382   11452 provision.go:165] copyRemoteCerts
	I0310 21:03:05.718618   11452 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0310 21:03:05.725400   11452 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	I0310 21:03:06.324648   11452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55166 SSHKeyPath:C:\Users\jenkins\.minikube\machines\old-k8s-version-20210310204459-6496\id_rsa Username:docker}
	I0310 21:03:06.687962   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1277 bytes)
	I0310 21:03:07.056255   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0310 21:03:07.579746   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0310 21:03:08.017529   11452 provision.go:86] duration metric: configureAuth took 3.1439202s
	I0310 21:03:08.017709   11452 ubuntu.go:193] setting minikube options for container-runtime
	I0310 21:03:08.029372   11452 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	I0310 21:03:08.643804   11452 main.go:121] libmachine: Using SSH client type: native
	I0310 21:03:08.645188   11452 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55166 <nil> <nil>}
	I0310 21:03:08.645188   11452 main.go:121] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0310 21:03:09.313583   11452 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0310 21:03:09.313747   11452 ubuntu.go:71] root file system type: overlay
	I0310 21:03:09.314207   11452 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	I0310 21:03:09.323574   11452 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	I0310 21:03:11.914040   11452 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496: (2.5903135s)
	I0310 21:03:11.922716   11452 main.go:121] libmachine: Using SSH client type: native
	I0310 21:03:11.923414   11452 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55166 <nil> <nil>}
	I0310 21:03:11.923638   11452 main.go:121] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0310 21:03:13.197441   11452 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0310 21:03:13.207634   11452 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	I0310 21:03:14.318866   11452 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496: (1.1112346s)
	I0310 21:03:14.327808   11452 main.go:121] libmachine: Using SSH client type: native
	I0310 21:03:14.328713   11452 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55166 <nil> <nil>}
	I0310 21:03:14.328713   11452 main.go:121] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0310 21:03:16.832291   11452 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	I0310 21:03:16.832858   11452 machine.go:91] provisioned docker machine in 21.6981348s
	I0310 21:03:16.832858   11452 start.go:267] post-start starting for "old-k8s-version-20210310204459-6496" (driver="docker")
	I0310 21:03:16.832858   11452 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0310 21:03:16.842096   11452 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0310 21:03:16.848085   11452 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	I0310 21:03:17.451043   11452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55166 SSHKeyPath:C:\Users\jenkins\.minikube\machines\old-k8s-version-20210310204459-6496\id_rsa Username:docker}
	I0310 21:03:18.175442   11452 ssh_runner.go:189] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.3333494s)
	I0310 21:03:18.188387   11452 ssh_runner.go:149] Run: cat /etc/os-release
	I0310 21:03:18.220952   11452 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0310 21:03:18.221244   11452 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0310 21:03:18.221244   11452 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0310 21:03:18.221392   11452 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0310 21:03:18.221516   11452 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	I0310 21:03:18.222113   11452 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	I0310 21:03:18.225602   11452 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	I0310 21:03:18.227564   11452 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	I0310 21:03:18.238866   11452 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	I0310 21:03:18.300713   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	I0310 21:03:18.582529   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	I0310 21:03:18.842692   11452 start.go:270] post-start completed in 2.0098393s
	I0310 21:03:18.854151   11452 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0310 21:03:18.861209   11452 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	I0310 21:03:19.470653   11452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55166 SSHKeyPath:C:\Users\jenkins\.minikube\machines\old-k8s-version-20210310204459-6496\id_rsa Username:docker}
	I0310 21:03:19.764828   11452 fix.go:57] fixHost completed within 31.8399545s
	I0310 21:03:19.764828   11452 start.go:80] releasing machines lock for "old-k8s-version-20210310204459-6496", held for 31.8408077s
	I0310 21:03:19.778450   11452 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20210310204459-6496
	I0310 21:03:20.368859   11452 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0310 21:03:20.376935   11452 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	I0310 21:03:20.377913   11452 ssh_runner.go:149] Run: systemctl --version
	I0310 21:03:20.385676   11452 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	I0310 21:03:20.982191   11452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55166 SSHKeyPath:C:\Users\jenkins\.minikube\machines\old-k8s-version-20210310204459-6496\id_rsa Username:docker}
	I0310 21:03:21.040538   11452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55166 SSHKeyPath:C:\Users\jenkins\.minikube\machines\old-k8s-version-20210310204459-6496\id_rsa Username:docker}
	I0310 21:03:21.286169   11452 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0310 21:03:21.555076   11452 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 21:03:22.413608   11452 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (2.0444299s)
	I0310 21:03:22.413890   11452 cruntime.go:206] skipping containerd shutdown because we are bound to it
	I0310 21:03:22.423475   11452 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0310 21:03:22.496262   11452 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0310 21:03:22.890041   11452 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 21:03:23.045726   11452 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0310 21:03:27.391668   11452 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (4.3454307s)
	I0310 21:03:27.401794   11452 ssh_runner.go:149] Run: sudo systemctl start docker
	I0310 21:03:27.493596   11452 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0310 21:03:28.113406   11452 out.go:150] * Preparing Kubernetes v1.14.0 on Docker 20.10.3 ...
	I0310 21:03:28.123032   11452 cli_runner.go:115] Run: docker exec -t old-k8s-version-20210310204459-6496 dig +short host.docker.internal
	I0310 21:03:29.346771   11452 cli_runner.go:168] Completed: docker exec -t old-k8s-version-20210310204459-6496 dig +short host.docker.internal: (1.2234175s)
	I0310 21:03:29.347666   11452 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	I0310 21:03:29.366844   11452 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0310 21:03:30.376452   11452 ssh_runner.go:189] Completed: grep 192.168.65.2	host.minikube.internal$ /etc/hosts: (1.0096105s)
	I0310 21:03:30.377911   11452 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0310 21:03:30.454774   11452 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	I0310 21:03:31.085216   11452 preload.go:97] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I0310 21:03:31.086239   11452 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.14.0-docker-overlay2-amd64.tar.lz4
	I0310 21:03:31.098580   11452 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 21:03:33.201505   11452 ssh_runner.go:189] Completed: docker images --format {{.Repository}}:{{.Tag}}: (2.1029307s)
	I0310 21:03:33.201785   11452 docker.go:423] Got preloaded images: -- stdout --
	kubernetesui/dashboard:v2.1.0
	gcr.io/k8s-minikube/storage-provisioner:v4
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/kube-proxy:v1.14.0
	k8s.gcr.io/kube-apiserver:v1.14.0
	k8s.gcr.io/kube-scheduler:v1.14.0
	k8s.gcr.io/kube-controller-manager:v1.14.0
	k8s.gcr.io/coredns:1.3.1
	k8s.gcr.io/etcd:3.3.10
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0310 21:03:33.201785   11452 docker.go:360] Images already preloaded, skipping extraction
	I0310 21:03:33.221003   11452 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 21:03:34.295748   11452 ssh_runner.go:189] Completed: docker images --format {{.Repository}}:{{.Tag}}: (1.0747484s)
	I0310 21:03:34.296019   11452 docker.go:423] Got preloaded images: -- stdout --
	kubernetesui/dashboard:v2.1.0
	gcr.io/k8s-minikube/storage-provisioner:v4
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/kube-proxy:v1.14.0
	k8s.gcr.io/kube-scheduler:v1.14.0
	k8s.gcr.io/kube-apiserver:v1.14.0
	k8s.gcr.io/kube-controller-manager:v1.14.0
	k8s.gcr.io/coredns:1.3.1
	k8s.gcr.io/etcd:3.3.10
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0310 21:03:34.296263   11452 cache_images.go:73] Images are preloaded, skipping loading
	I0310 21:03:34.315822   11452 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0310 21:03:36.658112   11452 ssh_runner.go:189] Completed: docker info --format {{.CgroupDriver}}: (2.3417237s)
	I0310 21:03:36.658112   11452 cni.go:74] Creating CNI manager for ""
	I0310 21:03:36.658112   11452 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	I0310 21:03:36.658112   11452 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0310 21:03:36.658112   11452 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.3 APIServerPort:8443 KubernetesVersion:v1.14.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20210310204459-6496 NodeName:old-k8s-version-20210310204459-6496 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0310 21:03:36.658619   11452 kubeadm.go:154] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.0.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20210310204459-6496"
	  kubeletExtraArgs:
	    node-ip: 172.17.0.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.0.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20210310204459-6496
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://172.17.0.3:2381
	kubernetesVersion: v1.14.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	
	I0310 21:03:36.659160   11452 kubeadm.go:919] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.14.0/kubelet --allow-privileged=true --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --client-ca-file=/var/lib/minikube/certs/ca.crt --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20210310204459-6496 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210310204459-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0310 21:03:36.669047   11452 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.14.0
	I0310 21:03:36.841598   11452 binaries.go:44] Found k8s binaries, skipping transfer
	I0310 21:03:36.860817   11452 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0310 21:03:37.132124   11452 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (431 bytes)
	I0310 21:03:37.362262   11452 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0310 21:03:37.537039   11452 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1928 bytes)
	I0310 21:03:38.149301   11452 ssh_runner.go:149] Run: grep 172.17.0.3	control-plane.minikube.internal$ /etc/hosts
	I0310 21:03:38.567644   11452 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "172.17.0.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0310 21:03:39.412886   11452 certs.go:52] Setting up C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496 for IP: 172.17.0.3
	I0310 21:03:39.413644   11452 certs.go:171] skipping minikubeCA CA generation: C:\Users\jenkins\.minikube\ca.key
	I0310 21:03:39.414080   11452 certs.go:171] skipping proxyClientCA CA generation: C:\Users\jenkins\.minikube\proxy-client-ca.key
	I0310 21:03:39.414760   11452 certs.go:275] skipping minikube-user signed cert generation: C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\client.key
	I0310 21:03:39.415130   11452 certs.go:275] skipping minikube signed cert generation: C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.key.0f3e66d0
	I0310 21:03:39.415530   11452 certs.go:275] skipping aggregator signed cert generation: C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\proxy-client.key
	I0310 21:03:39.416877   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992.pem (1338 bytes)
	W0310 21:03:39.417558   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.417895   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140.pem (1338 bytes)
	W0310 21:03:39.418201   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.418201   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156.pem (1338 bytes)
	W0310 21:03:39.418877   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.418877   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056.pem (1338 bytes)
	W0310 21:03:39.419212   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.419525   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476.pem (1338 bytes)
	W0310 21:03:39.419525   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.419832   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728.pem (1338 bytes)
	W0310 21:03:39.420208   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.420208   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984.pem (1338 bytes)
	W0310 21:03:39.420622   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.420622   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232.pem (1338 bytes)
	W0310 21:03:39.421021   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.421334   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512.pem (1338 bytes)
	W0310 21:03:39.421636   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.421636   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056.pem (1338 bytes)
	W0310 21:03:39.421948   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.421948   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516.pem (1338 bytes)
	W0310 21:03:39.422343   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.422343   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352.pem (1338 bytes)
	W0310 21:03:39.422775   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.422775   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920.pem (1338 bytes)
	W0310 21:03:39.423773   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.424160   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052.pem (1338 bytes)
	W0310 21:03:39.424474   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.424474   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452.pem (1338 bytes)
	W0310 21:03:39.424827   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.425183   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588.pem (1338 bytes)
	W0310 21:03:39.425435   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.425663   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944.pem (1338 bytes)
	W0310 21:03:39.426084   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.426350   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040.pem (1338 bytes)
	W0310 21:03:39.426532   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.426801   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172.pem (1338 bytes)
	W0310 21:03:39.427221   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.429261   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372.pem (1338 bytes)
	W0310 21:03:39.430123   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.430123   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396.pem (1338 bytes)
	W0310 21:03:39.430123   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.430947   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700.pem (1338 bytes)
	W0310 21:03:39.430947   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.430947   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736.pem (1338 bytes)
	W0310 21:03:39.431533   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.431772   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368.pem (1338 bytes)
	W0310 21:03:39.431772   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.431772   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492.pem (1338 bytes)
	W0310 21:03:39.432384   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.432655   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496.pem (1338 bytes)
	W0310 21:03:39.432655   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.432655   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552.pem (1338 bytes)
	W0310 21:03:39.432655   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.433551   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692.pem (1338 bytes)
	W0310 21:03:39.433551   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.433551   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856.pem (1338 bytes)
	W0310 21:03:39.433551   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.433551   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024.pem (1338 bytes)
	W0310 21:03:39.434547   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.434547   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160.pem (1338 bytes)
	W0310 21:03:39.434547   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.434547   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432.pem (1338 bytes)
	W0310 21:03:39.434547   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.435552   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440.pem (1338 bytes)
	W0310 21:03:39.435552   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.435552   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452.pem (1338 bytes)
	W0310 21:03:39.435552   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.435552   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800.pem (1338 bytes)
	W0310 21:03:39.436549   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.436549   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464.pem (1338 bytes)
	W0310 21:03:39.436549   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.436549   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748.pem (1338 bytes)
	W0310 21:03:39.436549   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.436549   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088.pem (1338 bytes)
	W0310 21:03:39.437548   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.437548   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520.pem (1338 bytes)
	W0310 21:03:39.437548   11452 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520_empty.pem, impossibly tiny 0 bytes
	I0310 21:03:39.437548   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca-key.pem (1679 bytes)
	I0310 21:03:39.438548   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca.pem (1078 bytes)
	I0310 21:03:39.438548   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\cert.pem (1123 bytes)
	I0310 21:03:39.438548   11452 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\key.pem (1679 bytes)
	I0310 21:03:39.445549   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0310 21:03:40.111625   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0310 21:03:40.529786   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0310 21:03:41.398492   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\old-k8s-version-20210310204459-6496\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0310 21:03:41.796926   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0310 21:03:42.581131   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0310 21:03:42.925478   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0310 21:03:43.186530   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0310 21:03:43.485642   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\352.pem --> /usr/share/ca-certificates/352.pem (1338 bytes)
	I0310 21:03:43.868428   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5700.pem --> /usr/share/ca-certificates/5700.pem (1338 bytes)
	I0310 21:03:44.888199   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6492.pem --> /usr/share/ca-certificates/6492.pem (1338 bytes)
	I0310 21:03:45.355688   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	I0310 21:03:45.921519   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7160.pem --> /usr/share/ca-certificates/7160.pem (1338 bytes)
	I0310 21:03:46.389762   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8464.pem --> /usr/share/ca-certificates/8464.pem (1338 bytes)
	I0310 21:03:46.798915   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5372.pem --> /usr/share/ca-certificates/5372.pem (1338 bytes)
	I0310 21:03:47.226098   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1728.pem --> /usr/share/ca-certificates/1728.pem (1338 bytes)
	I0310 21:03:47.676694   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5736.pem --> /usr/share/ca-certificates/5736.pem (1338 bytes)
	I0310 21:03:48.220958   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5172.pem --> /usr/share/ca-certificates/5172.pem (1338 bytes)
	I0310 21:03:51.464547   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6692.pem --> /usr/share/ca-certificates/6692.pem (1338 bytes)
	I0310 21:05:08.189806   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6856.pem --> /usr/share/ca-certificates/6856.pem (1338 bytes)
	I0310 21:05:19.067553   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0310 21:05:19.895977   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4944.pem --> /usr/share/ca-certificates/4944.pem (1338 bytes)
	I0310 21:05:23.641884   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1140.pem --> /usr/share/ca-certificates/1140.pem (1338 bytes)
	I0310 21:05:24.046093   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6552.pem --> /usr/share/ca-certificates/6552.pem (1338 bytes)
	I0310 21:05:24.471958   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\10992.pem --> /usr/share/ca-certificates/10992.pem (1338 bytes)
	I0310 21:05:24.749187   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4452.pem --> /usr/share/ca-certificates/4452.pem (1338 bytes)
	I0310 21:05:27.260844   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8748.pem --> /usr/share/ca-certificates/8748.pem (1338 bytes)
	I0310 21:05:27.543864   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3516.pem --> /usr/share/ca-certificates/3516.pem (1338 bytes)
	I0310 21:05:27.716771   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6368.pem --> /usr/share/ca-certificates/6368.pem (1338 bytes)
	I0310 21:05:27.918623   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1984.pem --> /usr/share/ca-certificates/1984.pem (1338 bytes)
	I0310 21:05:28.181555   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1476.pem --> /usr/share/ca-certificates/1476.pem (1338 bytes)
	I0310 21:05:28.461090   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1156.pem --> /usr/share/ca-certificates/1156.pem (1338 bytes)
	I0310 21:05:28.805540   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5040.pem --> /usr/share/ca-certificates/5040.pem (1338 bytes)
	I0310 21:05:29.059352   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4052.pem --> /usr/share/ca-certificates/4052.pem (1338 bytes)
	I0310 21:05:29.620244   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3056.pem --> /usr/share/ca-certificates/3056.pem (1338 bytes)
	I0310 21:05:30.064025   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5396.pem --> /usr/share/ca-certificates/5396.pem (1338 bytes)
	I0310 21:05:31.132617   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\232.pem --> /usr/share/ca-certificates/232.pem (1338 bytes)
	I0310 21:05:31.392774   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\2512.pem --> /usr/share/ca-certificates/2512.pem (1338 bytes)
	I0310 21:05:31.616474   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9520.pem --> /usr/share/ca-certificates/9520.pem (1338 bytes)
	I0310 21:05:31.875257   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7024.pem --> /usr/share/ca-certificates/7024.pem (1338 bytes)
	I0310 21:05:33.971082   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3920.pem --> /usr/share/ca-certificates/3920.pem (1338 bytes)
	I0310 21:05:35.324642   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7432.pem --> /usr/share/ca-certificates/7432.pem (1338 bytes)
	I0310 21:05:35.822875   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4588.pem --> /usr/share/ca-certificates/4588.pem (1338 bytes)
	I0310 21:05:36.731709   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7440.pem --> /usr/share/ca-certificates/7440.pem (1338 bytes)
	I0310 21:05:37.635481   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9088.pem --> /usr/share/ca-certificates/9088.pem (1338 bytes)
	I0310 21:05:37.892646   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6496.pem --> /usr/share/ca-certificates/6496.pem (1338 bytes)
	I0310 21:05:41.960003   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\12056.pem --> /usr/share/ca-certificates/12056.pem (1338 bytes)
	I0310 21:05:47.123465   11452 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\800.pem --> /usr/share/ca-certificates/800.pem (1338 bytes)
	I0310 21:05:51.998524   11452 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0310 21:05:54.251257   11452 ssh_runner.go:149] Run: openssl version
	I0310 21:05:55.261755   11452 ssh_runner.go:189] Completed: openssl version: (1.0101572s)
	I0310 21:05:55.279619   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5372.pem && ln -fs /usr/share/ca-certificates/5372.pem /etc/ssl/certs/5372.pem"
	I0310 21:05:55.768074   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5372.pem
	I0310 21:05:55.832652   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 23 00:40 /usr/share/ca-certificates/5372.pem
	I0310 21:05:55.844761   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5372.pem
	I0310 21:05:56.270196   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5372.pem /etc/ssl/certs/51391683.0"
	I0310 21:05:56.351559   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1728.pem && ln -fs /usr/share/ca-certificates/1728.pem /etc/ssl/certs/1728.pem"
	I0310 21:05:57.769262   11452 ssh_runner.go:189] Completed: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1728.pem && ln -fs /usr/share/ca-certificates/1728.pem /etc/ssl/certs/1728.pem": (1.4177064s)
	I0310 21:05:57.782031   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1728.pem
	I0310 21:05:57.820411   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 20:57 /usr/share/ca-certificates/1728.pem
	I0310 21:05:57.833495   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1728.pem
	I0310 21:05:57.920338   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0"
	I0310 21:05:58.107833   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5736.pem && ln -fs /usr/share/ca-certificates/5736.pem /etc/ssl/certs/5736.pem"
	I0310 21:05:58.800544   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5736.pem
	I0310 21:05:58.837025   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 25 23:18 /usr/share/ca-certificates/5736.pem
	I0310 21:05:58.852421   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5736.pem
	I0310 21:05:58.949294   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5736.pem /etc/ssl/certs/51391683.0"
	I0310 21:05:59.263094   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5172.pem && ln -fs /usr/share/ca-certificates/5172.pem /etc/ssl/certs/5172.pem"
	I0310 21:05:59.856655   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5172.pem
	I0310 21:05:59.886650   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 26 21:25 /usr/share/ca-certificates/5172.pem
	I0310 21:05:59.899421   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5172.pem
	I0310 21:06:01.013167   11452 ssh_runner.go:189] Completed: openssl x509 -hash -noout -in /usr/share/ca-certificates/5172.pem: (1.1134407s)
	I0310 21:06:01.046508   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5172.pem /etc/ssl/certs/51391683.0"
	I0310 21:06:01.385317   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7452.pem && ln -fs /usr/share/ca-certificates/7452.pem /etc/ssl/certs/7452.pem"
	I0310 21:06:01.516384   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7452.pem
	I0310 21:06:02.245439   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 20 00:41 /usr/share/ca-certificates/7452.pem
	I0310 21:06:02.264369   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7452.pem
	I0310 21:06:03.239498   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7452.pem /etc/ssl/certs/51391683.0"
	I0310 21:06:03.782465   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7160.pem && ln -fs /usr/share/ca-certificates/7160.pem /etc/ssl/certs/7160.pem"
	I0310 21:06:03.916906   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7160.pem
	I0310 21:06:04.006906   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 12 04:51 /usr/share/ca-certificates/7160.pem
	I0310 21:06:04.025756   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7160.pem
	I0310 21:06:04.131190   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7160.pem /etc/ssl/certs/51391683.0"
	I0310 21:06:04.235073   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8464.pem && ln -fs /usr/share/ca-certificates/8464.pem /etc/ssl/certs/8464.pem"
	I0310 21:06:04.643519   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8464.pem
	I0310 21:06:04.696090   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:32 /usr/share/ca-certificates/8464.pem
	I0310 21:06:04.706163   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8464.pem
	I0310 21:06:04.755873   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8464.pem /etc/ssl/certs/51391683.0"
	I0310 21:06:04.925362   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4944.pem && ln -fs /usr/share/ca-certificates/4944.pem /etc/ssl/certs/4944.pem"
	I0310 21:06:05.054882   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4944.pem
	I0310 21:06:05.098648   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  9 23:40 /usr/share/ca-certificates/4944.pem
	I0310 21:06:05.109627   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4944.pem
	I0310 21:06:05.188034   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4944.pem /etc/ssl/certs/51391683.0"
	I0310 21:06:05.620312   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1140.pem && ln -fs /usr/share/ca-certificates/1140.pem /etc/ssl/certs/1140.pem"
	I0310 21:06:05.692718   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1140.pem
	I0310 21:06:05.723254   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 02:25 /usr/share/ca-certificates/1140.pem
	I0310 21:06:05.723884   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1140.pem
	I0310 21:06:05.810587   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1140.pem /etc/ssl/certs/51391683.0"
	I0310 21:06:05.928690   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6552.pem && ln -fs /usr/share/ca-certificates/6552.pem /etc/ssl/certs/6552.pem"
	I0310 21:06:06.029990   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6552.pem
	I0310 21:06:06.060589   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 19 22:08 /usr/share/ca-certificates/6552.pem
	I0310 21:06:06.070089   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6552.pem
	I0310 21:06:06.135034   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6552.pem /etc/ssl/certs/51391683.0"
	I0310 21:06:06.207575   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10992.pem && ln -fs /usr/share/ca-certificates/10992.pem /etc/ssl/certs/10992.pem"
	I0310 21:06:06.317406   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/10992.pem
	I0310 21:06:06.345506   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 21:44 /usr/share/ca-certificates/10992.pem
	I0310 21:06:06.365442   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10992.pem
	I0310 21:06:06.834547   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10992.pem /etc/ssl/certs/51391683.0"
	I0310 21:06:06.894563   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4452.pem && ln -fs /usr/share/ca-certificates/4452.pem /etc/ssl/certs/4452.pem"
	I0310 21:06:06.963774   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4452.pem
	I0310 21:06:07.002831   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 22 23:48 /usr/share/ca-certificates/4452.pem
	I0310 21:06:07.016978   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4452.pem
	I0310 21:06:07.077535   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4452.pem /etc/ssl/certs/51391683.0"
	I0310 21:06:07.135681   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6692.pem && ln -fs /usr/share/ca-certificates/6692.pem /etc/ssl/certs/6692.pem"
	I0310 21:06:07.493485   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6692.pem
	I0310 21:06:07.518413   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 20:42 /usr/share/ca-certificates/6692.pem
	I0310 21:06:07.532648   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6692.pem
	I0310 21:06:07.584540   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6692.pem /etc/ssl/certs/51391683.0"
	I0310 21:06:07.652683   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6856.pem && ln -fs /usr/share/ca-certificates/6856.pem /etc/ssl/certs/6856.pem"
	I0310 21:06:08.124344   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6856.pem
	I0310 21:06:08.174203   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 00:21 /usr/share/ca-certificates/6856.pem
	I0310 21:06:08.175356   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6856.pem
	I0310 21:06:08.227354   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6856.pem /etc/ssl/certs/51391683.0"
	I0310 21:06:08.290188   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0310 21:06:08.391500   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0310 21:06:08.425070   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Jan  5 23:33 /usr/share/ca-certificates/minikubeCA.pem
	I0310 21:06:08.431008   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0310 21:06:08.492564   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0310 21:06:08.567692   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8748.pem && ln -fs /usr/share/ca-certificates/8748.pem /etc/ssl/certs/8748.pem"
	I0310 21:06:08.657651   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8748.pem
	I0310 21:06:08.690537   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 19:09 /usr/share/ca-certificates/8748.pem
	I0310 21:06:08.702209   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8748.pem
	I0310 21:06:08.767620   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8748.pem /etc/ssl/certs/51391683.0"
	I0310 21:06:08.863225   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1476.pem && ln -fs /usr/share/ca-certificates/1476.pem /etc/ssl/certs/1476.pem"
	I0310 21:06:08.970369   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1476.pem
	I0310 21:06:09.018892   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:50 /usr/share/ca-certificates/1476.pem
	I0310 21:06:09.036605   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1476.pem
	I0310 21:06:09.141576   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1476.pem /etc/ssl/certs/51391683.0"
	I0310 21:06:09.362071   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1156.pem && ln -fs /usr/share/ca-certificates/1156.pem /etc/ssl/certs/1156.pem"
	I0310 21:06:09.481488   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1156.pem
	I0310 21:06:09.516749   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 00:26 /usr/share/ca-certificates/1156.pem
	I0310 21:06:09.527179   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1156.pem
	I0310 21:06:09.596005   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1156.pem /etc/ssl/certs/51391683.0"
	I0310 21:06:10.025366   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5040.pem && ln -fs /usr/share/ca-certificates/5040.pem /etc/ssl/certs/5040.pem"
	I0310 21:06:10.138944   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5040.pem
	I0310 21:06:10.168270   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 08:36 /usr/share/ca-certificates/5040.pem
	I0310 21:06:10.179323   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5040.pem
	I0310 21:06:10.295851   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5040.pem /etc/ssl/certs/51391683.0"
	I0310 21:06:10.488753   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4052.pem && ln -fs /usr/share/ca-certificates/4052.pem /etc/ssl/certs/4052.pem"
	I0310 21:06:10.578659   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4052.pem
	I0310 21:06:10.634061   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 18:40 /usr/share/ca-certificates/4052.pem
	I0310 21:06:10.639088   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4052.pem
	I0310 21:06:10.728551   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4052.pem /etc/ssl/certs/51391683.0"
	I0310 21:06:10.797503   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3056.pem && ln -fs /usr/share/ca-certificates/3056.pem /etc/ssl/certs/3056.pem"
	I0310 21:06:10.883485   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3056.pem
	I0310 21:06:10.929868   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:11 /usr/share/ca-certificates/3056.pem
	I0310 21:06:10.936388   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3056.pem
	I0310 21:06:11.005571   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3056.pem /etc/ssl/certs/51391683.0"
	I0310 21:06:11.072442   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3516.pem && ln -fs /usr/share/ca-certificates/3516.pem /etc/ssl/certs/3516.pem"
	I0310 21:06:11.140364   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3516.pem
	I0310 21:06:11.222480   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 19:10 /usr/share/ca-certificates/3516.pem
	I0310 21:06:11.261951   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3516.pem
	I0310 21:06:11.330820   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3516.pem /etc/ssl/certs/51391683.0"
	I0310 21:06:11.446276   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6368.pem && ln -fs /usr/share/ca-certificates/6368.pem /etc/ssl/certs/6368.pem"
	I0310 21:06:11.557634   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6368.pem
	I0310 21:06:11.585338   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 19:11 /usr/share/ca-certificates/6368.pem
	I0310 21:06:11.605158   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6368.pem
	I0310 21:06:11.658194   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6368.pem /etc/ssl/certs/51391683.0"
	I0310 21:06:11.755323   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1984.pem && ln -fs /usr/share/ca-certificates/1984.pem /etc/ssl/certs/1984.pem"
	I0310 21:06:11.846379   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1984.pem
	I0310 21:06:11.889703   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 21:55 /usr/share/ca-certificates/1984.pem
	I0310 21:06:11.901709   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1984.pem
	I0310 21:06:11.941603   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1984.pem /etc/ssl/certs/51391683.0"
	I0310 21:06:12.760238   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5396.pem && ln -fs /usr/share/ca-certificates/5396.pem /etc/ssl/certs/5396.pem"
	I0310 21:06:12.847101   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5396.pem
	I0310 21:06:12.896619   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  8 23:38 /usr/share/ca-certificates/5396.pem
	I0310 21:06:12.915894   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5396.pem
	I0310 21:06:12.959824   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5396.pem /etc/ssl/certs/51391683.0"
	I0310 21:06:13.038060   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7024.pem && ln -fs /usr/share/ca-certificates/7024.pem /etc/ssl/certs/7024.pem"
	I0310 21:06:13.185025   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7024.pem
	I0310 21:06:13.208725   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 23:11 /usr/share/ca-certificates/7024.pem
	I0310 21:06:13.229577   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7024.pem
	I0310 21:06:13.297364   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7024.pem /etc/ssl/certs/51391683.0"
	I0310 21:06:13.395520   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3920.pem && ln -fs /usr/share/ca-certificates/3920.pem /etc/ssl/certs/3920.pem"
	I0310 21:06:13.490069   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3920.pem
	I0310 21:06:13.527438   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 22:06 /usr/share/ca-certificates/3920.pem
	I0310 21:06:13.545229   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3920.pem
	I0310 21:06:13.597224   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3920.pem /etc/ssl/certs/51391683.0"
	I0310 21:06:13.659703   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/232.pem && ln -fs /usr/share/ca-certificates/232.pem /etc/ssl/certs/232.pem"
	I0310 21:06:13.782943   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/232.pem
	I0310 21:06:13.814964   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 28 02:13 /usr/share/ca-certificates/232.pem
	I0310 21:06:13.835322   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/232.pem
	I0310 21:06:13.885215   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/232.pem /etc/ssl/certs/51391683.0"
	I0310 21:06:13.985852   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2512.pem && ln -fs /usr/share/ca-certificates/2512.pem /etc/ssl/certs/2512.pem"
	I0310 21:06:14.073418   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2512.pem
	I0310 21:06:14.106679   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:32 /usr/share/ca-certificates/2512.pem
	I0310 21:06:14.118695   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2512.pem
	I0310 21:06:14.175273   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2512.pem /etc/ssl/certs/51391683.0"
	I0310 21:06:14.250057   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9520.pem && ln -fs /usr/share/ca-certificates/9520.pem /etc/ssl/certs/9520.pem"
	I0310 21:06:14.359769   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9520.pem
	I0310 21:06:14.393975   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 14:54 /usr/share/ca-certificates/9520.pem
	I0310 21:06:14.408678   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9520.pem
	I0310 21:06:14.476335   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9520.pem /etc/ssl/certs/51391683.0"
	I0310 21:06:14.555931   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9088.pem && ln -fs /usr/share/ca-certificates/9088.pem /etc/ssl/certs/9088.pem"
	I0310 21:06:14.659196   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9088.pem
	I0310 21:06:14.708754   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 00:22 /usr/share/ca-certificates/9088.pem
	I0310 21:06:14.717719   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9088.pem
	I0310 21:06:14.775952   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9088.pem /etc/ssl/certs/51391683.0"
	I0310 21:06:14.829853   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6496.pem && ln -fs /usr/share/ca-certificates/6496.pem /etc/ssl/certs/6496.pem"
	I0310 21:06:14.913798   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6496.pem
	I0310 21:06:14.946669   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 19:16 /usr/share/ca-certificates/6496.pem
	I0310 21:06:14.959739   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6496.pem
	I0310 21:06:15.010276   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6496.pem /etc/ssl/certs/51391683.0"
	I0310 21:06:15.074672   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7432.pem && ln -fs /usr/share/ca-certificates/7432.pem /etc/ssl/certs/7432.pem"
	I0310 21:06:15.171114   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7432.pem
	I0310 21:06:15.212330   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 17:58 /usr/share/ca-certificates/7432.pem
	I0310 21:06:15.221510   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7432.pem
	I0310 21:06:15.268880   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7432.pem /etc/ssl/certs/51391683.0"
	I0310 21:06:15.349209   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4588.pem && ln -fs /usr/share/ca-certificates/4588.pem /etc/ssl/certs/4588.pem"
	I0310 21:06:15.419916   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4588.pem
	I0310 21:06:15.449779   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  3 21:41 /usr/share/ca-certificates/4588.pem
	I0310 21:06:15.476090   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4588.pem
	I0310 21:06:15.544371   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4588.pem /etc/ssl/certs/51391683.0"
	I0310 21:06:15.646789   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7440.pem && ln -fs /usr/share/ca-certificates/7440.pem /etc/ssl/certs/7440.pem"
	I0310 21:06:15.712004   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7440.pem
	I0310 21:06:15.737150   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 13 14:39 /usr/share/ca-certificates/7440.pem
	I0310 21:06:15.737517   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7440.pem
	I0310 21:06:15.792314   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7440.pem /etc/ssl/certs/51391683.0"
	I0310 21:06:15.867517   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/800.pem && ln -fs /usr/share/ca-certificates/800.pem /etc/ssl/certs/800.pem"
	I0310 21:06:15.933877   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/800.pem
	I0310 21:06:15.969734   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 24 01:48 /usr/share/ca-certificates/800.pem
	I0310 21:06:15.980954   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/800.pem
	I0310 21:06:16.014064   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/800.pem /etc/ssl/certs/51391683.0"
	I0310 21:06:16.092857   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12056.pem && ln -fs /usr/share/ca-certificates/12056.pem /etc/ssl/certs/12056.pem"
	I0310 21:06:16.164695   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/12056.pem
	I0310 21:06:16.202828   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  6 07:21 /usr/share/ca-certificates/12056.pem
	I0310 21:06:16.211988   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12056.pem
	I0310 21:06:16.268877   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12056.pem /etc/ssl/certs/51391683.0"
	I0310 21:06:16.332560   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5700.pem && ln -fs /usr/share/ca-certificates/5700.pem /etc/ssl/certs/5700.pem"
	I0310 21:06:16.448358   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5700.pem
	I0310 21:06:16.476083   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  1 19:58 /usr/share/ca-certificates/5700.pem
	I0310 21:06:16.478637   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5700.pem
	I0310 21:06:16.587109   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5700.pem /etc/ssl/certs/51391683.0"
	I0310 21:06:16.657706   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/352.pem && ln -fs /usr/share/ca-certificates/352.pem /etc/ssl/certs/352.pem"
	I0310 21:06:16.714861   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/352.pem
	I0310 21:06:16.741520   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 12 14:51 /usr/share/ca-certificates/352.pem
	I0310 21:06:16.748263   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/352.pem
	I0310 21:06:16.807220   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/352.pem /etc/ssl/certs/51391683.0"
	I0310 21:06:16.908044   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6492.pem && ln -fs /usr/share/ca-certificates/6492.pem /etc/ssl/certs/6492.pem"
	I0310 21:06:16.989617   11452 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6492.pem
	I0310 21:06:17.030917   11452 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 01:11 /usr/share/ca-certificates/6492.pem
	I0310 21:06:17.046391   11452 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6492.pem
	I0310 21:06:17.096905   11452 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6492.pem /etc/ssl/certs/51391683.0"
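The block of ssh commands above repeats one pattern per certificate: test/link the file under /usr/share/ca-certificates, hash it with `openssl x509 -hash`, then symlink `/etc/ssl/certs/<hash>.0`. Every NNNN.pem in this run resolves to the same subject hash (51391683), so each `ln -fs` overwrites the previous link. A minimal local sketch of that convention (the throwaway certificate and temp directory are assumptions for illustration, not taken from the run, which links into /etc/ssl/certs with sudo):

```shell
set -e
dir=$(mktemp -d)
# Generate a throwaway self-signed certificate to stand in for NNNN.pem.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example" -days 1 \
  -keyout "$dir/key.pem" -out "$dir/cert.pem" 2>/dev/null
# 'openssl x509 -hash' prints the subject-name hash that names the symlink,
# analogous to the 51391683 in /etc/ssl/certs/51391683.0 throughout this log.
hash=$(openssl x509 -hash -noout -in "$dir/cert.pem")
ln -fs "$dir/cert.pem" "$dir/${hash}.0"
ls -l "$dir/${hash}.0"
```

OpenSSL looks certificates up by this hashed name at verification time, which is why the link name, not the .pem filename, is what matters.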
	I0310 21:06:17.149940   11452 kubeadm.go:385] StartCluster: {Name:old-k8s-version-20210310204459-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20210310204459-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.3 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 21:06:17.158943   11452 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0310 21:06:17.799241   11452 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0310 21:06:17.859667   11452 kubeadm.go:396] found existing configuration files, will attempt cluster restart
	I0310 21:06:17.859954   11452 kubeadm.go:594] restartCluster start
	I0310 21:06:17.869852   11452 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0310 21:06:17.952891   11452 kubeadm.go:125] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0310 21:06:17.962163   11452 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20210310204459-6496
	I0310 21:06:18.633230   11452 kubeconfig.go:117] verify returned: extract IP: "old-k8s-version-20210310204459-6496" does not appear in C:\Users\jenkins/.kube/config
	I0310 21:06:18.635119   11452 kubeconfig.go:128] "old-k8s-version-20210310204459-6496" context is missing from C:\Users\jenkins/.kube/config - will repair!
	I0310 21:06:18.638344   11452 lock.go:36] WriteFile acquiring C:\Users\jenkins/.kube/config: {Name:mk815881434fcb75cb1a8bc7f2c24cf067660bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:06:18.704150   11452 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0310 21:06:18.804350   11452 api_server.go:146] Checking apiserver status ...
	I0310 21:06:18.815951   11452 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0310 21:06:18.947227   11452 api_server.go:150] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0310 21:06:18.947483   11452 kubeadm.go:573] needs reconfigure: apiserver in state Stopped
	I0310 21:06:18.947483   11452 kubeadm.go:1042] stopping kube-system containers ...
	I0310 21:06:18.955222   11452 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0310 21:06:19.642497   11452 docker.go:261] Stopping containers: [93d00a3ca57a 3fa6bb8a56ae a32eecb90b1b f6d5d44ee6e5 d960ab78b04e 8543b072b7ef 2e8f26a227cb c248621e24fa 88497c18555d 51b22bf15449]
	I0310 21:06:19.650586   11452 ssh_runner.go:149] Run: docker stop 93d00a3ca57a 3fa6bb8a56ae a32eecb90b1b f6d5d44ee6e5 d960ab78b04e 8543b072b7ef 2e8f26a227cb c248621e24fa 88497c18555d 51b22bf15449
	I0310 21:06:20.184867   11452 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0310 21:06:20.314455   11452 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0310 21:06:20.369100   11452 kubeadm.go:153] found existing configuration files:
	-rw------- 1 root root 5743 Mar 10 20:57 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5783 Mar 10 20:57 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5923 Mar 10 20:57 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5727 Mar 10 20:57 /etc/kubernetes/scheduler.conf
	
	I0310 21:06:20.393275   11452 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0310 21:06:20.609705   11452 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0310 21:06:20.745984   11452 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0310 21:06:20.864911   11452 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0310 21:06:20.997980   11452 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0310 21:06:21.123712   11452 kubeadm.go:670] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0310 21:06:21.124457   11452 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0310 21:06:23.880124   11452 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml": (2.7556727s)
	I0310 21:06:23.880124   11452 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0310 21:06:41.636041   11452 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (17.7559524s)
	I0310 21:06:41.636041   11452 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0310 21:06:46.498300   11452 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml": (4.8622677s)
	I0310 21:06:46.498694   11452 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0310 21:06:48.039255   11452 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml": (1.5405638s)
	I0310 21:06:48.039255   11452 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
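The restartCluster path above replays individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than running a full `kubeadm init`. A dry-run sketch of that sequence, with the config path and binary PATH taken from this log; the commands are echoed rather than executed, since the real run needs root on the node:

```shell
cfg=/var/tmp/minikube/kubeadm.yaml
kubeadm_path=/var/lib/minikube/binaries/v1.14.0
for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
  # Echo only: the real run executes each of these under sudo over ssh.
  echo "sudo env PATH=${kubeadm_path}:\$PATH kubeadm init phase ${phase} --config ${cfg}"
done
```

Running phases individually lets minikube reuse the existing cluster state (certs and etcd data survive) instead of wiping and re-bootstrapping the node.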
	I0310 21:06:48.801644   11452 kubeadm.go:687] waiting for restarted kubelet to initialise ...
	I0310 21:06:48.815230   11452 retry.go:31] will retry after 276.165072ms: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:06:49.099699   11452 retry.go:31] will retry after 540.190908ms: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:06:49.653105   11452 retry.go:31] will retry after 655.06503ms: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:06:50.319233   11452 retry.go:31] will retry after 791.196345ms: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:06:51.127823   11452 retry.go:31] will retry after 1.170244332s: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:06:52.336661   11452 retry.go:31] will retry after 2.253109428s: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:06:54.606018   11452 retry.go:31] will retry after 1.610739793s: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:06:56.225972   11452 retry.go:31] will retry after 2.804311738s: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:06:59.047718   11452 retry.go:31] will retry after 3.824918958s: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:07:02.880767   11452 retry.go:31] will retry after 7.69743562s: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:07:10.585412   11452 retry.go:31] will retry after 14.635568968s: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:07:25.237439   11452 retry.go:31] will retry after 28.406662371s: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
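The `retry.go:31` lines above show a roughly-doubling backoff (276ms, 540ms, ... up to ~28s) while the apiserver comes up. A minimal sketch of plain exponential doubling, without the randomized growth minikube's retry adds (illustrative only, not minikube's retry.go):

```shell
delay=0.3
for attempt in 1 2 3 4 5 6; do
  echo "attempt ${attempt}: will retry after ${delay}s"
  # Double the delay each round; the real implementation jitters the factor,
  # which is why the intervals in the log are not exact powers of two.
  delay=$(awk -v d="$delay" 'BEGIN { printf "%.1f", d * 2 }')
done
```

Doubling with jitter keeps early retries cheap while preventing a thundering herd of identical probe intervals against a slow apiserver.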
	I0310 21:08:03.648102   11452 kubeadm.go:704] kubelet initialised
	I0310 21:08:03.648102   11452 kubeadm.go:705] duration metric: took 1m14.8465949s waiting for restarted kubelet to initialise ...
	I0310 21:08:03.648102   11452 pod_ready.go:36] extra waiting for kube-system core pods [kube-dns etcd kube-apiserver kube-controller-manager kube-proxy kube-scheduler] to be Ready ...
	I0310 21:08:03.648315   11452 pod_ready.go:59] waiting 4m0s for pod with "kube-dns" label in "kube-system" namespace to be Ready ...
	I0310 21:08:11.895581   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:08:12.406760   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:08:12.907281   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:08:13.412480   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:08:13.904011   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:08:14.404483   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:08:14.903398   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:08:15.406317   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:08:15.909015   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:08:16.412236   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:08:16.921185   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:08:17.409497   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:08:17.909917   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:08:18.410303   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:08:18.915535   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:08:19.403771   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:08:19.904436   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:08:20.401814   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:08:20.902610   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:08:21.403686   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:08:21.905329   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:08:22.403463   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:08:22.903190   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:08:23.404222   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:08:23.906721   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:08:24.408282   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:08:24.906204   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:08:25.403002   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:08:25.904533   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:08:26.401140   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:08:26.913497   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:08:27.401895   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:08:27.911818   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:08:28.407413   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:08:28.902942   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:08:29.403210   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:08:39.897079   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	I0310 21:08:50.404938   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	I0310 21:09:00.901120   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	I0310 21:09:11.399061   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	I0310 21:09:21.902146   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	I0310 21:09:32.400233   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	I0310 21:09:42.897724   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	I0310 21:09:53.397464   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	I0310 21:09:59.988788   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:00.806087   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:01.061441   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:02.199122   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:03.183785   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:03.579819   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:04.367036   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:04.492512   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:05.119207   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:05.590058   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:05.994959   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:06.619431   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:07.110925   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:07.640130   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:08.056592   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:09.627095   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:10.358648   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:10.606421   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:11.075027   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:11.710778   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:12.188980   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:12.847392   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:14.379595   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:14.769729   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:15.454034   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:16.649246   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:17.245713   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:18.379839   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:18.530538   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:19.559592   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:20.435121   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:21.123082   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:21.501649   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:22.090272   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:22.479067   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:22.960300   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:23.513921   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:23.999500   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:24.446110   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:25.414395   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:26.715007   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:26.950110   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:27.695669   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:28.043565   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:28.596812   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:28.944311   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:29.476095   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:29.958825   11452 pod_ready.go:109] pod with "kube-dns" label in "kube-system" namespace was not found, will retry
	I0310 21:10:30.426350   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:10:30.904952   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:10:31.404878   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:10:31.903868   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:10:32.407755   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:10:32.918900   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:10:33.405498   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:10:33.901477   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:10:34.405535   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:10:34.910291   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:10:35.401535   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:10:35.903658   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:10:36.406381   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:10:36.903136   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:10:37.404998   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:10:37.905106   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:10:38.408433   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:10:38.910412   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:10:39.404291   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:10:39.903293   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:10:40.401887   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:10:40.902788   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:10:41.404881   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:10:41.916703   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:10:42.408568   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:10:42.903290   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:10:43.405976   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:10:53.899104   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	I0310 21:11:04.399391   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	I0310 21:11:14.898520   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	I0310 21:11:25.399831   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	I0310 21:11:35.898462   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	I0310 21:11:46.398780   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	I0310 21:11:56.899537   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	I0310 21:12:07.399727   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	I0310 21:12:07.895314   11452 pod_ready.go:62] duration metric: took 4m4.2473827s to run WaitForPodReadyByLabel for pod with "kube-dns" label in "kube-system" namespace ...
	I0310 21:12:07.895314   11452 pod_ready.go:59] waiting 4m0s for pod with "etcd" label in "kube-system" namespace to be Ready ...
	I0310 21:12:17.900154   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	I0310 21:12:28.404949   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	I0310 21:12:37.267751   11452 pod_ready.go:97] pod "etcd-old-k8s-version-20210310204459-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:08:14 +0000 GMT Reason: Message:}
	I0310 21:12:37.267751   11452 pod_ready.go:62] duration metric: took 29.3724782s to run WaitForPodReadyByLabel for pod with "etcd" label in "kube-system" namespace ...
	I0310 21:12:37.268055   11452 pod_ready.go:59] waiting 4m0s for pod with "kube-apiserver" label in "kube-system" namespace to be Ready ...
	I0310 21:12:37.341155   11452 pod_ready.go:97] pod "kube-apiserver-old-k8s-version-20210310204459-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 20:58:33 +0000 GMT Reason: Message:}
	I0310 21:12:37.341155   11452 pod_ready.go:62] duration metric: took 73.1004ms to run WaitForPodReadyByLabel for pod with "kube-apiserver" label in "kube-system" namespace ...
	I0310 21:12:37.341155   11452 pod_ready.go:59] waiting 4m0s for pod with "kube-controller-manager" label in "kube-system" namespace to be Ready ...
	I0310 21:12:38.444306   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:12:39.676364   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:12:40.812667   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:12:42.057695   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:12:43.470722   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:12:44.606164   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:12:45.734123   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:12:46.804581   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:12:48.030376   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:12:49.199707   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:12:51.280979   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:12:53.272827   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:12:56.598226   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:12:57.839214   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:12:59.277594   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:13:00.804139   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:13:01.868486   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:13:03.762450   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:13:05.609769   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:13:07.699476   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:13:09.215487   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:13:10.222082   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:13:11.664728   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:13:14.018661   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:13:15.188555   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:13:19.310940   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:13:20.542067   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:13:21.744034   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:13:27.427348   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:13:32.379163   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:13:33.678057   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:13:36.811079   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:13:42.113288   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:13:43.191321   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:13:44.682198   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:13:45.813059   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:13:47.104739   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:13:48.481166   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:13:50.478535   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:13:52.097972   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:13:53.167550   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:13:54.660908   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:13:56.043628   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:13:57.390515   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:13:58.421032   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:14:00.744278   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:14:01.907357   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:14:03.282501   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:14:04.519470   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:14:06.185863   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:14:07.609393   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:14:08.728659   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:14:10.605359   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:14:11.775469   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:14:13.031040   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:14:14.854875   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:10:24 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:14:16.109061   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:11:53 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:14:18.375635   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:11:53 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:14:19.414942   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:11:53 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:14:22.540932   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:11:53 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:14:25.659175   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:11:53 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:14:27.265850   11452 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-20210310204459-6496" in "kube-system" namespace is not Ready: {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:11:53 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-controller-manager]}
	I0310 21:14:30.236809   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": http2: server sent GOAWAY and closed the connection; LastStreamID=245, ErrCode=NO_ERROR, debug=""
	I0310 21:14:30.518110   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:31.008764   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:31.515416   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:32.008526   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:32.511618   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:33.006554   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:33.508835   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:34.006012   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:34.508808   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:35.008527   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:35.503061   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:36.012759   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:36.507550   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:37.022896   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:37.504077   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:38.005885   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:38.516331   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:39.014402   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:39.512788   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:40.006823   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:40.506866   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:41.011998   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:41.501870   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:42.009052   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:42.508741   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:43.012072   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:43.511188   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:44.009340   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:44.505571   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:45.007257   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:45.507791   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:46.009858   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:46.523931   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:47.010222   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:47.507744   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:48.005253   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:48.527681   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:49.007192   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:49.508468   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:50.009531   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:50.512473   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:51.020200   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:51.503133   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:52.006227   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:52.508045   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:53.008513   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:53.511441   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:54.014370   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:54.518560   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:55.021296   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:55.518974   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:56.009392   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:56.504675   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:57.009681   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:57.509160   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:58.007650   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:58.506651   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:59.006679   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:14:59.504824   11452 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55163/api/v1/namespaces/kube-system/pods": EOF

** /stderr **
start_stop_delete_test.go:199: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p old-k8s-version-20210310204459-6496 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.14.0": exit status 1
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:226: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:227: (dbg) Run:  docker inspect old-k8s-version-20210310204459-6496
helpers_test.go:231: (dbg) docker inspect old-k8s-version-20210310204459-6496:

-- stdout --
	[
	    {
	        "Id": "d7a1199287f1fdb7f8b058979a38f733bc3317511901e4ffb9c95e80b30cb419",
	        "Created": "2021-03-10T20:45:16.4180529Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 265644,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-03-10T21:02:53.6945401Z",
	            "FinishedAt": "2021-03-10T21:02:38.7364572Z"
	        },
	        "Image": "sha256:a776c544501ab7f8d55c0f9d8df39bc284df5e744ef1ab4fa59bbd753c98d5f6",
	        "ResolvConfPath": "/var/lib/docker/containers/d7a1199287f1fdb7f8b058979a38f733bc3317511901e4ffb9c95e80b30cb419/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d7a1199287f1fdb7f8b058979a38f733bc3317511901e4ffb9c95e80b30cb419/hostname",
	        "HostsPath": "/var/lib/docker/containers/d7a1199287f1fdb7f8b058979a38f733bc3317511901e4ffb9c95e80b30cb419/hosts",
	        "LogPath": "/var/lib/docker/containers/d7a1199287f1fdb7f8b058979a38f733bc3317511901e4ffb9c95e80b30cb419/d7a1199287f1fdb7f8b058979a38f733bc3317511901e4ffb9c95e80b30cb419-json.log",
	        "Name": "/old-k8s-version-20210310204459-6496",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20210310204459-6496:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0c787353156cac5cce99362e2972764f7959adcd0ea6e0691e479c3a350d5be1-init/diff:/var/lib/docker/overlay2/e5312d15a8a915e93a04215cc746ab0f3ee777399eec34d39c62e1159ded6475/diff:/var/lib/docker/overlay2/7a0a882052451678339ecb9d7440ec6d5af2189761df37cbde268f01c77df460/diff:/var/lib/docker/overlay2/746bde091f748e661ba6c95c88447941058b3ed556ae8093cbbb4e946b8cc8fc/diff:/var/lib/docker/overlay2/fc816087ea38e2594891ac87d2dada91401ed5005ffde006568049c9be7a9cb3/diff:/var/lib/docker/overlay2/c938fe5d14accfe7fb1cfd038f5f0c36e7c4e36df3f270be6928a759dfc0318c/diff:/var/lib/docker/overlay2/79f1d61f3af3e525b3f9a25c7bdaddec8275b7d0abab23c045e084119ae755dc/diff:/var/lib/docker/overlay2/90cf1530750ebebdcb04a136dc1eff0ae9c5f7d0f826d2942e3bc67cc0d42206/diff:/var/lib/docker/overlay2/c4edb0a62bb52174a4a17530aa58139aaca1b2f67bef36b9ac59af93c93e2c94/diff:/var/lib/docker/overlay2/684c18bb05910b09352e8708238b2fa6512de91e323e59bd78eacb12954baeb1/diff:/var/lib/docker/overlay2/2e3b944d36fea0e106f5f8885b981d040914c224a656b6480e4ccf00c624527c/diff:/var/lib/docker/overlay2/87489103e2d5dffa2fc32cae802418fe57adcbe87d86e2828e51be8620ef72d6/diff:/var/lib/docker/overlay2/1ceb557df677600f55a42e739f85c2aaa6ce2a1311fb93d5cc655d3e5c25ad62/diff:/var/lib/docker/overlay2/a49e38adf04466e822fa1b2fef7a5de41bb43b706f64d6d440f7e9f95f86634d/diff:/var/lib/docker/overlay2/9dc51530b3628075c33d53a96d8db831d206f65190ca62a4730d525c9d2433bc/diff:/var/lib/docker/overlay2/53f43d443cc953bcbb3b9fe305be45d40de2233dd6a1e94a6e0120c143df01b9/diff:/var/lib/docker/overlay2/5449e801ebacbe613538db4de658f2419f742a0327f7c0f5079ee525f64c2f45/diff:/var/lib/docker/overlay2/ea06a62429770eade2e9df8b04495201586fe6a867b2d3f827db06031f237171/diff:/var/lib/docker/overlay2/d6c5e6e0bb04112b08a45e5a42f364eaa6a1d4d0384b8833c6f268a32b51f9fd/diff:/var/lib/docker/overlay2/6ccb699e5aa7954f17536e35273361a64a06f22e3951b5adad1acd3645302d27/diff:/var/lib/docker/overlay2/004fc7885cd3ddf4f532a6047a393b87947996f738f36ee3e048130b82676264/diff:/var/lib/docker/overlay2/28ff0102771d8fb3c2957c482cc5dde1a6d041144f1be78d385fd4a305b89048/diff:/var/lib/docker/overlay2/7cc8b00a8717390d3cf217bc23ebf0a54386c4dda751f8efe67a113c5f724000/diff:/var/lib/docker/overlay2/4f20f5888dc26fc4b177fad4ffbb2c9e5094bbe36dffc15530b13ee98b5ee0de/diff:/var/lib/docker/overlay2/5070ba0e8a3df0c0cb4c34bb528c4e1aa4cf13d689f2bbbb0a4033d021c3be55/diff:/var/lib/docker/overlay2/85fb29f9d3603bf7f2eb144a096a9fa432e57ecf575b0c26cc8e06a79e791202/diff:/var/lib/docker/overlay2/27786674974d44dc21a18719c8c334da44a78f825923f5e86233f5acb0fb9a6b/diff:/var/lib/docker/overlay2/6161fd89f27732c46f547c0e38ff9f16ab0dded81cef66938df3935f21afad40/diff:/var/lib/docker/overlay2/222f85b8439a41d62b9939a4060303543930a89e81bc2fbb3f5211b3bcc73501/diff:/var/lib/docker/overlay2/0ab769dc143c949b9367c48721351c8571d87199847b23acedce0134a8cf4d0d/diff:/var/lib/docker/overlay2/a3d1b5f3c1ccf55415e08e306d69739f2d6a4346a4b20ff9273ac1417c146a72/diff:/var/lib/docker/overlay2/a6111408a4deb25c6259bdabf49b0b77a4ae38a1c9c76c9dd238ab00e4377edd/diff:/var/lib/docker/overlay2/beb22abaee6a1cc2ce6935aa31a8ea6aa14ba428d35b5c2423d1d9e4b5b1e88e/diff:/var/lib/docker/overlay2/0c228b1d0cd4cc1d3df10a7397af2a91846bfbad745d623822dc200ff7ddd4f7/diff:/var/lib/docker/overlay2/3c7f265116e8b14ac4c171583da8e68ad42c63e9faa735c10d5b01c2889c03d8/diff:/var/lib/docker/overlay2/85c9b77072d5575bcf4bc334d0e85cccec247e2ff21bc8a181ee7c1411481a92/diff:/var/lib/docker/overlay2/1b4797f63f1bf0acad8cd18715c737239cbce844810baf1378b65224c01957ff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0c787353156cac5cce99362e2972764f7959adcd0ea6e0691e479c3a350d5be1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0c787353156cac5cce99362e2972764f7959adcd0ea6e0691e479c3a350d5be1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0c787353156cac5cce99362e2972764f7959adcd0ea6e0691e479c3a350d5be1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20210310204459-6496",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20210310204459-6496/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20210310204459-6496",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20210310204459-6496",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20210310204459-6496",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1f8e0905a7f62bf44cf454f8103c113efc21f92362832ee48b968a403b653c0e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55166"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55165"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55162"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55164"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55163"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1f8e0905a7f6",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "fd6a52eae142ad2c25ea262b4fa78725848a1c018d8617e6378d0e3903b4cfe0",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.3",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:03",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "08a9fabe5ecafdd8ee96dd0a14d1906037e82623ac7365c62552213da5f59cf7",
	                    "EndpointID": "fd6a52eae142ad2c25ea262b4fa78725848a1c018d8617e6378d0e3903b4cfe0",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.3",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:03",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20210310204459-6496 -n old-k8s-version-20210310204459-6496

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
helpers_test.go:235: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20210310204459-6496 -n old-k8s-version-20210310204459-6496: exit status 2 (1m45.5174403s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	E0310 21:16:46.978371   20512 status.go:405] Error apiserver status: https://127.0.0.1:55163/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	

** /stderr **
helpers_test.go:235: status error: exit status 2 (may be ok)
helpers_test.go:240: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:241: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe -p old-k8s-version-20210310204459-6496 logs -n 25

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
helpers_test.go:243: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p old-k8s-version-20210310204459-6496 logs -n 25: exit status 110 (2m4.4589749s)

-- stdout --
	* ==> Docker <==
	* -- Logs begin at Wed 2021-03-10 21:02:58 UTC, end at Wed 2021-03-10 21:17:45 UTC. --
	* Mar 10 21:03:03 old-k8s-version-20210310204459-6496 dockerd[213]: time="2021-03-10T21:03:03.776371500Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Mar 10 21:03:03 old-k8s-version-20210310204459-6496 dockerd[213]: time="2021-03-10T21:03:03.776525400Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	* Mar 10 21:03:03 old-k8s-version-20210310204459-6496 dockerd[213]: time="2021-03-10T21:03:03.777084100Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Mar 10 21:03:03 old-k8s-version-20210310204459-6496 dockerd[213]: time="2021-03-10T21:03:03.788506500Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Mar 10 21:03:03 old-k8s-version-20210310204459-6496 dockerd[213]: time="2021-03-10T21:03:03.797927200Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Mar 10 21:03:03 old-k8s-version-20210310204459-6496 dockerd[213]: time="2021-03-10T21:03:03.798297300Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	* Mar 10 21:03:03 old-k8s-version-20210310204459-6496 dockerd[213]: time="2021-03-10T21:03:03.798456100Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Mar 10 21:03:03 old-k8s-version-20210310204459-6496 dockerd[213]: time="2021-03-10T21:03:03.920270500Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	* Mar 10 21:03:04 old-k8s-version-20210310204459-6496 dockerd[213]: time="2021-03-10T21:03:04.106952400Z" level=info msg="Loading containers: start."
	* Mar 10 21:03:08 old-k8s-version-20210310204459-6496 dockerd[213]: time="2021-03-10T21:03:08.361304600Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	* Mar 10 21:03:09 old-k8s-version-20210310204459-6496 dockerd[213]: time="2021-03-10T21:03:09.782737700Z" level=info msg="Loading containers: done."
	* Mar 10 21:03:10 old-k8s-version-20210310204459-6496 dockerd[213]: time="2021-03-10T21:03:10.924930200Z" level=info msg="Docker daemon" commit=46229ca graphdriver(s)=overlay2 version=20.10.3
	* Mar 10 21:03:10 old-k8s-version-20210310204459-6496 dockerd[213]: time="2021-03-10T21:03:10.925126700Z" level=info msg="Daemon has completed initialization"
	* Mar 10 21:03:11 old-k8s-version-20210310204459-6496 systemd[1]: Started Docker Application Container Engine.
	* Mar 10 21:03:12 old-k8s-version-20210310204459-6496 dockerd[213]: time="2021-03-10T21:03:12.487230400Z" level=info msg="API listen on [::]:2376"
	* Mar 10 21:03:12 old-k8s-version-20210310204459-6496 dockerd[213]: time="2021-03-10T21:03:12.908490600Z" level=info msg="API listen on /var/run/docker.sock"
	* Mar 10 21:08:12 old-k8s-version-20210310204459-6496 dockerd[213]: time="2021-03-10T21:08:12.765087100Z" level=info msg="ignoring event" container=7d9274cfe3d49359909aa03348308b4483232f8e6c4d86e51992ef2dd945d024 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:08:18 old-k8s-version-20210310204459-6496 dockerd[213]: time="2021-03-10T21:08:18.339135300Z" level=info msg="ignoring event" container=12ed0ee17be3db7bd5998178a5735050cf9e05ca5ef9b11806a5443c5e0a9717 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:09:50 old-k8s-version-20210310204459-6496 dockerd[213]: time="2021-03-10T21:09:50.207525600Z" level=info msg="ignoring event" container=d811294e3dab989a427f4cb94b97a8a98aa66ac34b99971ead398604ac0c1dd6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:10:31 old-k8s-version-20210310204459-6496 dockerd[213]: time="2021-03-10T21:10:31.853130300Z" level=info msg="ignoring event" container=703419ba882062989f8d4aabc10a140d4c3a74b6b00bff206ac22578e5bf6bdf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:11:47 old-k8s-version-20210310204459-6496 dockerd[213]: time="2021-03-10T21:11:47.128704100Z" level=info msg="ignoring event" container=ef56e7fbbd25b9f5e2bb950d6b1a5d213a73c81629b0844322246e18878a8ccd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:14:36 old-k8s-version-20210310204459-6496 dockerd[213]: time="2021-03-10T21:14:36.904367100Z" level=info msg="ignoring event" container=3cfe005e6d79c55d1d60125ed0265961159002896ae465c2a8a9141e74e54d4b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:14:38 old-k8s-version-20210310204459-6496 dockerd[213]: time="2021-03-10T21:14:38.120232200Z" level=info msg="ignoring event" container=39918a6e888a039af1e71d56a636efe803741262deefcc357f433f3e74d92c73 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:17:20 old-k8s-version-20210310204459-6496 dockerd[213]: time="2021-03-10T21:17:20.530561800Z" level=info msg="ignoring event" container=65eb6e51b3bfbe81425a2a308149b52ac7ec78a43333f198a42d0326bb328619 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:17:36 old-k8s-version-20210310204459-6496 dockerd[213]: time="2021-03-10T21:17:36.389174600Z" level=info msg="ignoring event" container=52b27338f4dc81c1437e80d550cdb8c75ad2117bd99397c9dab16d7096fc1e11 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	* 65eb6e51b3bfb       b95b1efa0436b       About a minute ago   Exited              kube-controller-manager   8                   b45e848e030ab
	* 52b27338f4dc8       ecf910f40d6e0       3 minutes ago        Exited              kube-apiserver            5                   f7a599b23e3d0
	* 1e5a622dbe0d2       2c4adeb21b4ff       9 minutes ago        Running             etcd                      1                   221e90b8731e4
	* ead0693ca8e7c       00638a24688b0       10 minutes ago       Running             kube-scheduler            1                   4dd30d2505a69
	* 8543b072b7ef4       00638a24688b0       20 minutes ago       Exited              kube-scheduler            0                   c248621e24fad
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.000006]  __hrtimer_run_queues+0x117/0x1c4
	* [  +0.000004]  ? ktime_get_update_offsets_now+0x36/0x95
	* [  +0.000002]  hrtimer_interrupt+0x92/0x165
	* [  +0.000004]  hv_stimer0_isr+0x20/0x2d
	* [  +0.000008]  hv_stimer0_vector_handler+0x3b/0x57
	* [  +0.000010]  hv_stimer0_callback_vector+0xf/0x20
	* [  +0.000001]  </IRQ>
	* [  +0.000002] RIP: 0010:native_safe_halt+0x7/0x8
	* [  +0.000002] Code: 60 02 df f0 83 44 24 fc 00 48 8b 00 a8 08 74 0b 65 81 25 dd ce 6f 71 ff ff ff 7f c3 e8 ce e6 72 ff f4 c3 e8 c7 e6 72 ff fb f4 <c3> 0f 1f 44 00 00 53 e8 69 0e 82 ff 65 8b 35 83 64 6f 71 31 ff e8
	* [  +0.000001] RSP: 0018:ffffffff8f203eb0 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff12
	* [  +0.000002] RAX: ffffffff8e918b30 RBX: 0000000000000000 RCX: ffffffff8f253150
	* [  +0.000001] RDX: 000000000012167e RSI: 0000000000000000 RDI: 0000000000000001
	* [  +0.000001] RBP: 0000000000000000 R08: 00000066a1710248 R09: 0000006be2541d3e
	* [  +0.000001] R10: ffff9130ad802288 R11: 0000000000000000 R12: 0000000000000000
	* [  +0.000001] R13: ffffffff8f215780 R14: 00000000f6d76244 R15: 0000000000000000
	* [  +0.000002]  ? __sched_text_end+0x1/0x1
	* [  +0.000011]  default_idle+0x1b/0x2c
	* [  +0.000001]  do_idle+0xe5/0x216
	* [  +0.000003]  cpu_startup_entry+0x6f/0x71
	* [  +0.000003]  start_kernel+0x4f6/0x514
	* [  +0.000006]  secondary_startup_64+0xa4/0xb0
	* [  +0.000006] ---[ end trace 8aa9ce4b885e8e86 ]---
	* [ +25.977799] hrtimer: interrupt took 3356400 ns
	* [Mar10 19:08] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	* [Mar10 19:49] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	* 
	* ==> etcd [1e5a622dbe0d] <==
	* 2021-03-10 21:17:04.207003 W | etcdserver: read-only range request "key:\"/registry/clusterroles/system:auth-delegator\" " with result "range_response_count:1 size:407" took too long (218.7139ms) to execute
	* 2021-03-10 21:17:04.234574 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (364.8977ms) to execute
	* 2021-03-10 21:17:04.869334 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (164.8417ms) to execute
	* 2021-03-10 21:17:05.765541 W | etcdserver: read-only range request "key:\"/registry/statefulsets\" range_end:\"/registry/statefulsett\" count_only:true " with result "range_response_count:0 size:5" took too long (317.6859ms) to execute
	* 2021-03-10 21:17:05.766301 W | etcdserver: read-only range request "key:\"/registry/clusterroles/system:kube-dns\" " with result "range_response_count:1 size:332" took too long (126.3974ms) to execute
	* 2021-03-10 21:17:05.871181 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (247.5491ms) to execute
	* 2021-03-10 21:17:05.887485 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (200.8323ms) to execute
	* 2021-03-10 21:17:07.713663 W | etcdserver: read-only range request "key:\"/registry/clusterroles/system:controller:endpoint-controller\" " with result "range_response_count:1 size:495" took too long (118.2791ms) to execute
	* 2021-03-10 21:17:13.863869 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-old-k8s-version-20210310204459-6496\" " with result "range_response_count:1 size:3577" took too long (126.8582ms) to execute
	* 2021-03-10 21:17:14.252997 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/system:controller:pod-garbage-collector\" " with result "range_response_count:1 size:495" took too long (497.7685ms) to execute
	* 2021-03-10 21:17:14.342564 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (517.5963ms) to execute
	* 2021-03-10 21:17:16.622964 W | etcdserver: read-only range request "key:\"/registry/deployments\" range_end:\"/registry/deploymentt\" count_only:true " with result "range_response_count:0 size:5" took too long (104.3374ms) to execute
	* 2021-03-10 21:17:16.623491 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (122.3316ms) to execute
	* 2021-03-10 21:17:17.073375 W | etcdserver: request "header:<ID:12691275820038656336 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.17.0.3\" mod_revision:0 > success:<request_put:<key:\"/registry/masterleases/172.17.0.3\" value_size:65 lease:3467903783183880526 >> failure:<request_range:<key:\"/registry/masterleases/172.17.0.3\" > >>" with result "size:16" took too long (251.6557ms) to execute
	* 2021-03-10 21:17:17.073499 W | etcdserver: read-only range request "key:\"/registry/namespaces\" range_end:\"/registry/namespacet\" count_only:true " with result "range_response_count:0 size:7" took too long (109.6359ms) to execute
	* 2021-03-10 21:17:19.424352 W | etcdserver: request "header:<ID:12691275820038656341 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-controller-manager-old-k8s-version-20210310204459-6496.166b173f44c9cd74\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-controller-manager-old-k8s-version-20210310204459-6496.166b173f44c9cd74\" value_size:478 lease:3467903783183880476 >> failure:<>>" with result "size:16" took too long (413.846ms) to execute
	* 2021-03-10 21:17:20.407193 W | etcdserver: request "header:<ID:12691275820038656342 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-old-k8s-version-20210310204459-6496\" mod_revision:298 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-old-k8s-version-20210310204459-6496\" value_size:3471 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-old-k8s-version-20210310204459-6496\" > >>" with result "size:16" took too long (241.3489ms) to execute
	* 2021-03-10 21:17:22.258853 W | etcdserver: read-only range request "key:\"/registry/minions\" range_end:\"/registry/miniont\" count_only:true " with result "range_response_count:0 size:7" took too long (131.3869ms) to execute
	* 2021-03-10 21:17:22.275700 W | etcdserver: read-only range request "key:\"/registry/clusterroles\" range_end:\"/registry/clusterrolet\" count_only:true " with result "range_response_count:0 size:7" took too long (149.6723ms) to execute
	* 2021-03-10 21:17:22.285148 W | etcdserver: read-only range request "key:\"/registry/events\" range_end:\"/registry/eventt\" count_only:true " with result "range_response_count:0 size:7" took too long (213.4102ms) to execute
	* 2021-03-10 21:17:27.383987 W | etcdserver: request "header:<ID:12691275820038656364 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.17.0.3\" mod_revision:324 > success:<request_put:<key:\"/registry/masterleases/172.17.0.3\" value_size:65 lease:3467903783183880554 >> failure:<request_range:<key:\"/registry/masterleases/172.17.0.3\" > >>" with result "size:16" took too long (674.0115ms) to execute
	* 2021-03-10 21:17:28.054068 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-apiserver-old-k8s-version-20210310204459-6496\" " with result "error:context canceled" took too long (223.8047ms) to execute
	* 2021-03-10 21:17:28.109095 W | etcdserver: read-only range request "key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" " with result "range_response_count:1 size:129" took too long (555.3024ms) to execute
	* 2021-03-10 21:17:28.159893 W | etcdserver: request "header:<ID:12691275820038656368 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-controller-manager-old-k8s-version-20210310204459-6496.166b17415877b244\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-controller-manager-old-k8s-version-20210310204459-6496.166b17415877b244\" value_size:478 lease:3467903783183880476 >> failure:<>>" with result "size:16" took too long (599.7043ms) to execute
	* 2021-03-10 21:17:28.233972 W | etcdserver: read-only range request "key:\"/registry/horizontalpodautoscalers\" range_end:\"/registry/horizontalpodautoscalert\" count_only:true " with result "range_response_count:0 size:5" took too long (127.1933ms) to execute
	* 
	* ==> kernel <==
	*  21:18:07 up  2:18,  0 users,  load average: 125.12, 147.33, 146.53
	* Linux old-k8s-version-20210310204459-6496 4.19.121-linuxkit #1 SMP Tue Dec 1 17:50:32 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [52b27338f4dc] <==
	* I0310 21:17:26.722034       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	* I0310 21:17:26.722367       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	* I0310 21:17:27.468099       1 trace.go:81] Trace[845357900]: "GuaranteedUpdate etcd3: *v1.Endpoints" (started: 2021-03-10 21:17:26.6879284 +0000 UTC m=+142.689421301) (total time: 780.0925ms):
	* Trace[845357900]: [780.0404ms] [762.1606ms] Transaction committed
	* I0310 21:17:27.544517       1 trace.go:81] Trace[835601160]: "Get /api/v1/namespaces/kube-system/pods/kube-apiserver-old-k8s-version-20210310204459-6496" (started: 2021-03-10 21:17:26.8210504 +0000 UTC m=+142.822543301) (total time: 723.4263ms):
	* Trace[835601160]: [722.4816ms] [722.4604ms] About to write a response
	* I0310 21:17:27.722482       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	* I0310 21:17:27.722696       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	* I0310 21:17:27.924989       1 available_controller.go:332] Shutting down AvailableConditionController
	* I0310 21:17:27.925060       1 apiservice_controller.go:106] Shutting down APIServiceRegistrationController
	* I0310 21:17:27.925093       1 crdregistration_controller.go:143] Shutting down crd-autoregister controller
	* I0310 21:17:27.925192       1 crd_finalizer.go:254] Shutting down CRDFinalizer
	* I0310 21:17:27.934387       1 establishing_controller.go:84] Shutting down EstablishingController
	* I0310 21:17:27.934417       1 naming_controller.go:295] Shutting down NamingConditionController
	* I0310 21:17:27.934441       1 customresource_discovery_controller.go:219] Shutting down DiscoveryController
	* I0310 21:17:27.934602       1 autoregister_controller.go:163] Shutting down autoregister controller
	* I0310 21:17:27.972923       1 controller.go:87] Shutting down OpenAPI AggregationController
	* I0310 21:17:27.973172       1 controller.go:176] Shutting down kubernetes service endpoint reconciler
	* I0310 21:17:27.978613       1 secure_serving.go:160] Stopped listening on [::]:8443
	* E0310 21:17:27.986436       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
	* I0310 21:17:27.986832       1 trace.go:81] Trace[691174527]: "Create /api/v1/namespaces/kube-system/events" (started: 2021-03-10 21:17:26.930863 +0000 UTC m=+142.932355901) (total time: 1.0559307s):
	* Trace[691174527]: [625.2816ms] [625.2816ms] About to convert to expected version
	* Trace[691174527]: [1.0559307s] [430.3291ms] END
	* E0310 21:17:28.002140       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
	* E0310 21:17:28.244715       1 controller.go:179] Get https://localhost:8443/api/v1/namespaces/default/endpoints/kubernetes: dial tcp 127.0.0.1:8443: connect: connection refused
	* 
	* ==> kube-controller-manager [65eb6e51b3bf] <==
	* I0310 21:16:51.159074       1 serving.go:319] Generated self-signed cert in-memory
	* I0310 21:17:03.635185       1 controllermanager.go:155] Version: v1.14.0
	* I0310 21:17:03.673687       1 secure_serving.go:116] Serving securely on 127.0.0.1:10257
	* I0310 21:17:03.683977       1 deprecated_insecure_serving.go:51] Serving insecurely on [::]:10252
	* F0310 21:17:15.083789       1 controllermanager.go:213] error building controller context: failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server ("[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed") has prevented the request from succeeding
	* 
	* ==> kube-scheduler [8543b072b7ef] <==
	* I0310 21:01:33.371320       1 trace.go:81] Trace[959528382]: "Reflector k8s.io/client-go/informers/factory.go:133 ListAndWatch" (started: 2021-03-10 21:01:23.2693815 +0000 UTC m=+220.745030501) (total time: 10.0905247s):
	* Trace[959528382]: [10.0905247s] [10.0905247s] END
	* E0310 21:01:33.371344       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: net/http: TLS handshake timeout
	* I0310 21:01:33.453019       1 trace.go:81] Trace[1995257584]: "Reflector k8s.io/client-go/informers/factory.go:133 ListAndWatch" (started: 2021-03-10 21:01:23.4417654 +0000 UTC m=+220.917414401) (total time: 10.0112104s):
	* Trace[1995257584]: [10.0112104s] [10.0112104s] END
	* E0310 21:01:33.453054       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: net/http: TLS handshake timeout
	* I0310 21:01:33.461459       1 trace.go:81] Trace[1520903024]: "Reflector k8s.io/client-go/informers/factory.go:133 ListAndWatch" (started: 2021-03-10 21:01:23.4003119 +0000 UTC m=+220.875960901) (total time: 10.0611178s):
	* Trace[1520903024]: [10.0611178s] [10.0611178s] END
	* E0310 21:01:33.461478       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: Get https://control-plane.minikube.internal:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: net/http: TLS handshake timeout
	* I0310 21:01:33.461700       1 trace.go:81] Trace[1242194376]: "Reflector k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223 ListAndWatch" (started: 2021-03-10 21:01:23.4138321 +0000 UTC m=+220.889481101) (total time: 10.0478446s):
	* Trace[1242194376]: [10.0478446s] [10.0478446s] END
	* E0310 21:01:33.461715       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: net/http: TLS handshake timeout
	* I0310 21:01:33.516709       1 trace.go:81] Trace[301242736]: "Reflector k8s.io/client-go/informers/factory.go:133 ListAndWatch" (started: 2021-03-10 21:01:23.5085938 +0000 UTC m=+220.984243001) (total time: 10.0080644s):
	* Trace[301242736]: [10.0080644s] [10.0080644s] END
	* E0310 21:01:33.516744       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: net/http: TLS handshake timeout
	* I0310 21:01:33.537781       1 trace.go:81] Trace[423923082]: "Reflector k8s.io/client-go/informers/factory.go:133 ListAndWatch" (started: 2021-03-10 21:01:23.5095769 +0000 UTC m=+220.985225901) (total time: 10.0281623s):
	* Trace[423923082]: [10.0281623s] [10.0281623s] END
	* E0310 21:01:33.537805       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: net/http: TLS handshake timeout
	* E0310 21:01:41.367888       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E0310 21:01:41.386600       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E0310 21:01:41.386708       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E0310 21:01:41.386859       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E0310 21:01:41.510873       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E0310 21:01:41.816972       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E0310 21:01:41.864817       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* 
	* ==> kube-scheduler [ead0693ca8e7] <==
	* E0310 21:18:35.516142       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: net/http: TLS handshake timeout
	* I0310 21:18:35.516320       1 trace.go:81] Trace[323849879]: "Reflector k8s.io/client-go/informers/factory.go:133 ListAndWatch" (started: 2021-03-10 21:18:25.2155472 +0000 UTC m=+635.789326401) (total time: 10.3007517s):
	* Trace[323849879]: [10.3007517s] [10.3007517s] END
	* E0310 21:18:35.516335       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: Get https://control-plane.minikube.internal:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: net/http: TLS handshake timeout
	* I0310 21:18:35.535485       1 trace.go:81] Trace[560558871]: "Reflector k8s.io/client-go/informers/factory.go:133 ListAndWatch" (started: 2021-03-10 21:18:25.4750996 +0000 UTC m=+636.048878901) (total time: 10.0603267s):
	* Trace[560558871]: [10.0603267s] [10.0603267s] END
	* E0310 21:18:35.535505       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: net/http: TLS handshake timeout
	* I0310 21:18:35.553480       1 trace.go:81] Trace[626042296]: "Reflector k8s.io/client-go/informers/factory.go:133 ListAndWatch" (started: 2021-03-10 21:18:25.5522512 +0000 UTC m=+636.126030401) (total time: 10.0011966s):
	* Trace[626042296]: [10.0011966s] [10.0011966s] END
	* E0310 21:18:35.553502       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: net/http: TLS handshake timeout
	* I0310 21:18:35.558014       1 trace.go:81] Trace[1643900448]: "Reflector k8s.io/client-go/informers/factory.go:133 ListAndWatch" (started: 2021-03-10 21:18:25.2185642 +0000 UTC m=+635.792343501) (total time: 10.3350748s):
	* Trace[1643900448]: [10.3350748s] [10.3350748s] END
	* E0310 21:18:35.558036       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: net/http: TLS handshake timeout
	* I0310 21:18:35.558414       1 trace.go:81] Trace[704768518]: "Reflector k8s.io/client-go/informers/factory.go:133 ListAndWatch" (started: 2021-03-10 21:18:25.5238065 +0000 UTC m=+636.097585701) (total time: 10.0345821s):
	* Trace[704768518]: [10.0345821s] [10.0345821s] END
	* E0310 21:18:35.558428       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: net/http: TLS handshake timeout
	* I0310 21:18:35.558640       1 trace.go:81] Trace[559522170]: "Reflector k8s.io/client-go/informers/factory.go:133 ListAndWatch" (started: 2021-03-10 21:18:25.5102528 +0000 UTC m=+636.084032001) (total time: 10.0483586s):
	* Trace[559522170]: [10.0483586s] [10.0483586s] END
	* E0310 21:18:35.558654       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: net/http: TLS handshake timeout
	* I0310 21:18:35.577469       1 trace.go:81] Trace[1627969384]: "Reflector k8s.io/client-go/informers/factory.go:133 ListAndWatch" (started: 2021-03-10 21:18:25.5711232 +0000 UTC m=+636.144902501) (total time: 10.0063099s):
	* Trace[1627969384]: [10.0063099s] [10.0063099s] END
	* E0310 21:18:35.577519       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: net/http: TLS handshake timeout
	* I0310 21:18:35.891799       1 trace.go:81] Trace[1286010412]: "Reflector k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223 ListAndWatch" (started: 2021-03-10 21:18:25.856948 +0000 UTC m=+636.430727201) (total time: 10.0348061s):
	* Trace[1286010412]: [10.0348061s] [10.0348061s] END
	* E0310 21:18:35.891993       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: net/http: TLS handshake timeout
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-03-10 21:02:58 UTC, end at Wed 2021-03-10 21:18:44 UTC. --
	* Mar 10 21:18:28 old-k8s-version-20210310204459-6496 kubelet[1266]: E0310 21:18:28.115460    1266 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "old-k8s-version-20210310204459-6496": Get https://control-plane.minikube.internal:8443/api/v1/nodes/old-k8s-version-20210310204459-6496?resourceVersion=0&timeout=10s: net/http: TLS handshake timeout
	* Mar 10 21:18:28 old-k8s-version-20210310204459-6496 kubelet[1266]: E0310 21:18:28.116107    1266 controller.go:115] failed to ensure node lease exists, will retry in 7s, error: Get https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/old-k8s-version-20210310204459-6496?timeout=10s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
	* Mar 10 21:18:29 old-k8s-version-20210310204459-6496 kubelet[1266]: I0310 21:18:29.011454    1266 trace.go:81] Trace[77182125]: "Reflector k8s.io/kubernetes/pkg/kubelet/kubelet.go:442 ListAndWatch" (started: 2021-03-10 21:18:14.1047808 +0000 UTC m=+687.409467301) (total time: 14.9066037s):
	* Mar 10 21:18:29 old-k8s-version-20210310204459-6496 kubelet[1266]: Trace[77182125]: [14.9066037s] [14.9066037s] END
	* Mar 10 21:18:29 old-k8s-version-20210310204459-6496 kubelet[1266]: E0310 21:18:29.171027    1266 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: net/http: TLS handshake timeout
	* Mar 10 21:18:32 old-k8s-version-20210310204459-6496 kubelet[1266]: W0310 21:18:32.456456    1266 status_manager.go:485] Failed to get status for pod "kube-apiserver-old-k8s-version-20210310204459-6496_kube-system(6594117763a723d3f0b9cba82c1aa6a7)": Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-old-k8s-version-20210310204459-6496: net/http: TLS handshake timeout
	* Mar 10 21:18:38 old-k8s-version-20210310204459-6496 kubelet[1266]: E0310 21:18:38.401566    1266 event.go:200] Unable to write event: 'Post https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events: net/http: TLS handshake timeout' (may retry after sleeping)
	* Mar 10 21:18:39 old-k8s-version-20210310204459-6496 kubelet[1266]: I0310 21:18:39.899689    1266 trace.go:81] Trace[2040830434]: "Reflector k8s.io/client-go/informers/factory.go:133 ListAndWatch" (started: 2021-03-10 21:18:27.821761 +0000 UTC m=+701.126447701) (total time: 12.0778585s):
	* Mar 10 21:18:39 old-k8s-version-20210310204459-6496 kubelet[1266]: Trace[2040830434]: [12.0778585s] [12.0778585s] END
	* Mar 10 21:18:39 old-k8s-version-20210310204459-6496 kubelet[1266]: E0310 21:18:39.922022    1266 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: Get https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: read tcp 172.17.0.3:54128->172.17.0.3:8443: use of closed network connection
	* Mar 10 21:18:40 old-k8s-version-20210310204459-6496 kubelet[1266]: I0310 21:18:40.593374    1266 trace.go:81] Trace[875939635]: "Reflector k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47 ListAndWatch" (started: 2021-03-10 21:18:29.26946 +0000 UTC m=+702.574146601) (total time: 11.3238527s):
	* Mar 10 21:18:40 old-k8s-version-20210310204459-6496 kubelet[1266]: Trace[875939635]: [11.3238527s] [11.3238527s] END
	* Mar 10 21:18:40 old-k8s-version-20210310204459-6496 kubelet[1266]: E0310 21:18:40.659964    1266 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dold-k8s-version-20210310204459-6496&limit=500&resourceVersion=0: read tcp 172.17.0.3:54212->172.17.0.3:8443: use of closed network connection
	* Mar 10 21:18:41 old-k8s-version-20210310204459-6496 kubelet[1266]: I0310 21:18:41.206352    1266 trace.go:81] Trace[1363237971]: "Reflector k8s.io/kubernetes/pkg/kubelet/kubelet.go:451 ListAndWatch" (started: 2021-03-10 21:18:27.7877446 +0000 UTC m=+701.092431101) (total time: 13.4216929s):
	* Mar 10 21:18:41 old-k8s-version-20210310204459-6496 kubelet[1266]: Trace[1363237971]: [13.4216929s] [13.4216929s] END
	* Mar 10 21:18:41 old-k8s-version-20210310204459-6496 kubelet[1266]: E0310 21:18:41.207321    1266 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dold-k8s-version-20210310204459-6496&limit=500&resourceVersion=0: net/http: TLS handshake timeout
	* Mar 10 21:18:41 old-k8s-version-20210310204459-6496 kubelet[1266]: I0310 21:18:41.207666    1266 trace.go:81] Trace[607655183]: "Reflector k8s.io/client-go/informers/factory.go:133 ListAndWatch" (started: 2021-03-10 21:18:28.5074656 +0000 UTC m=+701.812152201) (total time: 12.703307s):
	* Mar 10 21:18:41 old-k8s-version-20210310204459-6496 kubelet[1266]: Trace[607655183]: [12.703307s] [12.703307s] END
	* Mar 10 21:18:41 old-k8s-version-20210310204459-6496 kubelet[1266]: E0310 21:18:41.208075    1266 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: net/http: TLS handshake timeout
	* Mar 10 21:18:41 old-k8s-version-20210310204459-6496 kubelet[1266]: I0310 21:18:41.208402    1266 trace.go:81] Trace[2105461129]: "Reflector k8s.io/kubernetes/pkg/kubelet/kubelet.go:442 ListAndWatch" (started: 2021-03-10 21:18:30.8350438 +0000 UTC m=+704.139730401) (total time: 10.3764629s):
	* Mar 10 21:18:41 old-k8s-version-20210310204459-6496 kubelet[1266]: Trace[2105461129]: [10.3764629s] [10.3764629s] END
	* Mar 10 21:18:41 old-k8s-version-20210310204459-6496 kubelet[1266]: E0310 21:18:41.208628    1266 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: net/http: TLS handshake timeout
	* Mar 10 21:18:41 old-k8s-version-20210310204459-6496 kubelet[1266]: E0310 21:18:41.312221    1266 controller.go:115] failed to ensure node lease exists, will retry in 7s, error: Get https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/old-k8s-version-20210310204459-6496?timeout=10s: read tcp 172.17.0.3:54376->172.17.0.3:8443: use of closed network connection
	* Mar 10 21:18:41 old-k8s-version-20210310204459-6496 kubelet[1266]: W0310 21:18:41.312829    1266 status_manager.go:485] Failed to get status for pod "kube-controller-manager-old-k8s-version-20210310204459-6496_kube-system(3a9cb0607c644e32b5d6d0cd9bcdb263)": Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-old-k8s-version-20210310204459-6496: read tcp 172.17.0.3:54270->172.17.0.3:8443: use of closed network connection
	* Mar 10 21:18:41 old-k8s-version-20210310204459-6496 kubelet[1266]: E0310 21:18:41.313022    1266 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "old-k8s-version-20210310204459-6496": Get https://control-plane.minikube.internal:8443/api/v1/nodes/old-k8s-version-20210310204459-6496?timeout=10s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------|------------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| Command |                      Args                      |                    Profile                     |          User           | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------|------------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| -p      | nospam-20210310201637-6496                     | nospam-20210310201637-6496                     | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:41:42 GMT | Wed, 10 Mar 2021 20:44:25 GMT |
	|         | logs -n 25                                     |                                                |                         |         |                               |                               |
	| delete  | -p nospam-20210310201637-6496                  | nospam-20210310201637-6496                     | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:44:37 GMT | Wed, 10 Mar 2021 20:44:59 GMT |
	| -p      | docker-flags-20210310201637-6496               | docker-flags-20210310201637-6496               | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:47:18 GMT | Wed, 10 Mar 2021 20:49:03 GMT |
	|         | logs -n 25                                     |                                                |                         |         |                               |                               |
	| delete  | -p                                             | docker-flags-20210310201637-6496               | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:49:21 GMT | Wed, 10 Mar 2021 20:49:47 GMT |
	|         | docker-flags-20210310201637-6496               |                                                |                         |         |                               |                               |
	| delete  | -p                                             | force-systemd-env-20210310201637-6496          | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:49:41 GMT | Wed, 10 Mar 2021 20:50:17 GMT |
	|         | force-systemd-env-20210310201637-6496          |                                                |                         |         |                               |                               |
	| -p      | cert-options-20210310203249-6496               | cert-options-20210310203249-6496               | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:50:36 GMT | Wed, 10 Mar 2021 20:50:43 GMT |
	|         | ssh openssl x509 -text -noout -in              |                                                |                         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt          |                                                |                         |         |                               |                               |
	| delete  | -p                                             | cert-options-20210310203249-6496               | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:51:10 GMT | Wed, 10 Mar 2021 20:51:56 GMT |
	|         | cert-options-20210310203249-6496               |                                                |                         |         |                               |                               |
	| delete  | -p                                             | disable-driver-mounts-20210310205156-6496      | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:51:57 GMT | Wed, 10 Mar 2021 20:52:02 GMT |
	|         | disable-driver-mounts-20210310205156-6496      |                                                |                         |         |                               |                               |
	| -p      | force-systemd-flag-20210310203447-6496         | force-systemd-flag-20210310203447-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:53:03 GMT | Wed, 10 Mar 2021 20:53:44 GMT |
	|         | ssh docker info --format                       |                                                |                         |         |                               |                               |
	|         |                               |                                                |                         |         |                               |                               |
	| delete  | -p                                             | force-systemd-flag-20210310203447-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:54:07 GMT | Wed, 10 Mar 2021 20:54:36 GMT |
	|         | force-systemd-flag-20210310203447-6496         |                                                |                         |         |                               |                               |
	| stop    | -p                                             | old-k8s-version-20210310204459-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:02:19 GMT | Wed, 10 Mar 2021 21:02:40 GMT |
	|         | old-k8s-version-20210310204459-6496            |                                                |                         |         |                               |                               |
	|         | --alsologtostderr -v=3                         |                                                |                         |         |                               |                               |
	| addons  | enable dashboard -p                            | old-k8s-version-20210310204459-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:02:42 GMT | Wed, 10 Mar 2021 21:02:42 GMT |
	|         | old-k8s-version-20210310204459-6496            |                                                |                         |         |                               |                               |
	| -p      | embed-certs-20210310205017-6496                | embed-certs-20210310205017-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:07:05 GMT | Wed, 10 Mar 2021 21:08:33 GMT |
	|         | logs -n 25                                     |                                                |                         |         |                               |                               |
	| start   | -p                                             | stopped-upgrade-20210310201637-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:52:21 GMT | Wed, 10 Mar 2021 21:09:23 GMT |
	|         | stopped-upgrade-20210310201637-6496            |                                                |                         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                |                                                |                         |         |                               |                               |
	|         | -v=1 --driver=docker                           |                                                |                         |         |                               |                               |
	| logs    | -p                                             | stopped-upgrade-20210310201637-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:09:23 GMT | Wed, 10 Mar 2021 21:10:51 GMT |
	|         | stopped-upgrade-20210310201637-6496            |                                                |                         |         |                               |                               |
	| delete  | -p                                             | stopped-upgrade-20210310201637-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:10:52 GMT | Wed, 10 Mar 2021 21:11:13 GMT |
	|         | stopped-upgrade-20210310201637-6496            |                                                |                         |         |                               |                               |
	| delete  | -p                                             | running-upgrade-20210310201637-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:11:45 GMT | Wed, 10 Mar 2021 21:12:11 GMT |
	|         | running-upgrade-20210310201637-6496            |                                                |                         |         |                               |                               |
	| stop    | -p                                             | embed-certs-20210310205017-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:12:03 GMT | Wed, 10 Mar 2021 21:12:38 GMT |
	|         | embed-certs-20210310205017-6496                |                                                |                         |         |                               |                               |
	|         | --alsologtostderr -v=3                         |                                                |                         |         |                               |                               |
	| addons  | enable dashboard -p                            | embed-certs-20210310205017-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:12:40 GMT | Wed, 10 Mar 2021 21:12:41 GMT |
	|         | embed-certs-20210310205017-6496                |                                                |                         |         |                               |                               |
	| -p      | kubernetes-upgrade-20210310201637-6496         | kubernetes-upgrade-20210310201637-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:11:50 GMT | Wed, 10 Mar 2021 21:15:02 GMT |
	|         | logs -n 25                                     |                                                |                         |         |                               |                               |
	| delete  | -p                                             | kubernetes-upgrade-20210310201637-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:15:15 GMT | Wed, 10 Mar 2021 21:15:46 GMT |
	|         | kubernetes-upgrade-20210310201637-6496         |                                                |                         |         |                               |                               |
	| delete  | -p                                             | missing-upgrade-20210310201637-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:15:38 GMT | Wed, 10 Mar 2021 21:16:03 GMT |
	|         | missing-upgrade-20210310201637-6496            |                                                |                         |         |                               |                               |
	| -p      | default-k8s-different-port-20210310205202-6496 | default-k8s-different-port-20210310205202-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:12:03 GMT | Wed, 10 Mar 2021 21:16:15 GMT |
	|         | logs -n 25                                     |                                                |                         |         |                               |                               |
	| stop    | -p                                             | no-preload-20210310204947-6496                 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:15:57 GMT | Wed, 10 Mar 2021 21:16:31 GMT |
	|         | no-preload-20210310204947-6496                 |                                                |                         |         |                               |                               |
	|         | --alsologtostderr -v=3                         |                                                |                         |         |                               |                               |
	| addons  | enable dashboard -p                            | no-preload-20210310204947-6496                 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:16:33 GMT | Wed, 10 Mar 2021 21:16:34 GMT |
	|         | no-preload-20210310204947-6496                 |                                                |                         |         |                               |                               |
	|---------|------------------------------------------------|------------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/03/10 21:16:34
	* Running on machine: windows-server-1
	* Binary: Built with gc go1.16 for windows/amd64
	* Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	* I0310 21:16:34.629998    8732 out.go:239] Setting OutFile to fd 2796 ...
	* I0310 21:16:34.639079    8732 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 21:16:34.639079    8732 out.go:252] Setting ErrFile to fd 2716...
	* I0310 21:16:34.639079    8732 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 21:16:34.670474    8732 out.go:246] Setting JSON to false
	* I0310 21:16:34.673686    8732 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":36460,"bootTime":1615374534,"procs":116,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	* W0310 21:16:34.673686    8732 start.go:116] gopshost.Virtualization returned error: not implemented yet
	* I0310 21:16:34.684664    8732 out.go:129] * [no-preload-20210310204947-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	* I0310 21:16:31.431866    7648 ssh_runner.go:189] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.3931844s)
	* I0310 21:16:31.432531    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	* I0310 21:16:31.902562    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1253 bytes)
	* I0310 21:16:32.483903    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	* I0310 21:16:32.991193    7648 provision.go:86] duration metric: configureAuth took 3.8876411s
	* I0310 21:16:33.000139    7648 ubuntu.go:193] setting minikube options for container-runtime
	* I0310 21:16:33.014798    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:16:33.697895    7648 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:16:33.698382    7648 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55188 <nil> <nil>}
	* I0310 21:16:33.698956    7648 main.go:121] libmachine: About to run SSH command:
	* df --output=fstype / | tail -n 1
	* I0310 21:16:34.769570    7648 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	* 
	* I0310 21:16:34.769736    7648 ubuntu.go:71] root file system type: overlay
	* I0310 21:16:34.770328    7648 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	* I0310 21:16:34.790353    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:16:35.527457    7648 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:16:35.529070    7648 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55188 <nil> <nil>}
	* I0310 21:16:35.529271    7648 main.go:121] libmachine: About to run SSH command:
	* sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP \$MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* " | sudo tee /lib/systemd/system/docker.service.new
	* I0310 21:16:34.688713    8732 out.go:129]   - MINIKUBE_LOCATION=10722
	* I0310 21:16:34.691736    8732 driver.go:323] Setting default libvirt URI to qemu:///system
	* I0310 21:16:35.335343    8732 docker.go:119] docker version: linux-20.10.2
	* I0310 21:16:35.342766    8732 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 21:16:36.447271    8732 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.1045069s)
	* I0310 21:16:36.449233    8732 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:9 ContainersRunning:8 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:96 OomKillDisable:true NGoroutines:73 SystemTime:2021-03-10 21:16:35.9512391 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 21:16:31.921183   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1156.pem
	* I0310 21:16:31.959516   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 00:26 /usr/share/ca-certificates/1156.pem
	* I0310 21:16:31.971027   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1156.pem
	* I0310 21:16:32.024377   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1156.pem /etc/ssl/certs/51391683.0"
	* I0310 21:16:32.136900   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3920.pem && ln -fs /usr/share/ca-certificates/3920.pem /etc/ssl/certs/3920.pem"
	* I0310 21:16:32.281294   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3920.pem
	* I0310 21:16:32.336267   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 22:06 /usr/share/ca-certificates/3920.pem
	* I0310 21:16:32.352879   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3920.pem
	* I0310 21:16:32.446280   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3920.pem /etc/ssl/certs/51391683.0"
	* I0310 21:16:32.558295   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5040.pem && ln -fs /usr/share/ca-certificates/5040.pem /etc/ssl/certs/5040.pem"
	* I0310 21:16:32.792738   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5040.pem
	* I0310 21:16:32.836791   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 08:36 /usr/share/ca-certificates/5040.pem
	* I0310 21:16:32.858509   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5040.pem
	* I0310 21:16:32.952556   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5040.pem /etc/ssl/certs/51391683.0"
	* I0310 21:16:33.186783   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8464.pem && ln -fs /usr/share/ca-certificates/8464.pem /etc/ssl/certs/8464.pem"
	* I0310 21:16:33.346011   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8464.pem
	* I0310 21:16:33.397559   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:32 /usr/share/ca-certificates/8464.pem
	* I0310 21:16:33.416230   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8464.pem
	* I0310 21:16:33.487744   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8464.pem /etc/ssl/certs/51391683.0"
	* I0310 21:16:33.617189   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2512.pem && ln -fs /usr/share/ca-certificates/2512.pem /etc/ssl/certs/2512.pem"
	* I0310 21:16:33.830275   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2512.pem
	* I0310 21:16:33.935207   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:32 /usr/share/ca-certificates/2512.pem
	* I0310 21:16:33.944448   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2512.pem
	* I0310 21:16:34.041522   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2512.pem /etc/ssl/certs/51391683.0"
	* I0310 21:16:34.233811   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6496.pem && ln -fs /usr/share/ca-certificates/6496.pem /etc/ssl/certs/6496.pem"
	* I0310 21:16:34.372370   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6496.pem
	* I0310 21:16:34.509401   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 19:16 /usr/share/ca-certificates/6496.pem
	* I0310 21:16:34.518413   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6496.pem
	* I0310 21:16:34.702785   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6496.pem /etc/ssl/certs/51391683.0"
	* I0310 21:16:34.799350   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/352.pem && ln -fs /usr/share/ca-certificates/352.pem /etc/ssl/certs/352.pem"
	* I0310 21:16:34.926033   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/352.pem
	* I0310 21:16:34.966995   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 12 14:51 /usr/share/ca-certificates/352.pem
	* I0310 21:16:34.966995   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/352.pem
	* I0310 21:16:35.120646   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/352.pem /etc/ssl/certs/51391683.0"
	* I0310 21:16:35.249798   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5396.pem && ln -fs /usr/share/ca-certificates/5396.pem /etc/ssl/certs/5396.pem"
	* I0310 21:16:35.439454   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5396.pem
	* I0310 21:16:35.491670   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  8 23:38 /usr/share/ca-certificates/5396.pem
	* I0310 21:16:35.511821   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5396.pem
	* I0310 21:16:35.609657   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5396.pem /etc/ssl/certs/51391683.0"
	* I0310 21:16:35.765770   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6692.pem && ln -fs /usr/share/ca-certificates/6692.pem /etc/ssl/certs/6692.pem"
	* I0310 21:16:35.933136   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6692.pem
	* I0310 21:16:36.038232   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 20:42 /usr/share/ca-certificates/6692.pem
	* I0310 21:16:36.049260   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6692.pem
	* I0310 21:16:36.111357   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6692.pem /etc/ssl/certs/51391683.0"
	* I0310 21:16:36.202159   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6492.pem && ln -fs /usr/share/ca-certificates/6492.pem /etc/ssl/certs/6492.pem"
	* I0310 21:16:36.321397   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6492.pem
	* I0310 21:16:36.435742   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 01:11 /usr/share/ca-certificates/6492.pem
	* I0310 21:16:36.447025   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6492.pem
	* I0310 21:16:36.526293   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6492.pem /etc/ssl/certs/51391683.0"
	* I0310 21:16:36.641234   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7160.pem && ln -fs /usr/share/ca-certificates/7160.pem /etc/ssl/certs/7160.pem"
	* I0310 21:16:36.760565   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7160.pem
	* I0310 21:16:36.803836   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 12 04:51 /usr/share/ca-certificates/7160.pem
	* I0310 21:16:36.810661   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7160.pem
	* I0310 21:16:36.454277    8732 out.go:129] * Using the docker driver based on existing profile
	* I0310 21:16:36.454693    8732 start.go:276] selected driver: docker
	* I0310 21:16:36.455016    8732 start.go:718] validating driver "docker" against &{Name:no-preload-20210310204947-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.5-rc.0 ClusterName:no-preload-20210310204947-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.7 Port:8443 KubernetesVersion:v1.20.5-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 21:16:36.455554    8732 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	* I0310 21:16:37.582610    8732 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 21:16:38.599163    8732 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0164175s)
	* I0310 21:16:38.599481    8732 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:9 ContainersRunning:8 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:96 OomKillDisable:true NGoroutines:73 SystemTime:2021-03-10 21:16:38.1503189 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://i
ndex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:
[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 21:16:38.599696    8732 start_flags.go:717] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	* I0310 21:16:38.600314    8732 start_flags.go:398] config:
	* {Name:no-preload-20210310204947-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.5-rc.0 ClusterName:no-preload-20210310204947-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.7 Port:8443 KubernetesVersion:v1.20.5-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 21:16:33.812041   16712 machine.go:88] provisioning docker machine ...
	* I0310 21:16:33.812277   16712 ubuntu.go:169] provisioning hostname "calico-20210310211603-6496"
	* I0310 21:16:33.833066   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	* I0310 21:16:34.524427   16712 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:16:34.536228   16712 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55193 <nil> <nil>}
	* I0310 21:16:34.536228   16712 main.go:121] libmachine: About to run SSH command:
	* sudo hostname calico-20210310211603-6496 && echo "calico-20210310211603-6496" | sudo tee /etc/hostname
	* I0310 21:16:34.556797   16712 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	* I0310 21:16:38.610707    8732 out.go:129] * Starting control plane node no-preload-20210310204947-6496 in cluster no-preload-20210310204947-6496
	* I0310 21:16:39.326361    8732 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	* I0310 21:16:39.326361    8732 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	* I0310 21:16:39.326624    8732 preload.go:97] Checking if preload exists for k8s version v1.20.5-rc.0 and runtime docker
	* I0310 21:16:39.327083    8732 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper:v1.0.4 -> C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.4
	* I0310 21:16:39.327083    8732 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\config.json ...
	* I0310 21:16:39.327358    8732 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\coredns:1.7.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\coredns_1.7.0
	* I0310 21:16:39.327960    8732 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\pause:3.2 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\pause_3.2
	* I0310 21:16:39.328552    8732 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner:v4 -> C:\Users\jenkins\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v4
	* I0310 21:16:39.328972    8732 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-scheduler:v1.20.5-rc.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.20.5-rc.0
	* I0310 21:16:39.328972    8732 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-controller-manager:v1.20.5-rc.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.20.5-rc.0
	* I0310 21:16:39.328972    8732 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\dashboard:v2.1.0 -> C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.1.0
	* I0310 21:16:39.328972    8732 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\etcd:3.4.13-0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\etcd_3.4.13-0
	* I0310 21:16:39.329289    8732 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-apiserver:v1.20.5-rc.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.20.5-rc.0
	* I0310 21:16:39.329426    8732 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-proxy:v1.20.5-rc.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.20.5-rc.0
	* I0310 21:16:39.359546    8732 cache.go:185] Successfully downloaded all kic artifacts
	* I0310 21:16:39.359546    8732 start.go:313] acquiring machines lock for no-preload-20210310204947-6496: {Name:mk5ccb5ca2d8ac74aacc5a5439e34ebf8c484f4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:16:39.360549    8732 start.go:317] acquired machines lock for "no-preload-20210310204947-6496" in 1.0034ms
	* I0310 21:16:39.361059    8732 start.go:93] Skipping create...Using existing machine configuration
	* I0310 21:16:39.361059    8732 fix.go:55] fixHost starting: 
	* I0310 21:16:39.559145    8732 cli_runner.go:115] Run: docker container inspect no-preload-20210310204947-6496 --format=
	* I0310 21:16:39.777868    8732 cache.go:93] acquiring lock: {Name:mk4f17964ab104a7a51fdfe4d0d8adcb99a8f701 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:16:39.779091    8732 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.20.5-rc.0 exists
	* I0310 21:16:39.779459    8732 cache.go:82] cache image "k8s.gcr.io/kube-proxy:v1.20.5-rc.0" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\k8s.gcr.io\\kube-proxy_v1.20.5-rc.0" took 450.0335ms
	* I0310 21:16:39.779658    8732 cache.go:66] save to tar file k8s.gcr.io/kube-proxy:v1.20.5-rc.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.20.5-rc.0 succeeded
	* I0310 21:16:39.792243    8732 cache.go:93] acquiring lock: {Name:mk808ab2b8e2f585b88e9b77052dedca3569e605 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:16:39.792918    8732 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\k8s.gcr.io\coredns_1.7.0 exists
	* I0310 21:16:39.792918    8732 cache.go:82] cache image "k8s.gcr.io/coredns:1.7.0" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\k8s.gcr.io\\coredns_1.7.0" took 465.5608ms
	* I0310 21:16:39.792918    8732 cache.go:66] save to tar file k8s.gcr.io/coredns:1.7.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\coredns_1.7.0 succeeded
	* I0310 21:16:39.806022    8732 cache.go:93] acquiring lock: {Name:mk1bbd52b1d425b987a80d1b42ea65a1daa62351 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:16:39.807260    8732 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\k8s.gcr.io\pause_3.2 exists
	* I0310 21:16:39.807585    8732 cache.go:93] acquiring lock: {Name:mk1cd59bbb5d30900e0d5b8983f100ccfb4e941e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:16:39.807768    8732 cache.go:82] cache image "k8s.gcr.io/pause:3.2" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\k8s.gcr.io\\pause_3.2" took 479.6256ms
	* I0310 21:16:39.807768    8732 cache.go:66] save to tar file k8s.gcr.io/pause:3.2 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\pause_3.2 succeeded
	* I0310 21:16:39.808108    8732 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.20.5-rc.0 exists
	* I0310 21:16:39.808287    8732 cache.go:82] cache image "k8s.gcr.io/kube-apiserver:v1.20.5-rc.0" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\k8s.gcr.io\\kube-apiserver_v1.20.5-rc.0" took 478.8622ms
	* I0310 21:16:39.809506    8732 cache.go:66] save to tar file k8s.gcr.io/kube-apiserver:v1.20.5-rc.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.20.5-rc.0 succeeded
	* I0310 21:16:39.843496    8732 cache.go:93] acquiring lock: {Name:mk7dad12c4700ffd6e4a91c1377bd452302d3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:16:39.843782    8732 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.20.5-rc.0 exists
	* I0310 21:16:39.844642    8732 cache.go:82] cache image "k8s.gcr.io/kube-controller-manager:v1.20.5-rc.0" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\k8s.gcr.io\\kube-controller-manager_v1.20.5-rc.0" took 515.6705ms
	* I0310 21:16:39.844642    8732 cache.go:66] save to tar file k8s.gcr.io/kube-controller-manager:v1.20.5-rc.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.20.5-rc.0 succeeded
	* I0310 21:16:39.846112    8732 cache.go:93] acquiring lock: {Name:mk95277aa1d8baa6ce693324ce93a259561b3b0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:16:39.846112    8732 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.4 exists
	* I0310 21:16:39.847112    8732 cache.go:82] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\docker.io\\kubernetesui\\metrics-scraper_v1.0.4" took 520.0303ms
	* I0310 21:16:39.847112    8732 cache.go:66] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.4 succeeded
	* I0310 21:16:39.866856    8732 cache.go:93] acquiring lock: {Name:mkf95068147fb9802daffb44f03793cdfc94af80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:16:39.867306    8732 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v4 exists
	* I0310 21:16:39.868042    8732 cache.go:82] cache image "gcr.io/k8s-minikube/storage-provisioner:v4" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\gcr.io\\k8s-minikube\\storage-provisioner_v4" took 539.4903ms
	* I0310 21:16:39.868192    8732 cache.go:66] save to tar file gcr.io/k8s-minikube/storage-provisioner:v4 -> C:\Users\jenkins\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v4 succeeded
	* I0310 21:16:39.868192    8732 cache.go:93] acquiring lock: {Name:mk1b99eb2e55fdc5ddc042a4b3db75d12b25fe0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:16:39.868803    8732 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.20.5-rc.0 exists
	* I0310 21:16:39.868979    8732 cache.go:93] acquiring lock: {Name:mk7d69590a92a29aed7b81b57dbd7aa08bae9b7e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:16:39.868979    8732 cache.go:82] cache image "k8s.gcr.io/kube-scheduler:v1.20.5-rc.0" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\k8s.gcr.io\\kube-scheduler_v1.20.5-rc.0" took 540.0075ms
	* I0310 21:16:39.868979    8732 cache.go:66] save to tar file k8s.gcr.io/kube-scheduler:v1.20.5-rc.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.20.5-rc.0 succeeded
	* I0310 21:16:39.868979    8732 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\k8s.gcr.io\etcd_3.4.13-0 exists
	* I0310 21:16:39.868979    8732 cache.go:82] cache image "k8s.gcr.io/etcd:3.4.13-0" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\k8s.gcr.io\\etcd_3.4.13-0" took 539.5537ms
	* I0310 21:16:39.868979    8732 cache.go:66] save to tar file k8s.gcr.io/etcd:3.4.13-0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\etcd_3.4.13-0 succeeded
	* I0310 21:16:39.871389    8732 cache.go:93] acquiring lock: {Name:mk33908c5692f6fbcea93524c073786bb1491be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:16:39.871967    8732 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.1.0 exists
	* I0310 21:16:39.872140    8732 cache.go:82] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\docker.io\\kubernetesui\\dashboard_v2.1.0" took 542.851ms
	* I0310 21:16:39.872441    8732 cache.go:66] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.1.0 succeeded
	* I0310 21:16:39.872571    8732 cache.go:73] Successfully saved all images to host disk.
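The `cache.go` sequence above repeats one pattern per image: acquire a named lock, check whether the sanitized tar path already exists, and only save when it does not (the `localpath.go` lines earlier show `:` in image tags being rewritten to `_` for the on-disk name). A minimal sketch of that idiom, using `flock` and a temp directory in place of minikube's real lock names and `.minikube\cache` layout (both illustrative here):

```shell
# Sketch of the per-image "lock, check exists, save to tar" cache idiom.
# Paths and the lock mechanism are illustrative; minikube uses its own
# named in-process locks and C:\Users\...\.minikube\cache on Windows.
cache=$(mktemp -d)
img="k8s.gcr.io/pause:3.2"
# Sanitize the tag the way the localpath.go lines show: ':' -> '_'
tar="$cache/$(echo "$img" | tr '/:' '__').tar"
(
  flock 9   # one lock per image, like the mk... locks in the log
  if [ -s "$tar" ]; then
    echo "cache hit: $tar exists"
  else
    echo placeholder > "$tar"   # stand-in for the real docker save
    echo "saved $img to tar file"
  fi
) 9>"$tar.lock"
```

On a second run against the same cache directory the `exists` branch fires instead, which is exactly why the log reports sub-second "took ...ms" timings for every image: all nine were cache hits.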
	* I0310 21:16:40.303920    8732 fix.go:108] recreateIfNeeded on no-preload-20210310204947-6496: state=Stopped err=<nil>
	* W0310 21:16:40.304496    8732 fix.go:134] unexpected machine state, will restart: <nil>
	* I0310 21:16:36.707506    7648 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP $MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* 
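The unit file above is a drop-in that first clears the inherited `ExecStart=` (the empty `ExecStart=` line) before setting its own, and the very next SSH command in the log installs it with a compare-then-replace: `diff -u old new || { mv; daemon-reload; restart; }`, so the daemon is only restarted when the rendered unit actually changed. A small sketch of that idiom with throwaway temp files (the real paths are under `/lib/systemd/system/`):

```shell
# Compare-then-replace: only swap the unit in (and signal a reload)
# when the newly rendered file differs. Paths here are illustrative.
old=$(mktemp); new=$(mktemp)
printf 'ExecStart=/usr/bin/dockerd\n' > "$old"
printf 'ExecStart=\nExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock\n' > "$new"
if ! diff -u "$old" "$new" >/dev/null; then
  mv "$new" "$old"
  echo "unit updated; daemon-reload + restart would run here"
fi
# The installed file carries both lines: the reset, then the real command.
grep -c '^ExecStart=' "$old"
```

Without the leading empty `ExecStart=`, systemd would reject the merged configuration with the "more than one ExecStart= setting" error the unit's own comments quote.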
	* I0310 21:16:36.721035    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:16:37.369940    7648 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:16:37.370362    7648 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55188 <nil> <nil>}
	* I0310 21:16:37.370581    7648 main.go:121] libmachine: About to run SSH command:
	* sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	* I0310 21:16:36.894168   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7160.pem /etc/ssl/certs/51391683.0"
	* I0310 21:16:36.975012   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5372.pem && ln -fs /usr/share/ca-certificates/5372.pem /etc/ssl/certs/5372.pem"
	* I0310 21:16:37.145547   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5372.pem
	* I0310 21:16:37.232541   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 23 00:40 /usr/share/ca-certificates/5372.pem
	* I0310 21:16:37.256191   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5372.pem
	* I0310 21:16:37.378101   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5372.pem /etc/ssl/certs/51391683.0"
	* I0310 21:16:37.467169   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/232.pem && ln -fs /usr/share/ca-certificates/232.pem /etc/ssl/certs/232.pem"
	* I0310 21:16:37.533453   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/232.pem
	* I0310 21:16:37.656621   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 28 02:13 /usr/share/ca-certificates/232.pem
	* I0310 21:16:37.667571   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/232.pem
	* I0310 21:16:37.779938   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/232.pem /etc/ssl/certs/51391683.0"
	* I0310 21:16:37.933898   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4052.pem && ln -fs /usr/share/ca-certificates/4052.pem /etc/ssl/certs/4052.pem"
	* I0310 21:16:38.696320   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4052.pem
	* I0310 21:16:38.736172   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 18:40 /usr/share/ca-certificates/4052.pem
	* I0310 21:16:38.757427   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4052.pem
	* I0310 21:16:38.827599   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4052.pem /etc/ssl/certs/51391683.0"
	* I0310 21:16:38.987670   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	* I0310 21:16:39.135835   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	* I0310 21:16:39.180960   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Jan  5 23:33 /usr/share/ca-certificates/minikubeCA.pem
	* I0310 21:16:39.192268   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	* I0310 21:16:39.231265   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	* I0310 21:16:39.505315   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10992.pem && ln -fs /usr/share/ca-certificates/10992.pem /etc/ssl/certs/10992.pem"
	* I0310 21:16:39.730955   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/10992.pem
	* I0310 21:16:39.790291   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 21:44 /usr/share/ca-certificates/10992.pem
	* I0310 21:16:39.864490   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10992.pem
	* I0310 21:16:39.937802   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10992.pem /etc/ssl/certs/51391683.0"
	* I0310 21:16:40.039568   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1140.pem && ln -fs /usr/share/ca-certificates/1140.pem /etc/ssl/certs/1140.pem"
	* I0310 21:16:40.151587   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1140.pem
	* I0310 21:16:40.196616   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 02:25 /usr/share/ca-certificates/1140.pem
	* I0310 21:16:40.224510   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1140.pem
	* I0310 21:16:40.281488   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1140.pem /etc/ssl/certs/51391683.0"
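The long run of `certs.go` lines above is one loop repeated per CA file: compute the OpenSSL subject-hash of the certificate, then symlink it into `/etc/ssl/certs/<hash>.0` so TLS libraries can find it by hash lookup (note several different `.pem` files map to the same `51391683.0` link, so the last one written wins). A self-contained sketch, generating a throwaway self-signed cert in a temp directory rather than touching the real `/etc/ssl/certs`:

```shell
# Hash-symlink idiom for the OpenSSL cert directory, in a temp dir.
# The CA generated here is a throwaway stand-in for minikubeCA.pem.
certs=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$certs/ca.key" -out "$certs/minikubeCA.pem" \
  -subj "/CN=minikubeCA" 2>/dev/null
h=$(openssl x509 -hash -noout -in "$certs/minikubeCA.pem")
# Same shape as the log's: test -L <hash>.0 || ln -fs <cert> <hash>.0
ln -fs "$certs/minikubeCA.pem" "$certs/$h.0"
ls -l "$certs/$h.0"
```

The hash is eight hex digits derived from the certificate subject, which is why `minikubeCA.pem` consistently lands at `b5213941.0` in the log while the user `.pem` files (identical subjects) all collide on `51391683.0`.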
	* I0310 21:16:40.379387   22316 kubeadm.go:385] StartCluster: {Name:false-20210310211211-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:false-20210310211211-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNS
Domain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.8 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 21:16:40.399115   22316 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format=
	* I0310 21:16:41.208974   22316 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	* I0310 21:16:41.314735   22316 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	* I0310 21:16:41.445294   22316 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	* I0310 21:16:41.456004   22316 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	* I0310 21:16:41.543999   22316 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	* stdout:
	* 
	* stderr:
	* ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	* I0310 21:16:41.544273   22316 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
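Before running `kubeadm init`, the log above probes for existing control-plane kubeconfigs with a single `ls` of all four files; `ls` exits 2 when any are missing, which minikube interprets as "no stale config to clean up" and proceeds straight to init with the preflight errors ignored. A sketch of that probe against an empty temp directory (paths are illustrative stand-ins for `/etc/kubernetes`):

```shell
# Stale-config probe: one ls over every expected file; any miss
# (exit status 2) means "skip cleanup, init from scratch".
kdir=$(mktemp -d)   # stand-in for /etc/kubernetes
if ls "$kdir/admin.conf" "$kdir/kubelet.conf" \
      "$kdir/controller-manager.conf" "$kdir/scheduler.conf" >/dev/null 2>&1; then
  echo "stale configs present: cleaning up before init"
else
  echo "config check failed, skipping stale config cleanup"
fi
```

Using one `ls` over all four paths makes the check all-or-nothing: a partially written `/etc/kubernetes` is treated the same as an empty one, matching the four "No such file or directory" stderr lines captured above.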
	* I0310 21:16:40.511063   16712 main.go:121] libmachine: SSH cmd err, output: <nil>: calico-20210310211603-6496
	* 
	* I0310 21:16:40.519859   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	* I0310 21:16:41.141051   16712 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:16:41.141051   16712 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55193 <nil> <nil>}
	* I0310 21:16:41.141051   16712 main.go:121] libmachine: About to run SSH command:
	* 
	* 		if ! grep -xq '.*\scalico-20210310211603-6496' /etc/hosts; then
	* 			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	* 				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20210310211603-6496/g' /etc/hosts;
	* 			else 
	* 				echo '127.0.1.1 calico-20210310211603-6496' | sudo tee -a /etc/hosts; 
	* 			fi
	* 		fi
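The SSH command above is minikube's hostname-provisioning script: if `/etc/hosts` has no entry for the new hostname, it either rewrites an existing `127.0.1.1` line in place with `sed` or appends a fresh one. The same logic against a temp file instead of the real `/etc/hosts` (`[[:space:]]` is used here in place of GNU's `\s` for portability):

```shell
# /etc/hosts hostname rewrite, run against a temp file stand-in.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 oldname\n' > "$hosts"
name=calico-20210310211603-6496
if ! grep -q "[[:space:]]$name\$" "$hosts"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$hosts"; then
    # Replace the existing 127.0.1.1 line with the new hostname
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $name/" "$hosts"
  else
    echo "127.0.1.1 $name" >> "$hosts"
  fi
fi
cat "$hosts"
```

Rewriting the `127.0.1.1` line rather than appending keeps exactly one local-hostname entry, so `sudo hostname` (set by the earlier SSH command in the log) and `/etc/hosts` stay consistent.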
	* I0310 21:16:42.127463   16712 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	* I0310 21:16:42.127463   16712 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	* I0310 21:16:42.127463   16712 ubuntu.go:177] setting up certificates
	* I0310 21:16:42.127463   16712 provision.go:83] configureAuth start
	* I0310 21:16:42.136030   16712 cli_runner.go:115] Run: docker container inspect -f "" calico-20210310211603-6496
	* I0310 21:16:42.743751   16712 provision.go:137] copyHostCerts
	* I0310 21:16:42.744833   16712 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	* I0310 21:16:42.744833   16712 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	* I0310 21:16:42.745207   16712 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	* I0310 21:16:42.752395   16712 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	* I0310 21:16:42.752395   16712 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	* I0310 21:16:42.752699   16712 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	* I0310 21:16:42.756073   16712 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	* I0310 21:16:42.756073   16712 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	* I0310 21:16:42.756927   16712 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	* I0310 21:16:42.759673   16712 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.calico-20210310211603-6496 san=[172.17.0.6 127.0.0.1 localhost 127.0.0.1 minikube calico-20210310211603-6496]
	* I0310 21:16:42.916138   16712 provision.go:165] copyRemoteCerts
	* I0310 21:16:42.926125   16712 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	* I0310 21:16:42.932919   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	* I0310 21:16:43.596240   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	* I0310 21:16:39.707897   12868 out.go:150]   - Generating certificates and keys ...
	* I0310 21:16:40.314753    8732 out.go:129] * Restarting existing docker container for "no-preload-20210310204947-6496" ...
	* I0310 21:16:40.334499    8732 cli_runner.go:115] Run: docker start no-preload-20210310204947-6496
	* I0310 21:16:43.717005    8732 cli_runner.go:168] Completed: docker start no-preload-20210310204947-6496: (3.3825109s)
	* I0310 21:16:43.725593    8732 cli_runner.go:115] Run: docker container inspect no-preload-20210310204947-6496 --format=
	* I0310 21:16:44.332193   16712 ssh_runner.go:189] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.4060694s)
	* I0310 21:16:44.332848   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	* I0310 21:16:44.832885   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1253 bytes)
	* I0310 21:16:45.133396   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	* I0310 21:16:45.671985   16712 provision.go:86] duration metric: configureAuth took 3.5445277s
	* I0310 21:16:45.671985   16712 ubuntu.go:193] setting minikube options for container-runtime
	* I0310 21:16:45.688401   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	* I0310 21:16:46.317701   16712 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:16:46.318821   16712 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55193 <nil> <nil>}
	* I0310 21:16:46.318821   16712 main.go:121] libmachine: About to run SSH command:
	* df --output=fstype / | tail -n 1
	* I0310 21:16:46.982541   16712 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	* 
	* I0310 21:16:46.982541   16712 ubuntu.go:71] root file system type: overlay
	* I0310 21:16:46.982541   16712 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	* I0310 21:16:46.993797   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	* I0310 21:16:47.671142   16712 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:16:47.671142   16712 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55193 <nil> <nil>}
	* I0310 21:16:47.671142   16712 main.go:121] libmachine: About to run SSH command:
	* sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP \$MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* " | sudo tee /lib/systemd/system/docker.service.new
	* I0310 21:16:48.530303   16712 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP $MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* 
	* I0310 21:16:48.550988   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	* I0310 21:16:44.337922    8732 kic.go:410] container "no-preload-20210310204947-6496" state is running.
	* I0310 21:16:44.350433    8732 cli_runner.go:115] Run: docker container inspect -f "" no-preload-20210310204947-6496
	* I0310 21:16:45.028858    8732 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\config.json ...
	* I0310 21:16:45.033643    8732 machine.go:88] provisioning docker machine ...
	* I0310 21:16:45.034260    8732 ubuntu.go:169] provisioning hostname "no-preload-20210310204947-6496"
	* I0310 21:16:45.044892    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	* I0310 21:16:45.693970    8732 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:16:45.694402    8732 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55198 <nil> <nil>}
	* I0310 21:16:45.694402    8732 main.go:121] libmachine: About to run SSH command:
	* sudo hostname no-preload-20210310204947-6496 && echo "no-preload-20210310204947-6496" | sudo tee /etc/hostname
	* I0310 21:16:45.705001    8732 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	* I0310 21:16:48.727564    8732 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	* I0310 21:16:50.494137   18444 kubeadm.go:704] kubelet initialised
	* I0310 21:16:50.494473   18444 kubeadm.go:705] duration metric: took 1m4.8366926s waiting for restarted kubelet to initialise ...
	* I0310 21:16:50.494680   18444 pod_ready.go:36] extra waiting for kube-system core pods [kube-dns etcd kube-apiserver kube-controller-manager kube-proxy kube-scheduler] to be Ready ...
	* I0310 21:16:50.494991   18444 pod_ready.go:59] waiting 4m0s for pod with "kube-dns" label in "kube-system" namespace to be Ready ...
	* I0310 21:16:50.500657   18444 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:16:51.011622   18444 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:16:51.511319   18444 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:16:49.187765   16712 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:16:49.188999   16712 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55193 <nil> <nil>}
	* I0310 21:16:49.188999   16712 main.go:121] libmachine: About to run SSH command:
	* sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
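The one-liner above uses a diff-then-swap pattern: the new unit is written to `docker.service.new`, and only if it differs from the installed unit is it moved into place and the daemon reloaded, so a re-run on an unchanged machine is a no-op. A minimal sketch of that pattern, run against scratch files rather than the real systemd unit (file contents here are illustrative, not minikube's actual unit):

```shell
set -eu
dir=$(mktemp -d)
# Pretend these are the installed unit and the freshly generated one.
printf 'ExecStart=/usr/bin/dockerd -H fd://\n' > "$dir/docker.service"
printf 'ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376\n' > "$dir/docker.service.new"

# diff exits non-zero when the files differ; only then swap in the new
# unit (the real command follows this with daemon-reload + restart).
if ! diff -u "$dir/docker.service" "$dir/docker.service.new" >/dev/null; then
  mv "$dir/docker.service.new" "$dir/docker.service"
fi
grep -q 'tcp://0.0.0.0:2376' "$dir/docker.service" && echo "unit updated"
```

The `||` form in the log is equivalent: the restart branch runs only when `diff` reports a difference, which is why the later diff output in this log appears exactly once per provisioned machine.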
	* I0310 21:16:52.806209    8732 main.go:121] libmachine: SSH cmd err, output: <nil>: no-preload-20210310204947-6496
	* 
	* I0310 21:16:52.814403    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	* I0310 21:16:53.441942    8732 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:16:53.442200    8732 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55198 <nil> <nil>}
	* I0310 21:16:53.442200    8732 main.go:121] libmachine: About to run SSH command:
	* 
	* 		if ! grep -xq '.*\sno-preload-20210310204947-6496' /etc/hosts; then
	* 			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	* 				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20210310204947-6496/g' /etc/hosts;
	* 			else 
	* 				echo '127.0.1.1 no-preload-20210310204947-6496' | sudo tee -a /etc/hosts; 
	* 			fi
	* 		fi
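The hostname block above is also idempotent: it adds or rewrites the `127.0.1.1` entry only when the machine name is not already present. A sketch of the same logic against a temporary copy of `/etc/hosts` (initial file contents are illustrative; `sed \s` assumes GNU sed, as on the Ubuntu guest):

```shell
set -eu
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 oldname\n' > "$hosts"
name=no-preload-20210310204947-6496

# Skip entirely if the name is already mapped; otherwise rewrite the
# existing 127.0.1.1 line, or append one if none exists.
if ! grep -q "\s$name\$" "$hosts"; then
  if grep -q '^127\.0\.1\.1\s' "$hosts"; then
    sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 $name/" "$hosts"
  else
    echo "127.0.1.1 $name" >> "$hosts"
  fi
fi
grep '^127\.0\.1\.1' "$hosts"
```

Using `127.0.1.1` rather than `127.0.0.1` matches the Debian/Ubuntu convention for mapping the machine's own hostname without disturbing the `localhost` entry.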
	* I0310 21:16:51.503309    7648 main.go:121] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-01-29 14:31:32.000000000 +0000
	* +++ /lib/systemd/system/docker.service.new	2021-03-10 21:16:36.682285000 +0000
	* @@ -1,30 +1,32 @@
	*  [Unit]
	*  Description=Docker Application Container Engine
	*  Documentation=https://docs.docker.com
	* +BindsTo=containerd.service
	*  After=network-online.target firewalld.service containerd.service
	*  Wants=network-online.target
	* -Requires=docker.socket containerd.service
	* +Requires=docker.socket
	* +StartLimitBurst=3
	* +StartLimitIntervalSec=60
	*  
	*  [Service]
	*  Type=notify
	* -# the default is not to use systemd for cgroups because the delegate issues still
	* -# exists and systemd currently does not support the cgroup feature set required
	* -# for containers run by docker
	* -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	* -ExecReload=/bin/kill -s HUP $MAINPID
	* -TimeoutSec=0
	* -RestartSec=2
	* -Restart=always
	* -
	* -# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	* -# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	* -# to make them work for either version of systemd.
	* -StartLimitBurst=3
	* +Restart=on-failure
	*  
	* -# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	* -# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	* -# this option work for either version of systemd.
	* -StartLimitInterval=60s
	* +
	* +
	* +# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* +# The base configuration already specifies an 'ExecStart=...' command. The first directive
	* +# here is to clear out that command inherited from the base configuration. Without this,
	* +# the command from the base configuration and the command specified here are treated as
	* +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* +# will catch this invalid input and refuse to start the service with an error like:
	* +#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* +
	* +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* +# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* +ExecStart=
	* +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* +ExecReload=/bin/kill -s HUP $MAINPID
	*  
	*  # Having non-zero Limit*s causes performance problems due to accounting overhead
	*  # in the kernel. We recommend using cgroups to do container-local accounting.
	* @@ -32,16 +34,16 @@
	*  LimitNPROC=infinity
	*  LimitCORE=infinity
	*  
	* -# Comment TasksMax if your systemd version does not support it.
	* -# Only systemd 226 and above support this option.
	* +# Uncomment TasksMax if your systemd version supports it.
	* +# Only systemd 226 and above support this version.
	*  TasksMax=infinity
	* +TimeoutStartSec=0
	*  
	*  # set delegate yes so that systemd does not reset the cgroups of docker containers
	*  Delegate=yes
	*  
	*  # kill only the docker process, not all processes in the cgroup
	*  KillMode=process
	* -OOMScoreAdjust=-500
	*  
	*  [Install]
	*  WantedBy=multi-user.target
	* Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	* Executing: /lib/systemd/systemd-sysv-install enable docker
	* 
	* I0310 21:16:51.503472    7648 machine.go:91] provisioned docker machine in 29.6843846s
	* I0310 21:16:51.503472    7648 client.go:171] LocalClient.Create took 59.1717123s
	* I0310 21:16:51.504124    7648 start.go:168] duration metric: libmachine.API.Create for "cilium-20210310211546-6496" took 59.1722108s
	* I0310 21:16:51.504124    7648 start.go:267] post-start starting for "cilium-20210310211546-6496" (driver="docker")
	* I0310 21:16:51.504265    7648 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	* I0310 21:16:51.514957    7648 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	* I0310 21:16:51.522127    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:16:52.109677    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:16:52.451452    7648 ssh_runner.go:149] Run: cat /etc/os-release
	* I0310 21:16:52.484699    7648 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	* I0310 21:16:52.485654    7648 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	* I0310 21:16:52.485654    7648 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	* I0310 21:16:52.485654    7648 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	* I0310 21:16:52.485654    7648 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	* I0310 21:16:52.486314    7648 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	* I0310 21:16:52.489038    7648 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	* I0310 21:16:52.491146    7648 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	* I0310 21:16:52.512974    7648 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	* I0310 21:16:52.564122    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	* I0310 21:16:52.803426    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	* I0310 21:16:53.190385    7648 start.go:270] post-start completed in 1.6862636s
	* I0310 21:16:53.231217    7648 cli_runner.go:115] Run: docker container inspect -f "" cilium-20210310211546-6496
	* I0310 21:16:53.818281    7648 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\config.json ...
	* I0310 21:16:53.873454    7648 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	* I0310 21:16:53.881015    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:16:54.497217    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:16:54.811004    7648 start.go:129] duration metric: createHost completed in 1m2.484093s
	* I0310 21:16:54.811004    7648 start.go:80] releasing machines lock for "cilium-20210310211546-6496", held for 1m2.4848319s
	* I0310 21:16:54.818877    7648 cli_runner.go:115] Run: docker container inspect -f "" cilium-20210310211546-6496
	* I0310 21:16:55.424282    7648 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	* I0310 21:16:55.436603    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:16:55.436603    7648 ssh_runner.go:149] Run: systemctl --version
	* I0310 21:16:55.442682    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:16:56.084335    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:16:56.135045    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:16:52.015706   18444 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:16:52.508191   18444 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:16:53.012452   18444 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:16:54.673131    8732 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	* I0310 21:16:54.673131    8732 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	* I0310 21:16:54.673131    8732 ubuntu.go:177] setting up certificates
	* I0310 21:16:54.673131    8732 provision.go:83] configureAuth start
	* I0310 21:16:54.680882    8732 cli_runner.go:115] Run: docker container inspect -f "" no-preload-20210310204947-6496
	* I0310 21:16:55.311936    8732 provision.go:137] copyHostCerts
	* I0310 21:16:55.312404    8732 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	* I0310 21:16:55.312404    8732 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	* I0310 21:16:55.312404    8732 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	* I0310 21:16:55.327767    8732 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	* I0310 21:16:55.327953    8732 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	* I0310 21:16:55.328258    8732 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	* I0310 21:16:55.332034    8732 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	* I0310 21:16:55.332034    8732 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	* I0310 21:16:55.332621    8732 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	* I0310 21:16:55.335371    8732 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.no-preload-20210310204947-6496 san=[172.17.0.7 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20210310204947-6496]
	* I0310 21:16:55.577282    8732 provision.go:165] copyRemoteCerts
	* I0310 21:16:55.589849    8732 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	* I0310 21:16:55.595491    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	* I0310 21:16:56.213951    8732 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55198 SSHKeyPath:C:\Users\jenkins\.minikube\machines\no-preload-20210310204947-6496\id_rsa Username:docker}
	* I0310 21:16:56.837304    8732 ssh_runner.go:189] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.2474564s)
	* I0310 21:16:56.837877    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	* I0310 21:16:57.139185    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	* I0310 21:16:57.464497    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1265 bytes)
	* I0310 21:16:57.785260    8732 provision.go:86] duration metric: configureAuth took 3.1121331s
	* I0310 21:16:57.785793    8732 ubuntu.go:193] setting minikube options for container-runtime
	* I0310 21:16:57.801250    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	* I0310 21:16:58.390912    8732 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:16:58.390912    8732 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55198 <nil> <nil>}
	* I0310 21:16:58.390912    8732 main.go:121] libmachine: About to run SSH command:
	* df --output=fstype / | tail -n 1
	* I0310 21:16:59.102294    8732 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	* 
	* I0310 21:16:59.102294    8732 ubuntu.go:71] root file system type: overlay
	* I0310 21:16:59.102777    8732 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	* I0310 21:16:59.103141    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	* I0310 21:16:56.724050    7648 ssh_runner.go:189] Completed: systemctl --version: (1.2874485s)
	* I0310 21:16:56.729337    7648 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.3050567s)
	* I0310 21:16:56.748709    7648 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	* I0310 21:16:56.876780    7648 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	* I0310 21:16:56.978673    7648 cruntime.go:206] skipping containerd shutdown because we are bound to it
	* I0310 21:16:56.991056    7648 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	* I0310 21:16:57.052842    7648 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	* image-endpoint: unix:///var/run/dockershim.sock
	* " | sudo tee /etc/crictl.yaml"
	* I0310 21:16:57.253661    7648 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	* I0310 21:16:57.380378    7648 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	* I0310 21:16:58.478430    7648 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.0980537s)
	* I0310 21:16:58.492202    7648 ssh_runner.go:149] Run: sudo systemctl start docker
	* I0310 21:16:58.608270    7648 ssh_runner.go:149] Run: docker version --format 
	* I0310 21:16:59.466303    7648 out.go:150] * Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	* I0310 21:16:59.476087    7648 cli_runner.go:115] Run: docker exec -t cilium-20210310211546-6496 dig +short host.docker.internal
	* I0310 21:17:00.570611    7648 cli_runner.go:168] Completed: docker exec -t cilium-20210310211546-6496 dig +short host.docker.internal: (1.0945251s)
	* I0310 21:17:00.570755    7648 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	* I0310 21:17:00.579680    7648 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	* I0310 21:17:00.614093    7648 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	* I0310 21:17:00.771471    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:16:57.667209   22316 out.go:150]   - Generating certificates and keys ...
	* I0310 21:17:01.667854   16712 main.go:121] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-01-29 14:31:32.000000000 +0000
	* +++ /lib/systemd/system/docker.service.new	2021-03-10 21:16:48.520209000 +0000
	* @@ -1,30 +1,32 @@
	*  [Unit]
	*  Description=Docker Application Container Engine
	*  Documentation=https://docs.docker.com
	* +BindsTo=containerd.service
	*  After=network-online.target firewalld.service containerd.service
	*  Wants=network-online.target
	* -Requires=docker.socket containerd.service
	* +Requires=docker.socket
	* +StartLimitBurst=3
	* +StartLimitIntervalSec=60
	*  
	*  [Service]
	*  Type=notify
	* -# the default is not to use systemd for cgroups because the delegate issues still
	* -# exists and systemd currently does not support the cgroup feature set required
	* -# for containers run by docker
	* -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	* -ExecReload=/bin/kill -s HUP $MAINPID
	* -TimeoutSec=0
	* -RestartSec=2
	* -Restart=always
	* -
	* -# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	* -# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	* -# to make them work for either version of systemd.
	* -StartLimitBurst=3
	* +Restart=on-failure
	*  
	* -# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	* -# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	* -# this option work for either version of systemd.
	* -StartLimitInterval=60s
	* +
	* +
	* +# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* +# The base configuration already specifies an 'ExecStart=...' command. The first directive
	* +# here is to clear out that command inherited from the base configuration. Without this,
	* +# the command from the base configuration and the command specified here are treated as
	* +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* +# will catch this invalid input and refuse to start the service with an error like:
	* +#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* +
	* +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* +# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* +ExecStart=
	* +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* +ExecReload=/bin/kill -s HUP $MAINPID
	*  
	*  # Having non-zero Limit*s causes performance problems due to accounting overhead
	*  # in the kernel. We recommend using cgroups to do container-local accounting.
	* @@ -32,16 +34,16 @@
	*  LimitNPROC=infinity
	*  LimitCORE=infinity
	*  
	* -# Comment TasksMax if your systemd version does not support it.
	* -# Only systemd 226 and above support this option.
	* +# Uncomment TasksMax if your systemd version supports it.
	* +# Only systemd 226 and above support this version.
	*  TasksMax=infinity
	* +TimeoutStartSec=0
	*  
	*  # set delegate yes so that systemd does not reset the cgroups of docker containers
	*  Delegate=yes
	*  
	*  # kill only the docker process, not all processes in the cgroup
	*  KillMode=process
	* -OOMScoreAdjust=-500
	*  
	*  [Install]
	*  WantedBy=multi-user.target
	* Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	* Executing: /lib/systemd/systemd-sysv-install enable docker
	* 
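The unit file written above relies on a systemd quirk worth noting: for a non-oneshot service, redefining `ExecStart=` requires first emitting an empty `ExecStart=` to clear the inherited command, exactly as the comments in the file describe. A minimal standalone sketch of that reset pattern, written to a temp file rather than any real systemd path (paths illustrative):

```shell
# Demonstrate the ExecStart reset pattern used in the unit file above.
# The empty ExecStart= clears any previously defined command; the second
# one sets the replacement. Written to a temp file, not a systemd path.
unit=$(mktemp)
cat > "$unit" <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF
# Two ExecStart= lines: one blank (the reset), one with the real command.
execstart_count=$(grep -c '^ExecStart=' "$unit")
echo "$execstart_count"
rm -f "$unit"
```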
	* I0310 21:17:01.668091   16712 machine.go:91] provisioned docker machine in 27.8560883s
	* I0310 21:17:01.668091   16712 client.go:171] LocalClient.Create took 52.8022654s
	* I0310 21:17:01.668091   16712 start.go:168] duration metric: libmachine.API.Create for "calico-20210310211603-6496" took 52.8032909s
	* I0310 21:17:01.668091   16712 start.go:267] post-start starting for "calico-20210310211603-6496" (driver="docker")
	* I0310 21:17:01.668091   16712 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	* I0310 21:17:01.679776   16712 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	* I0310 21:17:01.690857   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	* I0310 21:17:02.232622   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	* I0310 21:17:02.753758   16712 ssh_runner.go:189] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.0739835s)
	* I0310 21:17:02.754532   16712 ssh_runner.go:149] Run: cat /etc/os-release
	* I0310 21:17:02.788501   16712 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	* I0310 21:17:02.788501   16712 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	* I0310 21:17:02.788501   16712 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	* I0310 21:17:02.788501   16712 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	* I0310 21:17:02.788851   16712 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	* I0310 21:17:02.789197   16712 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	* I0310 21:17:02.791914   16712 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	* I0310 21:17:02.792401   16712 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	* I0310 21:17:02.803372   16712 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	* I0310 21:17:02.873691   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	* I0310 21:17:03.415526   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	* I0310 21:16:59.718624    8732 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:16:59.719292    8732 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55198 <nil> <nil>}
	* I0310 21:16:59.719509    8732 main.go:121] libmachine: About to run SSH command:
	* sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP \$MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this option.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* " | sudo tee /lib/systemd/system/docker.service.new
	* I0310 21:17:00.813707    8732 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP $MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this option.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* 
	* I0310 21:17:00.821329    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	* I0310 21:17:01.402792    8732 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:17:01.403160    8732 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55198 <nil> <nil>}
	* I0310 21:17:01.403693    8732 main.go:121] libmachine: About to run SSH command:
	* sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	* I0310 21:17:02.686125    8732 main.go:121] libmachine: SSH cmd err, output: <nil>: 
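The `diff -u old new || { mv ...; restart ...; }` command above is a write-if-changed idiom: `diff` exits 0 when the installed unit already matches the rendered one, so the replace-and-restart branch after `||` only runs when the content actually differs (which is why this run produced empty output and no restart). A standalone sketch against temp files, with no systemd involved:

```shell
# Write-if-changed: only replace the installed file when content differs.
current=$(mktemp); candidate=$(mktemp)
printf 'old-config\n' > "$current"
printf 'new-config\n' > "$candidate"
# diff exits 0 when the files are identical, so the branch after || is the
# update path; here the files differ, so the candidate is copied into place.
diff -u "$current" "$candidate" >/dev/null || cp "$candidate" "$current"
result=$(cat "$current")
echo "$result"
rm -f "$current" "$candidate"
```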
	* I0310 21:17:02.686125    8732 machine.go:91] provisioned docker machine in 17.6518901s
	* I0310 21:17:02.686125    8732 start.go:267] post-start starting for "no-preload-20210310204947-6496" (driver="docker")
	* I0310 21:17:02.686125    8732 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	* I0310 21:17:02.706959    8732 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	* I0310 21:17:02.712857    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	* I0310 21:17:03.328656    8732 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55198 SSHKeyPath:C:\Users\jenkins\.minikube\machines\no-preload-20210310204947-6496\id_rsa Username:docker}
	* I0310 21:17:04.113044    8732 ssh_runner.go:189] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.4060866s)
	* I0310 21:17:04.126379    8732 ssh_runner.go:149] Run: cat /etc/os-release
	* I0310 21:17:04.194515    8732 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	* I0310 21:17:04.194656    8732 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	* I0310 21:17:04.194656    8732 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	* I0310 21:17:04.194656    8732 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	* I0310 21:17:04.194903    8732 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	* I0310 21:17:04.195509    8732 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	* I0310 21:17:04.199360    8732 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	* I0310 21:17:04.200809    8732 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	* I0310 21:17:01.438931    7648 localpath.go:92] copying C:\Users\jenkins\.minikube\client.crt -> C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\client.crt
	* I0310 21:17:01.449160    7648 localpath.go:117] copying C:\Users\jenkins\.minikube\client.key -> C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\client.key
	* I0310 21:17:01.452217    7648 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 21:17:01.452217    7648 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 21:17:01.460425    7648 ssh_runner.go:149] Run: docker images --format :
	* I0310 21:17:02.054299    7648 docker.go:423] Got preloaded images: 
	* I0310 21:17:02.054511    7648 docker.go:429] k8s.gcr.io/kube-proxy:v1.20.2 wasn't preloaded
	* I0310 21:17:02.057485    7648 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	* I0310 21:17:02.146370    7648 ssh_runner.go:149] Run: which lz4
	* I0310 21:17:02.252478    7648 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	* I0310 21:17:02.339889    7648 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	* I0310 21:17:02.340316    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (515083977 bytes)
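The preload transfer above is gated by an existence probe: `stat -c "%s %y"` on the remote path exits non-zero when the file is missing (the "Process exited with status 1" in the log), and that failure is what triggers the scp of the tarball. A sketch of the probe with an illustrative path; `stat -c` is GNU coreutils syntax, as on the Ubuntu-based minikube guest:

```shell
# Probe for a file before transferring it; a failing stat means "copy needed".
probe=/tmp/preload-probe-$$.tar.lz4   # illustrative path, guaranteed absent
rm -f "$probe"
need_transfer=no
# stat prints "<size> <mtime>" on success and exits 1 when the file is absent.
stat -c '%s %y' "$probe" >/dev/null 2>&1 || need_transfer=yes
echo "$need_transfer"
```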
	* I0310 21:17:03.508657   18444 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	* I0310 21:17:03.973904   16712 start.go:270] post-start completed in 2.305817s
	* I0310 21:17:04.015466   16712 cli_runner.go:115] Run: docker container inspect -f "" calico-20210310211603-6496
	* I0310 21:17:04.676467   16712 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\config.json ...
	* I0310 21:17:04.725987   16712 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	* I0310 21:17:04.739995   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	* I0310 21:17:05.329901   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	* I0310 21:17:06.080331   16712 ssh_runner.go:189] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.3543456s)
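The disk check above pipes `df -h /var` through `awk 'NR==2{print $5}'` to pull the use-percentage column from the second (data) line of the report. The same one-liner against `/`, with `-P` added so each filesystem is guaranteed to stay on a single line:

```shell
# Grab the use% column (field 5) from the data line of df's POSIX-format output.
use=$(df -P / | awk 'NR==2{print $5}')
echo "$use"   # a percentage such as 42%
```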
	* I0310 21:17:06.080331   16712 start.go:129] duration metric: createHost completed in 57.2264679s
	* I0310 21:17:06.080331   16712 start.go:80] releasing machines lock for "calico-20210310211603-6496", held for 57.2264679s
	* I0310 21:17:06.091269   16712 cli_runner.go:115] Run: docker container inspect -f "" calico-20210310211603-6496
	* I0310 21:17:06.727530   16712 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	* I0310 21:17:06.734852   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	* I0310 21:17:06.736848   16712 ssh_runner.go:149] Run: systemctl --version
	* I0310 21:17:06.745255   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	* I0310 21:17:07.393700   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	* I0310 21:17:07.426097   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	* I0310 21:17:08.540119   16712 ssh_runner.go:189] Completed: systemctl --version: (1.8032739s)
	* I0310 21:17:08.553867   16712 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	* I0310 21:17:04.217254    8732 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	* I0310 21:17:04.292934    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	* I0310 21:17:04.913283    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	* I0310 21:17:05.815667    8732 start.go:270] post-start completed in 3.1295462s
	* I0310 21:17:05.840331    8732 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	* I0310 21:17:05.848333    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	* I0310 21:17:06.493213    8732 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55198 SSHKeyPath:C:\Users\jenkins\.minikube\machines\no-preload-20210310204947-6496\id_rsa Username:docker}
	* I0310 21:17:07.240444    8732 ssh_runner.go:189] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.3991092s)
	* I0310 21:17:07.240911    8732 fix.go:57] fixHost completed within 27.8798913s
	* I0310 21:17:07.240911    8732 start.go:80] releasing machines lock for "no-preload-20210310204947-6496", held for 27.8804014s
	* I0310 21:17:07.254996    8732 cli_runner.go:115] Run: docker container inspect -f "" no-preload-20210310204947-6496
	* I0310 21:17:07.846414    8732 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	* I0310 21:17:07.856007    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	* I0310 21:17:07.861061    8732 ssh_runner.go:149] Run: systemctl --version
	* I0310 21:17:07.869292    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	* I0310 21:17:08.484249    8732 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55198 SSHKeyPath:C:\Users\jenkins\.minikube\machines\no-preload-20210310204947-6496\id_rsa Username:docker}
	* I0310 21:17:08.510889    8732 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55198 SSHKeyPath:C:\Users\jenkins\.minikube\machines\no-preload-20210310204947-6496\id_rsa Username:docker}
	* I0310 21:17:08.993572   16712 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (2.265904s)
	* I0310 21:17:08.995950   16712 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	* I0310 21:17:09.219729   16712 cruntime.go:206] skipping containerd shutdown because we are bound to it
	* I0310 21:17:09.233360   16712 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	* I0310 21:17:09.412079   16712 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	* image-endpoint: unix:///var/run/dockershim.sock
	* " | sudo tee /etc/crictl.yaml"
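The crictl configuration above is written with `printf %s ... | sudo tee`, which both creates the root-owned file and echoes its contents back for the log. A sketch of the same two-line YAML write, targeting a temp file instead of `/etc/crictl.yaml` so no sudo is needed:

```shell
# Render the crictl endpoint config; tee writes the file (and would echo it).
cfg=$(mktemp)
printf '%s' 'runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
' | tee "$cfg" >/dev/null
# Both endpoints point at the dockershim socket, so the name appears twice.
endpoint_count=$(grep -c 'dockershim.sock' "$cfg")
echo "$endpoint_count"
rm -f "$cfg"
```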
	* I0310 21:17:10.256600   16712 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	* I0310 21:17:10.489479   16712 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	* I0310 21:17:12.525124   16712 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (2.0356478s)
	* I0310 21:17:12.539525   16712 ssh_runner.go:149] Run: sudo systemctl start docker
	* I0310 21:17:12.738987   16712 ssh_runner.go:149] Run: docker version --format 
	* I0310 21:17:13.875573   16712 ssh_runner.go:189] Completed: docker version --format : (1.136587s)
	* I0310 21:17:11.846711   12868 out.go:150]   - Booting up control plane ...
	* I0310 21:17:09.793929    8732 ssh_runner.go:189] Completed: systemctl --version: (1.9328707s)
	* I0310 21:17:09.805991    8732 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	* I0310 21:17:11.041171    8732 ssh_runner.go:189] Completed: sudo systemctl is-active --quiet service containerd: (1.2351821s)
	* I0310 21:17:11.045389    8732 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (3.1986057s)
	* I0310 21:17:11.053744    8732 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	* I0310 21:17:11.370173    8732 cruntime.go:206] skipping containerd shutdown because we are bound to it
	* I0310 21:17:11.381746    8732 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	* I0310 21:17:11.604303    8732 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	* image-endpoint: unix:///var/run/dockershim.sock
	* " | sudo tee /etc/crictl.yaml"
	* I0310 21:17:11.934999    8732 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	* I0310 21:17:12.307919    8732 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	* I0310 21:17:14.006877   18444 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	* I0310 21:17:15.034724    8732 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (2.7266351s)
	* I0310 21:17:15.049223    8732 ssh_runner.go:149] Run: sudo systemctl start docker
	* I0310 21:17:15.284083    8732 ssh_runner.go:149] Run: docker version --format 
	* I0310 21:17:17.018992    8732 ssh_runner.go:189] Completed: docker version --format : (1.7349115s)
	* I0310 21:17:13.880869   16712 out.go:150] * Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	* I0310 21:17:13.890523   16712 cli_runner.go:115] Run: docker exec -t calico-20210310211603-6496 dig +short host.docker.internal
	* I0310 21:17:15.188631   16712 cli_runner.go:168] Completed: docker exec -t calico-20210310211603-6496 dig +short host.docker.internal: (1.2979058s)
	* I0310 21:17:15.188631   16712 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	* I0310 21:17:15.206029   16712 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	* I0310 21:17:15.257967   16712 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
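The `/etc/hosts` edit above is idempotent: `grep -v` drops any stale line ending in a tab plus the host name, the fresh entry is appended, and the result is copied back over the original. A sketch against a temp file (the real command targets `/etc/hosts` and needs sudo; a literal tab is built with `printf` so the pattern is portable):

```shell
# Idempotently replace the host.minikube.internal entry in a hosts file.
hosts=$(mktemp)
tab=$(printf '\t')
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"
# Drop the stale entry, append the fresh one, then move the result into place.
{ grep -v "${tab}host.minikube.internal\$" "$hosts"
  printf '192.168.65.2\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
# Exactly one entry should remain, carrying the new address.
entry_count=$(grep -c 'host.minikube.internal' "$hosts")
new_ip=$(awk '/host.minikube.internal/{print $1}' "$hosts")
echo "$entry_count $new_ip"
rm -f "$hosts"
```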
	* I0310 21:17:15.419609   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20210310211603-6496
	* I0310 21:17:16.036438   16712 localpath.go:92] copying C:\Users\jenkins\.minikube\client.crt -> C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\client.crt
	* I0310 21:17:16.055119   16712 localpath.go:117] copying C:\Users\jenkins\.minikube\client.key -> C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\client.key
	* I0310 21:17:16.058703   16712 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 21:17:16.059117   16712 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 21:17:16.070563   16712 ssh_runner.go:149] Run: docker images --format :
	* I0310 21:17:16.606686   16712 docker.go:423] Got preloaded images: 
	* I0310 21:17:16.606930   16712 docker.go:429] k8s.gcr.io/kube-proxy:v1.20.2 wasn't preloaded
	* I0310 21:17:16.619991   16712 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	* I0310 21:17:16.729235   16712 ssh_runner.go:149] Run: which lz4
	* I0310 21:17:16.822044   16712 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	* I0310 21:17:16.915979   16712 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	* I0310 21:17:16.915979   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (515083977 bytes)
	* I0310 21:17:17.023322    8732 out.go:150] * Preparing Kubernetes v1.20.5-rc.0 on Docker 20.10.3 ...
	* I0310 21:17:17.031071    8732 cli_runner.go:115] Run: docker exec -t no-preload-20210310204947-6496 dig +short host.docker.internal
	* I0310 21:17:18.396533    8732 cli_runner.go:168] Completed: docker exec -t no-preload-20210310204947-6496 dig +short host.docker.internal: (1.3654639s)
	* I0310 21:17:18.396670    8732 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	* I0310 21:17:18.407852    8732 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	* I0310 21:17:18.491188    8732 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	* I0310 21:17:18.709478    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	* I0310 21:17:19.378790    8732 preload.go:97] Checking if preload exists for k8s version v1.20.5-rc.0 and runtime docker
	* I0310 21:17:19.386325    8732 ssh_runner.go:149] Run: docker images --format :
	* I0310 21:17:20.447987    8732 ssh_runner.go:189] Completed: docker images --format :: (1.0603579s)
	* I0310 21:17:20.448261    8732 docker.go:423] Got preloaded images: -- stdout --
	* k8s.gcr.io/kube-apiserver:v1.20.5-rc.0
	* k8s.gcr.io/kube-controller-manager:v1.20.5-rc.0
	* k8s.gcr.io/kube-proxy:v1.20.5-rc.0
	* k8s.gcr.io/kube-scheduler:v1.20.5-rc.0
	* kubernetesui/dashboard:v2.1.0
	* gcr.io/k8s-minikube/storage-provisioner:v4
	* k8s.gcr.io/etcd:3.4.13-0
	* k8s.gcr.io/coredns:1.7.0
	* kubernetesui/metrics-scraper:v1.0.4
	* k8s.gcr.io/pause:3.2
	* 
	* -- /stdout --
	* I0310 21:17:20.448261    8732 cache_images.go:73] Images are preloaded, skipping loading
	* I0310 21:17:20.456855    8732 ssh_runner.go:149] Run: docker info --format 
	* I0310 21:17:23.429911    8732 ssh_runner.go:189] Completed: docker info --format : (2.9730607s)
	* I0310 21:17:23.429911    8732 cni.go:74] Creating CNI manager for ""
	* I0310 21:17:23.430413    8732 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	* I0310 21:17:23.430413    8732 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	* I0310 21:17:23.430413    8732 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.7 APIServerPort:8443 KubernetesVersion:v1.20.5-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20210310204947-6496 NodeName:no-preload-20210310204947-6496 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	* I0310 21:17:23.430755    8732 kubeadm.go:154] kubeadm config:
	* apiVersion: kubeadm.k8s.io/v1beta2
	* kind: InitConfiguration
	* localAPIEndpoint:
	*   advertiseAddress: 172.17.0.7
	*   bindPort: 8443
	* bootstrapTokens:
	*   - groups:
	*       - system:bootstrappers:kubeadm:default-node-token
	*     ttl: 24h0m0s
	*     usages:
	*       - signing
	*       - authentication
	* nodeRegistration:
	*   criSocket: /var/run/dockershim.sock
	*   name: "no-preload-20210310204947-6496"
	*   kubeletExtraArgs:
	*     node-ip: 172.17.0.7
	*   taints: []
	* ---
	* apiVersion: kubeadm.k8s.io/v1beta2
	* kind: ClusterConfiguration
	* apiServer:
	*   certSANs: ["127.0.0.1", "localhost", "172.17.0.7"]
	*   extraArgs:
	*     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	* controllerManager:
	*   extraArgs:
	*     allocate-node-cidrs: "true"
	*     leader-elect: "false"
	* scheduler:
	*   extraArgs:
	*     leader-elect: "false"
	* certificatesDir: /var/lib/minikube/certs
	* clusterName: mk
	* controlPlaneEndpoint: control-plane.minikube.internal:8443
	* dns:
	*   type: CoreDNS
	* etcd:
	*   local:
	*     dataDir: /var/lib/minikube/etcd
	*     extraArgs:
	*       proxy-refresh-interval: "70000"
	* kubernetesVersion: v1.20.5-rc.0
	* networking:
	*   dnsDomain: cluster.local
	*   podSubnet: "10.244.0.0/16"
	*   serviceSubnet: 10.96.0.0/12
	* ---
	* apiVersion: kubelet.config.k8s.io/v1beta1
	* kind: KubeletConfiguration
	* authentication:
	*   x509:
	*     clientCAFile: /var/lib/minikube/certs/ca.crt
	* cgroupDriver: cgroupfs
	* clusterDomain: "cluster.local"
	* # disable disk resource management by default
	* imageGCHighThresholdPercent: 100
	* evictionHard:
	*   nodefs.available: "0%"
	*   nodefs.inodesFree: "0%"
	*   imagefs.available: "0%"
	* failSwapOn: false
	* staticPodPath: /etc/kubernetes/manifests
	* ---
	* apiVersion: kubeproxy.config.k8s.io/v1alpha1
	* kind: KubeProxyConfiguration
	* clusterCIDR: "10.244.0.0/16"
	* metricsBindAddress: 0.0.0.0:10249
	* 
	* I0310 21:17:23.430755    8732 kubeadm.go:919] kubelet [Unit]
	* Wants=docker.socket
	* 
	* [Service]
	* ExecStart=
	* ExecStart=/var/lib/minikube/binaries/v1.20.5-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=no-preload-20210310204947-6496 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.7
	* 
	* [Install]
	*  config:
	* {KubernetesVersion:v1.20.5-rc.0 ClusterName:no-preload-20210310204947-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	* I0310 21:17:23.443186    8732 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.5-rc.0
	* I0310 21:17:23.624956    8732 binaries.go:44] Found k8s binaries, skipping transfer
	* I0310 21:17:23.634672    8732 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	* I0310 21:17:23.823344    8732 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	* I0310 21:17:24.113718    8732 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	* I0310 21:17:24.505332   18444 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	* I0310 21:17:24.257875    8732 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1861 bytes)
	* I0310 21:17:24.574354    8732 ssh_runner.go:149] Run: grep 172.17.0.7	control-plane.minikube.internal$ /etc/hosts
	* I0310 21:17:24.668367    8732 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "172.17.0.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	* I0310 21:17:24.855151    8732 certs.go:52] Setting up C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496 for IP: 172.17.0.7
	* I0310 21:17:24.855151    8732 certs.go:171] skipping minikubeCA CA generation: C:\Users\jenkins\.minikube\ca.key
	* I0310 21:17:24.855151    8732 certs.go:171] skipping proxyClientCA CA generation: C:\Users\jenkins\.minikube\proxy-client-ca.key
	* I0310 21:17:24.856305    8732 certs.go:275] skipping minikube-user signed cert generation: C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\client.key
	* I0310 21:17:24.856305    8732 certs.go:275] skipping minikube signed cert generation: C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\apiserver.key.d9a465bc
	* I0310 21:17:24.857032    8732 certs.go:275] skipping aggregator signed cert generation: C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\proxy-client.key
	* I0310 21:17:24.858417    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992.pem (1338 bytes)
	* W0310 21:17:24.859032    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.859032    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140.pem (1338 bytes)
	* W0310 21:17:24.859452    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.859452    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156.pem (1338 bytes)
	* W0310 21:17:24.859452    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.859967    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056.pem (1338 bytes)
	* W0310 21:17:24.859967    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.859967    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476.pem (1338 bytes)
	* W0310 21:17:24.859967    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.859967    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728.pem (1338 bytes)
	* W0310 21:17:24.861104    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.861264    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984.pem (1338 bytes)
	* W0310 21:17:24.861471    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.861471    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232.pem (1338 bytes)
	* W0310 21:17:24.862056    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.862056    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512.pem (1338 bytes)
	* W0310 21:17:24.862505    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.862505    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056.pem (1338 bytes)
	* W0310 21:17:24.863016    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.863016    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516.pem (1338 bytes)
	* W0310 21:17:24.863016    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.863016    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352.pem (1338 bytes)
	* W0310 21:17:24.863623    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.863623    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920.pem (1338 bytes)
	* W0310 21:17:24.864065    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.864432    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052.pem (1338 bytes)
	* W0310 21:17:24.865057    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.865057    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452.pem (1338 bytes)
	* W0310 21:17:24.865716    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.865716    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588.pem (1338 bytes)
	* W0310 21:17:24.865716    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.865716    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944.pem (1338 bytes)
	* W0310 21:17:24.866744    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.866744    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040.pem (1338 bytes)
	* W0310 21:17:24.867137    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.867496    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172.pem (1338 bytes)
	* W0310 21:17:24.868015    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.868374    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372.pem (1338 bytes)
	* W0310 21:17:24.869117    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.869506    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396.pem (1338 bytes)
	* W0310 21:17:24.869900    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.870271    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700.pem (1338 bytes)
	* W0310 21:17:24.870647    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.871044    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736.pem (1338 bytes)
	* W0310 21:17:24.871429    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.871429    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368.pem (1338 bytes)
	* W0310 21:17:24.871836    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.872233    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492.pem (1338 bytes)
	* W0310 21:17:24.872641    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.873040    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496.pem (1338 bytes)
	* W0310 21:17:24.873492    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.873492    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552.pem (1338 bytes)
	* W0310 21:17:24.874296    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.874296    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692.pem (1338 bytes)
	* W0310 21:17:24.875119    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.875119    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856.pem (1338 bytes)
	* W0310 21:17:24.876115    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.876115    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024.pem (1338 bytes)
	* W0310 21:17:24.876958    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.877370    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160.pem (1338 bytes)
	* W0310 21:17:24.878107    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.878107    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432.pem (1338 bytes)
	* W0310 21:17:24.878954    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.879353    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440.pem (1338 bytes)
	* W0310 21:17:24.879775    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.880164    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452.pem (1338 bytes)
	* W0310 21:17:24.880164    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.880164    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800.pem (1338 bytes)
	* W0310 21:17:24.880836    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.881338    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464.pem (1338 bytes)
	* W0310 21:17:24.881338    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.881338    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748.pem (1338 bytes)
	* W0310 21:17:24.882308    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.882308    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088.pem (1338 bytes)
	* W0310 21:17:24.882308    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.882308    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520.pem (1338 bytes)
	* W0310 21:17:24.883213    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520_empty.pem, impossibly tiny 0 bytes
	* I0310 21:17:24.883213    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca-key.pem (1679 bytes)
	* I0310 21:17:24.883213    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca.pem (1078 bytes)
	* I0310 21:17:24.883213    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\cert.pem (1123 bytes)
	* I0310 21:17:24.884288    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\key.pem (1679 bytes)
	* I0310 21:17:24.890178    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	* I0310 21:17:25.463887    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	* I0310 21:17:25.916187    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	* I0310 21:17:26.596000    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	* I0310 21:17:27.247565    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	* I0310 21:17:27.727380    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	* I0310 21:17:28.245596    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	* I0310 21:17:28.695342    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	* I0310 21:17:29.090555    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1156.pem --> /usr/share/ca-certificates/1156.pem (1338 bytes)
	* I0310 21:17:29.538220    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\352.pem --> /usr/share/ca-certificates/352.pem (1338 bytes)
	* I0310 21:17:30.081238    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6692.pem --> /usr/share/ca-certificates/6692.pem (1338 bytes)
	* I0310 21:17:30.579419    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\10992.pem --> /usr/share/ca-certificates/10992.pem (1338 bytes)
	* I0310 21:17:30.936103    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	* I0310 21:17:31.704701    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6552.pem --> /usr/share/ca-certificates/6552.pem (1338 bytes)
	* I0310 21:17:32.311306    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3056.pem --> /usr/share/ca-certificates/3056.pem (1338 bytes)
	* I0310 21:17:32.691685    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7160.pem --> /usr/share/ca-certificates/7160.pem (1338 bytes)
	* I0310 21:17:33.056412    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4944.pem --> /usr/share/ca-certificates/4944.pem (1338 bytes)
	* I0310 21:17:33.574358    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5396.pem --> /usr/share/ca-certificates/5396.pem (1338 bytes)
	* I0310 21:17:34.021596    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4452.pem --> /usr/share/ca-certificates/4452.pem (1338 bytes)
	* I0310 21:17:35.004929   18444 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	* I0310 21:17:36.333412   22316 out.go:150]   - Booting up control plane ...
	* I0310 21:17:34.655227    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6368.pem --> /usr/share/ca-certificates/6368.pem (1338 bytes)
	* I0310 21:17:34.993674    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5372.pem --> /usr/share/ca-certificates/5372.pem (1338 bytes)
	* I0310 21:17:35.374491    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7024.pem --> /usr/share/ca-certificates/7024.pem (1338 bytes)
	* I0310 21:17:35.820231    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	* I0310 21:17:37.441588    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1728.pem --> /usr/share/ca-certificates/1728.pem (1338 bytes)
	* I0310 21:17:38.072462    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5736.pem --> /usr/share/ca-certificates/5736.pem (1338 bytes)
	* I0310 21:17:38.553570    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1984.pem --> /usr/share/ca-certificates/1984.pem (1338 bytes)
	* I0310 21:17:39.814231    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\232.pem --> /usr/share/ca-certificates/232.pem (1338 bytes)
	* I0310 21:17:40.380385    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1476.pem --> /usr/share/ca-certificates/1476.pem (1338 bytes)
	* I0310 21:17:40.942114    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3516.pem --> /usr/share/ca-certificates/3516.pem (1338 bytes)
	* I0310 21:17:41.438423    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5040.pem --> /usr/share/ca-certificates/5040.pem (1338 bytes)
	* I0310 21:17:41.844693    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6496.pem --> /usr/share/ca-certificates/6496.pem (1338 bytes)
	* I0310 21:17:42.348345    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7440.pem --> /usr/share/ca-certificates/7440.pem (1338 bytes)
	* I0310 21:17:42.689868    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6492.pem --> /usr/share/ca-certificates/6492.pem (1338 bytes)
	* I0310 21:17:43.072124    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8748.pem --> /usr/share/ca-certificates/8748.pem (1338 bytes)
	* I0310 21:17:43.564003    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5700.pem --> /usr/share/ca-certificates/5700.pem (1338 bytes)
	* I0310 21:17:44.093957    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8464.pem --> /usr/share/ca-certificates/8464.pem (1338 bytes)
	* I0310 21:17:45.510786   18444 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	* I0310 21:17:44.612040    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4588.pem --> /usr/share/ca-certificates/4588.pem (1338 bytes)
	* I0310 21:17:45.222841    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9088.pem --> /usr/share/ca-certificates/9088.pem (1338 bytes)
	* I0310 21:17:45.487749    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\800.pem --> /usr/share/ca-certificates/800.pem (1338 bytes)
	* I0310 21:17:45.715522    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9520.pem --> /usr/share/ca-certificates/9520.pem (1338 bytes)
	* I0310 21:17:46.270444    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1140.pem --> /usr/share/ca-certificates/1140.pem (1338 bytes)
	* I0310 21:17:46.794286    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3920.pem --> /usr/share/ca-certificates/3920.pem (1338 bytes)
	* I0310 21:17:47.157428    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6856.pem --> /usr/share/ca-certificates/6856.pem (1338 bytes)
	* I0310 21:17:47.425369    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7432.pem --> /usr/share/ca-certificates/7432.pem (1338 bytes)
	* I0310 21:17:47.889262    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\2512.pem --> /usr/share/ca-certificates/2512.pem (1338 bytes)
	* I0310 21:17:48.286577    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4052.pem --> /usr/share/ca-certificates/4052.pem (1338 bytes)
	* I0310 21:17:48.646841    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\12056.pem --> /usr/share/ca-certificates/12056.pem (1338 bytes)
	* I0310 21:17:49.198674    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5172.pem --> /usr/share/ca-certificates/5172.pem (1338 bytes)
	* I0310 21:17:49.564687    8732 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	* I0310 21:17:49.799526    8732 ssh_runner.go:149] Run: openssl version
	* I0310 21:17:49.878950    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6552.pem && ln -fs /usr/share/ca-certificates/6552.pem /etc/ssl/certs/6552.pem"
	* I0310 21:17:50.008965    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6552.pem
	* I0310 21:17:50.104033    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 19 22:08 /usr/share/ca-certificates/6552.pem
	* I0310 21:17:50.111411    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6552.pem
	* I0310 21:17:50.242562    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6552.pem /etc/ssl/certs/51391683.0"
	* I0310 21:17:50.425671    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3056.pem && ln -fs /usr/share/ca-certificates/3056.pem /etc/ssl/certs/3056.pem"
	* I0310 21:17:50.569480    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3056.pem
	* I0310 21:17:50.692987    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:11 /usr/share/ca-certificates/3056.pem
	* I0310 21:17:50.706949    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3056.pem
	* I0310 21:17:50.847290    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3056.pem /etc/ssl/certs/51391683.0"
	* I0310 21:17:51.042047    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7160.pem && ln -fs /usr/share/ca-certificates/7160.pem /etc/ssl/certs/7160.pem"
	* I0310 21:17:51.216864    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7160.pem
	* I0310 21:17:51.288339    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 12 04:51 /usr/share/ca-certificates/7160.pem
	* I0310 21:17:51.303418    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7160.pem
	* I0310 21:17:51.379294    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7160.pem /etc/ssl/certs/51391683.0"
	* I0310 21:17:51.489776    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4944.pem && ln -fs /usr/share/ca-certificates/4944.pem /etc/ssl/certs/4944.pem"
	* I0310 21:17:51.598100    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4944.pem
	* I0310 21:17:51.692288    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  9 23:40 /usr/share/ca-certificates/4944.pem
	* I0310 21:17:51.704092    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4944.pem
	* I0310 21:17:52.151770    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4944.pem /etc/ssl/certs/51391683.0"
	* I0310 21:17:52.376465    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5396.pem && ln -fs /usr/share/ca-certificates/5396.pem /etc/ssl/certs/5396.pem"
	* I0310 21:17:52.484748    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5396.pem
	* I0310 21:17:52.552102    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  8 23:38 /usr/share/ca-certificates/5396.pem
	* I0310 21:17:52.562772    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5396.pem
	* I0310 21:17:52.653293    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5396.pem /etc/ssl/certs/51391683.0"
	* I0310 21:17:52.841863    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7452.pem && ln -fs /usr/share/ca-certificates/7452.pem /etc/ssl/certs/7452.pem"
	* I0310 21:17:53.112220    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7452.pem
	* I0310 21:17:53.209195    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 20 00:41 /usr/share/ca-certificates/7452.pem
	* I0310 21:17:53.219547    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7452.pem
	* I0310 21:17:53.328774    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7452.pem /etc/ssl/certs/51391683.0"
	* I0310 21:17:53.510092    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6368.pem && ln -fs /usr/share/ca-certificates/6368.pem /etc/ssl/certs/6368.pem"
	* I0310 21:17:53.705501    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6368.pem
	* I0310 21:17:53.813132    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 19:11 /usr/share/ca-certificates/6368.pem
	* I0310 21:17:53.820609    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6368.pem
	* I0310 21:17:53.911342    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6368.pem /etc/ssl/certs/51391683.0"
	* I0310 21:17:54.084246    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5372.pem && ln -fs /usr/share/ca-certificates/5372.pem /etc/ssl/certs/5372.pem"
	* I0310 21:17:56.002851   18444 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	* I0310 21:17:54.251132    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5372.pem
	* I0310 21:17:54.316279    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 23 00:40 /usr/share/ca-certificates/5372.pem
	* I0310 21:17:54.326302    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5372.pem
	* I0310 21:17:54.455904    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5372.pem /etc/ssl/certs/51391683.0"
	* I0310 21:17:54.633441    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4452.pem && ln -fs /usr/share/ca-certificates/4452.pem /etc/ssl/certs/4452.pem"
	* I0310 21:17:54.781583    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4452.pem
	* I0310 21:17:54.845253    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 22 23:48 /usr/share/ca-certificates/4452.pem
	* I0310 21:17:54.855009    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4452.pem
	* I0310 21:17:54.972130    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4452.pem /etc/ssl/certs/51391683.0"
	* I0310 21:17:55.066770    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	* I0310 21:17:55.227347    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	* I0310 21:17:55.301781    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Jan  5 23:33 /usr/share/ca-certificates/minikubeCA.pem
	* I0310 21:17:55.312184    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	* I0310 21:17:55.473934    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	* I0310 21:17:55.785002    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1728.pem && ln -fs /usr/share/ca-certificates/1728.pem /etc/ssl/certs/1728.pem"
	* I0310 21:17:55.970018    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1728.pem
	* I0310 21:17:56.041144    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 20:57 /usr/share/ca-certificates/1728.pem
	* I0310 21:17:56.051242    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1728.pem
	* I0310 21:17:56.184413    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0"
	* I0310 21:17:56.290498    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5736.pem && ln -fs /usr/share/ca-certificates/5736.pem /etc/ssl/certs/5736.pem"
	* I0310 21:17:56.477984    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5736.pem
	* I0310 21:17:56.566617    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 25 23:18 /usr/share/ca-certificates/5736.pem
	* I0310 21:17:56.577831    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5736.pem
	* I0310 21:17:56.661072    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5736.pem /etc/ssl/certs/51391683.0"
	* I0310 21:17:56.772361    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1984.pem && ln -fs /usr/share/ca-certificates/1984.pem /etc/ssl/certs/1984.pem"
	* I0310 21:17:57.062894    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1984.pem
	* I0310 21:17:57.150185    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 21:55 /usr/share/ca-certificates/1984.pem
	* I0310 21:17:57.160892    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1984.pem
	* I0310 21:17:57.267435    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1984.pem /etc/ssl/certs/51391683.0"
	* I0310 21:17:57.538013    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/232.pem && ln -fs /usr/share/ca-certificates/232.pem /etc/ssl/certs/232.pem"
	* I0310 21:17:57.709472    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/232.pem
	* I0310 21:17:57.768697    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 28 02:13 /usr/share/ca-certificates/232.pem
	* I0310 21:17:57.777643    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/232.pem
	* I0310 21:17:57.920234    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/232.pem /etc/ssl/certs/51391683.0"
	* I0310 21:17:58.115435    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7024.pem && ln -fs /usr/share/ca-certificates/7024.pem /etc/ssl/certs/7024.pem"
	* I0310 21:17:58.244679    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7024.pem
	* I0310 21:17:58.311419    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 23:11 /usr/share/ca-certificates/7024.pem
	* I0310 21:17:58.325867    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7024.pem
	* I0310 21:17:58.422450    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7024.pem /etc/ssl/certs/51391683.0"
	* I0310 21:17:58.544126    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3516.pem && ln -fs /usr/share/ca-certificates/3516.pem /etc/ssl/certs/3516.pem"
	* I0310 21:17:58.707878    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3516.pem
	* I0310 21:17:58.781076    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 19:10 /usr/share/ca-certificates/3516.pem
	* I0310 21:17:58.794778    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3516.pem
	* I0310 21:17:58.861929    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3516.pem /etc/ssl/certs/51391683.0"
	* I0310 21:17:59.003895    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5040.pem && ln -fs /usr/share/ca-certificates/5040.pem /etc/ssl/certs/5040.pem"
	* I0310 21:17:59.287644    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5040.pem
	* I0310 21:17:59.387640    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 08:36 /usr/share/ca-certificates/5040.pem
	* I0310 21:17:59.397534    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5040.pem
	* I0310 21:17:59.512503    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5040.pem /etc/ssl/certs/51391683.0"
	* I0310 21:17:59.740448    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1476.pem && ln -fs /usr/share/ca-certificates/1476.pem /etc/ssl/certs/1476.pem"
	* I0310 21:17:59.927303    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1476.pem
	* I0310 21:17:59.966882    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:50 /usr/share/ca-certificates/1476.pem
	* I0310 21:17:59.983359    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1476.pem
	* I0310 21:18:00.061908    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1476.pem /etc/ssl/certs/51391683.0"
	* I0310 21:18:00.280790    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7440.pem && ln -fs /usr/share/ca-certificates/7440.pem /etc/ssl/certs/7440.pem"
	* I0310 21:18:00.479027    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7440.pem
	* I0310 21:18:00.530960    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 13 14:39 /usr/share/ca-certificates/7440.pem
	* I0310 21:18:00.541985    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7440.pem
	* I0310 21:18:00.654837    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7440.pem /etc/ssl/certs/51391683.0"
	* I0310 21:18:00.813979    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6492.pem && ln -fs /usr/share/ca-certificates/6492.pem /etc/ssl/certs/6492.pem"
	* I0310 21:18:01.042479    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6492.pem
	* I0310 21:18:01.100422    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 01:11 /usr/share/ca-certificates/6492.pem
	* I0310 21:18:01.112765    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6492.pem
	* I0310 21:18:01.290395    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6492.pem /etc/ssl/certs/51391683.0"
	* I0310 21:18:01.493949    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8748.pem && ln -fs /usr/share/ca-certificates/8748.pem /etc/ssl/certs/8748.pem"
	* I0310 21:18:01.617085    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8748.pem
	* I0310 21:18:01.713538    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 19:09 /usr/share/ca-certificates/8748.pem
	* I0310 21:18:01.724159    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8748.pem
	* I0310 21:18:01.872103    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8748.pem /etc/ssl/certs/51391683.0"
	* I0310 21:18:01.989901    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5700.pem && ln -fs /usr/share/ca-certificates/5700.pem /etc/ssl/certs/5700.pem"
	* I0310 21:18:02.171030    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5700.pem
	* I0310 21:18:02.290900    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  1 19:58 /usr/share/ca-certificates/5700.pem
	* I0310 21:18:02.302715    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5700.pem
	* I0310 21:18:02.410133    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5700.pem /etc/ssl/certs/51391683.0"
	* I0310 21:18:02.628781    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8464.pem && ln -fs /usr/share/ca-certificates/8464.pem /etc/ssl/certs/8464.pem"
	* I0310 21:18:02.814583    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8464.pem
	* I0310 21:18:02.877937    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:32 /usr/share/ca-certificates/8464.pem
	* I0310 21:18:02.890695    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8464.pem
	* I0310 21:18:03.065217    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8464.pem /etc/ssl/certs/51391683.0"
	* I0310 21:18:03.303761    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4588.pem && ln -fs /usr/share/ca-certificates/4588.pem /etc/ssl/certs/4588.pem"
	* I0310 21:18:03.518551    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4588.pem
	* I0310 21:18:03.632581    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  3 21:41 /usr/share/ca-certificates/4588.pem
	* I0310 21:18:03.639010    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4588.pem
	* I0310 21:18:03.750214    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4588.pem /etc/ssl/certs/51391683.0"
	* I0310 21:18:04.062806    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6496.pem && ln -fs /usr/share/ca-certificates/6496.pem /etc/ssl/certs/6496.pem"
	* I0310 21:18:06.503629   18444 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	* I0310 21:18:04.342314    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6496.pem
	* I0310 21:18:04.418886    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 19:16 /usr/share/ca-certificates/6496.pem
	* I0310 21:18:04.436546    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6496.pem
	* I0310 21:18:04.661557    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6496.pem /etc/ssl/certs/51391683.0"
	* I0310 21:18:04.821368    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/800.pem && ln -fs /usr/share/ca-certificates/800.pem /etc/ssl/certs/800.pem"
	* I0310 21:18:05.141582    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/800.pem
	* I0310 21:18:05.216180    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 24 01:48 /usr/share/ca-certificates/800.pem
	* I0310 21:18:05.229390    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/800.pem
	* I0310 21:18:05.330765    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/800.pem /etc/ssl/certs/51391683.0"
	* I0310 21:18:05.421064    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9520.pem && ln -fs /usr/share/ca-certificates/9520.pem /etc/ssl/certs/9520.pem"
	* I0310 21:18:05.689753    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9520.pem
	* I0310 21:18:05.728861    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 14:54 /usr/share/ca-certificates/9520.pem
	* I0310 21:18:05.741344    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9520.pem
	* I0310 21:18:05.832075    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9520.pem /etc/ssl/certs/51391683.0"
	* I0310 21:18:05.985696    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1140.pem && ln -fs /usr/share/ca-certificates/1140.pem /etc/ssl/certs/1140.pem"
	* I0310 21:18:06.138820    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1140.pem
	* I0310 21:18:06.179160    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 02:25 /usr/share/ca-certificates/1140.pem
	* I0310 21:18:06.192551    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1140.pem
	* I0310 21:18:06.263571    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1140.pem /etc/ssl/certs/51391683.0"
	* I0310 21:18:06.454135    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3920.pem && ln -fs /usr/share/ca-certificates/3920.pem /etc/ssl/certs/3920.pem"
	* I0310 21:18:06.601270    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3920.pem
	* I0310 21:18:06.656614    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 22:06 /usr/share/ca-certificates/3920.pem
	* I0310 21:18:06.676741    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3920.pem
	* I0310 21:18:06.816492    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3920.pem /etc/ssl/certs/51391683.0"
	* I0310 21:18:06.946813    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6856.pem && ln -fs /usr/share/ca-certificates/6856.pem /etc/ssl/certs/6856.pem"
	* I0310 21:18:07.091158    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6856.pem
	* I0310 21:18:07.144633    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 00:21 /usr/share/ca-certificates/6856.pem
	* I0310 21:18:07.155767    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6856.pem
	* I0310 21:18:07.282997    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6856.pem /etc/ssl/certs/51391683.0"
	* I0310 21:18:07.566691    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7432.pem && ln -fs /usr/share/ca-certificates/7432.pem /etc/ssl/certs/7432.pem"
	* I0310 21:18:07.844168    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7432.pem
	* I0310 21:18:07.904456    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 17:58 /usr/share/ca-certificates/7432.pem
	* I0310 21:18:07.917738    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7432.pem
	* I0310 21:18:08.131298    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7432.pem /etc/ssl/certs/51391683.0"
	* I0310 21:18:08.278010    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9088.pem && ln -fs /usr/share/ca-certificates/9088.pem /etc/ssl/certs/9088.pem"
	* I0310 21:18:08.502342    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9088.pem
	* I0310 21:18:08.624227    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 00:22 /usr/share/ca-certificates/9088.pem
	* I0310 21:18:08.645199    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9088.pem
	* I0310 21:18:08.925557    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9088.pem /etc/ssl/certs/51391683.0"
	* I0310 21:18:09.081846    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4052.pem && ln -fs /usr/share/ca-certificates/4052.pem /etc/ssl/certs/4052.pem"
	* I0310 21:18:09.193190    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4052.pem
	* I0310 21:18:09.282968    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 18:40 /usr/share/ca-certificates/4052.pem
	* I0310 21:18:09.290182    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4052.pem
	* I0310 21:18:09.516745    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4052.pem /etc/ssl/certs/51391683.0"
	* I0310 21:18:09.834587    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12056.pem && ln -fs /usr/share/ca-certificates/12056.pem /etc/ssl/certs/12056.pem"
	* I0310 21:18:10.002129    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/12056.pem
	* I0310 21:18:10.117049    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  6 07:21 /usr/share/ca-certificates/12056.pem
	* I0310 21:18:10.121052    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12056.pem
	* I0310 21:18:10.205943    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12056.pem /etc/ssl/certs/51391683.0"
	* I0310 21:18:10.371454    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5172.pem && ln -fs /usr/share/ca-certificates/5172.pem /etc/ssl/certs/5172.pem"
	* I0310 21:18:10.530599    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5172.pem
	* I0310 21:18:10.591304    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 26 21:25 /usr/share/ca-certificates/5172.pem
	* I0310 21:18:10.600921    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5172.pem
	* I0310 21:18:10.768468    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5172.pem /etc/ssl/certs/51391683.0"
	* I0310 21:18:11.036933    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2512.pem && ln -fs /usr/share/ca-certificates/2512.pem /etc/ssl/certs/2512.pem"
	* I0310 21:18:11.221494    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2512.pem
	* I0310 21:18:11.314385    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:32 /usr/share/ca-certificates/2512.pem
	* I0310 21:18:11.322949    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2512.pem
	* I0310 21:18:11.481690    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2512.pem /etc/ssl/certs/51391683.0"
	* I0310 21:18:11.717194    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/352.pem && ln -fs /usr/share/ca-certificates/352.pem /etc/ssl/certs/352.pem"
	* I0310 21:18:11.949359    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/352.pem
	* I0310 21:18:12.081537    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 12 14:51 /usr/share/ca-certificates/352.pem
	* I0310 21:18:12.095427    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/352.pem
	* I0310 21:18:12.231820    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/352.pem /etc/ssl/certs/51391683.0"
	* I0310 21:18:12.372967    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6692.pem && ln -fs /usr/share/ca-certificates/6692.pem /etc/ssl/certs/6692.pem"
	* I0310 21:18:12.542156    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6692.pem
	* I0310 21:18:12.610908    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 20:42 /usr/share/ca-certificates/6692.pem
	* I0310 21:18:12.620507    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6692.pem
	* I0310 21:18:12.689030    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6692.pem /etc/ssl/certs/51391683.0"
	* I0310 21:18:12.827957    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10992.pem && ln -fs /usr/share/ca-certificates/10992.pem /etc/ssl/certs/10992.pem"
	* I0310 21:18:12.971893    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/10992.pem
	* I0310 21:18:13.064924    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 21:44 /usr/share/ca-certificates/10992.pem
	* I0310 21:18:13.081842    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10992.pem
	* I0310 21:18:13.271099    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10992.pem /etc/ssl/certs/51391683.0"
	* I0310 21:18:13.458684    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1156.pem && ln -fs /usr/share/ca-certificates/1156.pem /etc/ssl/certs/1156.pem"
	* I0310 21:18:13.665559    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1156.pem
	* I0310 21:18:13.738200    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 00:26 /usr/share/ca-certificates/1156.pem
	* I0310 21:18:13.738200    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1156.pem
	* I0310 21:18:13.869034    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1156.pem /etc/ssl/certs/51391683.0"
	* I0310 21:18:13.990128    8732 kubeadm.go:385] StartCluster: {Name:no-preload-20210310204947-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.5-rc.0 ClusterName:no-preload-20210310204947-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.7 Port:8443 KubernetesVersion:v1.20.5-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 21:18:14.006290    8732 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format=
	* I0310 21:18:15.382126    8732 ssh_runner.go:189] Completed: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format=: (1.3754034s)
	* I0310 21:18:15.394627    8732 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	* I0310 21:18:15.568029    8732 kubeadm.go:396] found existing configuration files, will attempt cluster restart
	* I0310 21:18:15.568166    8732 kubeadm.go:594] restartCluster start
	* I0310 21:18:15.582546    8732 ssh_runner.go:149] Run: sudo test -d /data/minikube
	* I0310 21:18:15.664083    8732 kubeadm.go:125] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* I0310 21:18:15.673439    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	* I0310 21:18:16.360732    8732 kubeconfig.go:117] verify returned: extract IP: "no-preload-20210310204947-6496" does not appear in C:\Users\jenkins/.kube/config
	* I0310 21:18:16.362443    8732 kubeconfig.go:128] "no-preload-20210310204947-6496" context is missing from C:\Users\jenkins/.kube/config - will repair!
	* I0310 21:18:16.366049    8732 lock.go:36] WriteFile acquiring C:\Users\jenkins/.kube/config: {Name:mk815881434fcb75cb1a8bc7f2c24cf067660bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 21:18:16.430378    8732 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	* I0310 21:18:16.575352    8732 api_server.go:146] Checking apiserver status ...
	* I0310 21:18:16.586676    8732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* W0310 21:18:16.871151    8732 api_server.go:150] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* I0310 21:18:16.871317    8732 kubeadm.go:573] needs reconfigure: apiserver in state Stopped
	* I0310 21:18:16.871813    8732 kubeadm.go:1042] stopping kube-system containers ...
	* I0310 21:18:16.873966    8732 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format=
	* I0310 21:18:18.011651    8732 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format=: (1.1376863s)
	* I0310 21:18:18.011651    8732 docker.go:261] Stopping containers: [f4f5dad286f7 e63ae4a86183 5e2289334650 3c5021469e90 75bbb8211a3e ba5aace99e81 3e2455bc2954 81a39b1bd4f1 920fc93981c0]
	* I0310 21:18:18.015822    8732 ssh_runner.go:149] Run: docker stop f4f5dad286f7 e63ae4a86183 5e2289334650 3c5021469e90 75bbb8211a3e ba5aace99e81 3e2455bc2954 81a39b1bd4f1 920fc93981c0
	* I0310 21:18:18.816664    8732 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	* I0310 21:18:17.002856   18444 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	* I0310 21:18:19.284354    8732 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	* I0310 21:18:19.592163    8732 kubeadm.go:153] found existing configuration files:
	* -rw------- 1 root root 5611 Mar 10 21:07 /etc/kubernetes/admin.conf
	* -rw------- 1 root root 5630 Mar 10 21:07 /etc/kubernetes/controller-manager.conf
	* -rw------- 1 root root 5763 Mar 10 21:07 /etc/kubernetes/kubelet.conf
	* -rw------- 1 root root 5578 Mar 10 21:07 /etc/kubernetes/scheduler.conf
	* 
	* I0310 21:18:19.601745    8732 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	* I0310 21:18:19.738385    8732 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	* I0310 21:18:19.901348    8732 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	* I0310 21:18:20.075659    8732 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* I0310 21:18:20.085927    8732 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	* I0310 21:18:20.283137    8732 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	* I0310 21:18:20.509954    8732 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* I0310 21:18:20.522107    8732 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	* I0310 21:18:20.672365    8732 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	* I0310 21:18:20.794925    8732 kubeadm.go:670] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	* I0310 21:18:20.795305    8732 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	* I0310 21:18:26.488531    8732 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml": (5.693234s)
	* I0310 21:18:26.488830    8732 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	* I0310 21:18:27.507153   18444 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	* I0310 21:18:37.510872   18444 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	* I0310 21:18:43.367921    8732 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (16.8789734s)
	* I0310 21:18:43.370172    8732 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	* I0310 21:18:44.676163    7648 docker.go:388] Took 102.428911 seconds to copy over tarball
	* I0310 21:18:44.686842    7648 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	
-- /stdout --
** stderr ** 
	E0310 21:18:00.213990    9056 logs.go:183] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	E0310 21:18:07.212715    9056 out.go:340] unable to execute * 2021-03-10 21:17:17.073375 W | etcdserver: request "header:<ID:12691275820038656336 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.17.0.3\" mod_revision:0 > success:<request_put:<key:\"/registry/masterleases/172.17.0.3\" value_size:65 lease:3467903783183880526 >> failure:<request_range:<key:\"/registry/masterleases/172.17.0.3\" > >>" with result "size:16" took too long (251.6557ms) to execute
	: html/template:* 2021-03-10 21:17:17.073375 W | etcdserver: request "header:<ID:12691275820038656336 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.17.0.3\" mod_revision:0 > success:<request_put:<key:\"/registry/masterleases/172.17.0.3\" value_size:65 lease:3467903783183880526 >> failure:<request_range:<key:\"/registry/masterleases/172.17.0.3\" > >>" with result "size:16" took too long (251.6557ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 21:18:07.228683    9056 out.go:340] unable to execute * 2021-03-10 21:17:19.424352 W | etcdserver: request "header:<ID:12691275820038656341 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-controller-manager-old-k8s-version-20210310204459-6496.166b173f44c9cd74\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-controller-manager-old-k8s-version-20210310204459-6496.166b173f44c9cd74\" value_size:478 lease:3467903783183880476 >> failure:<>>" with result "size:16" took too long (413.846ms) to execute
	: html/template:* 2021-03-10 21:17:19.424352 W | etcdserver: request "header:<ID:12691275820038656341 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-controller-manager-old-k8s-version-20210310204459-6496.166b173f44c9cd74\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-controller-manager-old-k8s-version-20210310204459-6496.166b173f44c9cd74\" value_size:478 lease:3467903783183880476 >> failure:<>>" with result "size:16" took too long (413.846ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 21:18:07.236727    9056 out.go:340] unable to execute * 2021-03-10 21:17:20.407193 W | etcdserver: request "header:<ID:12691275820038656342 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-old-k8s-version-20210310204459-6496\" mod_revision:298 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-old-k8s-version-20210310204459-6496\" value_size:3471 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-old-k8s-version-20210310204459-6496\" > >>" with result "size:16" took too long (241.3489ms) to execute
	: html/template:* 2021-03-10 21:17:20.407193 W | etcdserver: request "header:<ID:12691275820038656342 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-old-k8s-version-20210310204459-6496\" mod_revision:298 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-old-k8s-version-20210310204459-6496\" value_size:3471 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-old-k8s-version-20210310204459-6496\" > >>" with result "size:16" took too long (241.3489ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 21:18:07.261904    9056 out.go:340] unable to execute * 2021-03-10 21:17:27.383987 W | etcdserver: request "header:<ID:12691275820038656364 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.17.0.3\" mod_revision:324 > success:<request_put:<key:\"/registry/masterleases/172.17.0.3\" value_size:65 lease:3467903783183880554 >> failure:<request_range:<key:\"/registry/masterleases/172.17.0.3\" > >>" with result "size:16" took too long (674.0115ms) to execute
	: html/template:* 2021-03-10 21:17:27.383987 W | etcdserver: request "header:<ID:12691275820038656364 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.17.0.3\" mod_revision:324 > success:<request_put:<key:\"/registry/masterleases/172.17.0.3\" value_size:65 lease:3467903783183880554 >> failure:<request_range:<key:\"/registry/masterleases/172.17.0.3\" > >>" with result "size:16" took too long (674.0115ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 21:18:07.284833    9056 out.go:340] unable to execute * 2021-03-10 21:17:28.159893 W | etcdserver: request "header:<ID:12691275820038656368 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-controller-manager-old-k8s-version-20210310204459-6496.166b17415877b244\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-controller-manager-old-k8s-version-20210310204459-6496.166b17415877b244\" value_size:478 lease:3467903783183880476 >> failure:<>>" with result "size:16" took too long (599.7043ms) to execute
	: html/template:* 2021-03-10 21:17:28.159893 W | etcdserver: request "header:<ID:12691275820038656368 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-controller-manager-old-k8s-version-20210310204459-6496.166b17415877b244\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-controller-manager-old-k8s-version-20210310204459-6496.166b17415877b244\" value_size:478 lease:3467903783183880476 >> failure:<>>" with result "size:16" took too long (599.7043ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 21:18:45.269722    9056 out.go:340] unable to execute * I0310 21:16:33.014798    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:16:33.014798    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:16:33.014798    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:18:45.290756    9056 out.go:335] unable to parse "* I0310 21:16:33.698382    7648 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55188 <nil> <nil>}\n": template: * I0310 21:16:33.698382    7648 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55188 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:18:45.325282    9056 out.go:340] unable to execute * I0310 21:16:34.790353    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:16:34.790353    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:16:34.790353    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:18:45.336137    9056 out.go:335] unable to parse "* I0310 21:16:35.529070    7648 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55188 <nil> <nil>}\n": template: * I0310 21:16:35.529070    7648 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55188 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:18:45.534144    9056 out.go:335] unable to parse "* I0310 21:16:35.342766    8732 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 21:16:35.342766    8732 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 21:18:45.540385    9056 out.go:335] unable to parse "* I0310 21:16:36.447271    8732 cli_runner.go:168] Completed: docker system info --format \"{{json .}}\": (1.1045069s)\n": template: * I0310 21:16:36.447271    8732 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.1045069s)
	:1: function "json" not defined - returning raw string.
	E0310 21:18:45.746700    9056 out.go:335] unable to parse "* I0310 21:16:37.582610    8732 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 21:16:37.582610    8732 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 21:18:45.752973    9056 out.go:335] unable to parse "* I0310 21:16:38.599163    8732 cli_runner.go:168] Completed: docker system info --format \"{{json .}}\": (1.0164175s)\n": template: * I0310 21:16:38.599163    8732 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0164175s)
	:1: function "json" not defined - returning raw string.
	E0310 21:18:45.787279    9056 out.go:340] unable to execute * I0310 21:16:33.833066   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	: template: * I0310 21:16:33.833066   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	:1:96: executing "* I0310 21:16:33.833066   16712 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" calico-20210310211603-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:18:45.801214    9056 out.go:335] unable to parse "* I0310 21:16:34.536228   16712 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55193 <nil> <nil>}\n": template: * I0310 21:16:34.536228   16712 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55193 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:18:46.261799    9056 out.go:340] unable to execute * I0310 21:16:36.721035    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:16:36.721035    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:16:36.721035    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:18:46.275150    9056 out.go:335] unable to parse "* I0310 21:16:37.370362    7648 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55188 <nil> <nil>}\n": template: * I0310 21:16:37.370362    7648 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55188 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:18:46.479302    9056 out.go:340] unable to execute * I0310 21:16:40.519859   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	: template: * I0310 21:16:40.519859   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	:1:96: executing "* I0310 21:16:40.519859   16712 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" calico-20210310211603-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:18:46.491679    9056 out.go:335] unable to parse "* I0310 21:16:41.141051   16712 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55193 <nil> <nil>}\n": template: * I0310 21:16:41.141051   16712 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55193 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:18:46.623879    9056 out.go:340] unable to execute * I0310 21:16:42.932919   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	: template: * I0310 21:16:42.932919   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	:1:96: executing "* I0310 21:16:42.932919   16712 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" calico-20210310211603-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:18:46.667767    9056 out.go:340] unable to execute * I0310 21:16:45.688401   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	: template: * I0310 21:16:45.688401   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	:1:96: executing "* I0310 21:16:45.688401   16712 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" calico-20210310211603-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:18:46.683769    9056 out.go:335] unable to parse "* I0310 21:16:46.318821   16712 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55193 <nil> <nil>}\n": template: * I0310 21:16:46.318821   16712 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55193 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:18:46.713246    9056 out.go:340] unable to execute * I0310 21:16:46.993797   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	: template: * I0310 21:16:46.993797   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	:1:96: executing "* I0310 21:16:46.993797   16712 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" calico-20210310211603-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:18:46.730695    9056 out.go:335] unable to parse "* I0310 21:16:47.671142   16712 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55193 <nil> <nil>}\n": template: * I0310 21:16:47.671142   16712 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55193 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:18:47.093343    9056 out.go:340] unable to execute * I0310 21:16:48.550988   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	: template: * I0310 21:16:48.550988   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	:1:96: executing "* I0310 21:16:48.550988   16712 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" calico-20210310211603-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:18:47.114315    9056 out.go:340] unable to execute * I0310 21:16:45.044892    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	: template: * I0310 21:16:45.044892    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	:1:96: executing "* I0310 21:16:45.044892    8732 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" no-preload-20210310204947-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:18:47.132442    9056 out.go:335] unable to parse "* I0310 21:16:45.694402    8732 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55198 <nil> <nil>}\n": template: * I0310 21:16:45.694402    8732 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55198 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:18:47.191225    9056 out.go:335] unable to parse "* I0310 21:16:49.188999   16712 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55193 <nil> <nil>}\n": template: * I0310 21:16:49.188999   16712 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55193 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:18:47.210451    9056 out.go:340] unable to execute * I0310 21:16:52.814403    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	: template: * I0310 21:16:52.814403    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	:1:96: executing "* I0310 21:16:52.814403    8732 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" no-preload-20210310204947-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:18:47.219228    9056 out.go:335] unable to parse "* I0310 21:16:53.442200    8732 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55198 <nil> <nil>}\n": template: * I0310 21:16:53.442200    8732 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55198 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:18:47.565813    9056 out.go:340] unable to execute * I0310 21:16:51.522127    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:16:51.522127    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:16:51.522127    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:18:47.632141    9056 out.go:340] unable to execute * I0310 21:16:53.881015    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:16:53.881015    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:16:53.881015    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:18:47.655843    9056 out.go:340] unable to execute * I0310 21:16:55.436603    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:16:55.436603    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:16:55.436603    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:18:47.664838    9056 out.go:340] unable to execute * I0310 21:16:55.442682    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:16:55.442682    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:16:55.442682    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:18:47.757225    9056 out.go:340] unable to execute * I0310 21:16:55.595491    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	: template: * I0310 21:16:55.595491    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	:1:96: executing "* I0310 21:16:55.595491    8732 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" no-preload-20210310204947-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:18:47.783220    9056 out.go:340] unable to execute * I0310 21:16:57.801250    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	: template: * I0310 21:16:57.801250    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	:1:96: executing "* I0310 21:16:57.801250    8732 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" no-preload-20210310204947-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:18:47.794218    9056 out.go:335] unable to parse "* I0310 21:16:58.390912    8732 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55198 <nil> <nil>}\n": template: * I0310 21:16:58.390912    8732 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55198 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:18:47.823897    9056 out.go:340] unable to execute * I0310 21:16:59.103141    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	: template: * I0310 21:16:59.103141    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	:1:96: executing "* I0310 21:16:59.103141    8732 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" no-preload-20210310204947-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:18:47.901215    9056 out.go:340] unable to execute * I0310 21:17:00.771471    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:17:00.771471    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:17:00.771471    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"8443/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "8443/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:18:48.276880    9056 out.go:340] unable to execute * I0310 21:17:01.690857   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	: template: * I0310 21:17:01.690857   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	:1:96: executing "* I0310 21:17:01.690857   16712 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" calico-20210310211603-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:18:48.348631    9056 out.go:335] unable to parse "* I0310 21:16:59.719292    8732 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55198 <nil> <nil>}\n": template: * I0310 21:16:59.719292    8732 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55198 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:18:48.768475    9056 out.go:340] unable to execute * I0310 21:17:00.821329    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	: template: * I0310 21:17:00.821329    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	:1:96: executing "* I0310 21:17:00.821329    8732 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" no-preload-20210310204947-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:18:48.776488    9056 out.go:335] unable to parse "* I0310 21:17:01.403160    8732 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55198 <nil> <nil>}\n": template: * I0310 21:17:01.403160    8732 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55198 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:18:48.800572    9056 out.go:340] unable to execute * I0310 21:17:02.712857    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	: template: * I0310 21:17:02.712857    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	:1:96: executing "* I0310 21:17:02.712857    8732 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" no-preload-20210310204947-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:18:48.975001    9056 out.go:340] unable to execute * I0310 21:17:04.739995   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	: template: * I0310 21:17:04.739995   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	:1:96: executing "* I0310 21:17:04.739995   16712 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" calico-20210310211603-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:18:49.000308    9056 out.go:340] unable to execute * I0310 21:17:06.734852   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	: template: * I0310 21:17:06.734852   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	:1:96: executing "* I0310 21:17:06.734852   16712 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" calico-20210310211603-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:18:49.015832    9056 out.go:340] unable to execute * I0310 21:17:06.745255   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	: template: * I0310 21:17:06.745255   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	:1:96: executing "* I0310 21:17:06.745255   16712 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" calico-20210310211603-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:18:49.069672    9056 out.go:340] unable to execute * I0310 21:17:05.848333    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	: template: * I0310 21:17:05.848333    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	:1:96: executing "* I0310 21:17:05.848333    8732 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" no-preload-20210310204947-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:18:49.111588    9056 out.go:340] unable to execute * I0310 21:17:07.856007    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	: template: * I0310 21:17:07.856007    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	:1:96: executing "* I0310 21:17:07.856007    8732 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" no-preload-20210310204947-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:18:49.120967    9056 out.go:340] unable to execute * I0310 21:17:07.869292    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	: template: * I0310 21:17:07.869292    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	:1:96: executing "* I0310 21:17:07.869292    8732 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" no-preload-20210310204947-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:18:49.285237    9056 out.go:340] unable to execute * I0310 21:17:15.419609   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20210310211603-6496
	: template: * I0310 21:17:15.419609   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20210310211603-6496
	:1:96: executing "* I0310 21:17:15.419609   16712 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"8443/tcp\") 0).HostPort}}'\" calico-20210310211603-6496\n" at <index .NetworkSettings.Ports "8443/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:18:49.373593    9056 out.go:340] unable to execute * I0310 21:17:18.709478    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	: template: * I0310 21:17:18.709478    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	:1:96: executing "* I0310 21:17:18.709478    8732 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"8443/tcp\") 0).HostPort}}'\" no-preload-20210310204947-6496\n" at <index .NetworkSettings.Ports "8443/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:18:51.227594    9056 out.go:340] unable to execute * I0310 21:18:15.673439    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	: template: * I0310 21:18:15.673439    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	:1:96: executing "* I0310 21:18:15.673439    8732 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"8443/tcp\") 0).HostPort}}'\" no-preload-20210310204947-6496\n" at <index .NetworkSettings.Ports "8443/tcp">: error calling index: index of untyped nil - returning raw string.
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:245: failed logs error: exit status 110
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (970.01s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (1111.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-20210310211113-6496 --memory=1800 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:80: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p auto-20210310211113-6496 --memory=1800 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker: exit status 109 (18m30.6903978s)

                                                
                                                
-- stdout --
	* [auto-20210310211113-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - MINIKUBE_LOCATION=10722
	* Using the docker driver based on user configuration
	
	
	* Starting control plane node auto-20210310211113-6496 in cluster auto-20210310211113-6496
	* Creating docker container (CPUs=2, Memory=1800MB) ...
	* Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0310 21:11:14.293959   12868 out.go:239] Setting OutFile to fd 2832 ...
	I0310 21:11:14.295935   12868 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 21:11:14.295935   12868 out.go:252] Setting ErrFile to fd 1732...
	I0310 21:11:14.295935   12868 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 21:11:14.311007   12868 out.go:246] Setting JSON to false
	I0310 21:11:14.313649   12868 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":36140,"bootTime":1615374534,"procs":116,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0310 21:11:14.313649   12868 start.go:116] gopshost.Virtualization returned error: not implemented yet
	I0310 21:11:14.318227   12868 out.go:129] * [auto-20210310211113-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0310 21:11:14.321152   12868 out.go:129]   - MINIKUBE_LOCATION=10722
	I0310 21:11:14.326701   12868 driver.go:323] Setting default libvirt URI to qemu:///system
	I0310 21:11:14.853021   12868 docker.go:119] docker version: linux-20.10.2
	I0310 21:11:14.866399   12868 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 21:11:15.827546   12868 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:97 OomKillDisable:true NGoroutines:80 SystemTime:2021-03-10 21:11:15.3891627 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://ind
ex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[]
ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 21:11:15.831579   12868 out.go:129] * Using the docker driver based on user configuration
	I0310 21:11:15.831579   12868 start.go:276] selected driver: docker
	I0310 21:11:15.832021   12868 start.go:718] validating driver "docker" against <nil>
	I0310 21:11:15.832021   12868 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0310 21:11:17.750680   12868 out.go:129] 
	W0310 21:11:17.751207   12868 out.go:191] X Docker Desktop only has 20001MiB available, you may encounter application deployment failures.
	X Docker Desktop only has 20001MiB available, you may encounter application deployment failures.
	W0310 21:11:17.751638   12868 out.go:191] * Suggestion: 
	
	    1. Open the "Docker Desktop" menu by clicking the Docker icon in the system tray
	    2. Click "Settings"
	    3. Click "Resources"
	    4. Increase "Memory" slider bar to 2.25 GB or higher
	    5. Click "Apply & Restart"
	* Suggestion: 
	
	    1. Open the "Docker Desktop" menu by clicking the Docker icon in the system tray
	    2. Click "Settings"
	    3. Click "Resources"
	    4. Increase "Memory" slider bar to 2.25 GB or higher
	    5. Click "Apply & Restart"
	W0310 21:11:17.751638   12868 out.go:191] * Documentation: https://docs.docker.com/docker-for-windows/#resources
	* Documentation: https://docs.docker.com/docker-for-windows/#resources
	I0310 21:11:17.754893   12868 out.go:129] 
	I0310 21:11:17.768733   12868 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 21:11:18.741398   12868 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:97 OomKillDisable:true NGoroutines:80 SystemTime:2021-03-10 21:11:18.3067332 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://ind
ex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[]
ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 21:11:18.742194   12868 start_flags.go:253] no existing cluster config was found, will generate one from the flags 
	I0310 21:11:18.742530   12868 start_flags.go:717] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0310 21:11:18.742530   12868 cni.go:74] Creating CNI manager for ""
	I0310 21:11:18.742530   12868 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	I0310 21:11:18.742530   12868 start_flags.go:398] config:
	{Name:auto-20210310211113-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:auto-20210310211113-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkP
lugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 21:11:18.745763   12868 out.go:129] * Starting control plane node auto-20210310211113-6496 in cluster auto-20210310211113-6496
	I0310 21:11:19.394179   12868 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	I0310 21:11:19.394607   12868 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	I0310 21:11:19.394607   12868 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	I0310 21:11:19.395018   12868 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	I0310 21:11:19.395018   12868 cache.go:54] Caching tarball of preloaded images
	I0310 21:11:19.395405   12868 preload.go:131] Found C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0310 21:11:19.395405   12868 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on docker
	I0310 21:11:19.395724   12868 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\auto-20210310211113-6496\config.json ...
	I0310 21:11:19.397974   12868 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\auto-20210310211113-6496\config.json: {Name:mk78dcdd20b3cdf12c117dbadf568d644410e084 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:11:19.413960   12868 cache.go:185] Successfully downloaded all kic artifacts
	I0310 21:11:19.414401   12868 start.go:313] acquiring machines lock for auto-20210310211113-6496: {Name:mkdfc0cfe1be3702f6537c64d8e4f147bb34823f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:11:19.415310   12868 start.go:317] acquired machines lock for "auto-20210310211113-6496" in 363.6µs
	I0310 21:11:19.415310   12868 start.go:89] Provisioning new machine with config: &{Name:auto-20210310211113-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:auto-20210310211113-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
	I0310 21:11:19.415310   12868 start.go:126] createHost starting for "" (driver="docker")
	I0310 21:11:19.418615   12868 out.go:150] * Creating docker container (CPUs=2, Memory=1800MB) ...
	I0310 21:11:19.419317   12868 start.go:160] libmachine.API.Create for "auto-20210310211113-6496" (driver="docker")
	I0310 21:11:19.419608   12868 client.go:168] LocalClient.Create starting
	I0310 21:11:19.419953   12868 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\ca.pem
	I0310 21:11:19.420691   12868 main.go:121] libmachine: Decoding PEM data...
	I0310 21:11:19.420691   12868 main.go:121] libmachine: Parsing certificate...
	I0310 21:11:19.421342   12868 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\cert.pem
	I0310 21:11:19.421618   12868 main.go:121] libmachine: Decoding PEM data...
	I0310 21:11:19.421618   12868 main.go:121] libmachine: Parsing certificate...
	I0310 21:11:19.444715   12868 cli_runner.go:115] Run: docker network inspect auto-20210310211113-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0310 21:11:20.037265   12868 cli_runner.go:162] docker network inspect auto-20210310211113-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0310 21:11:20.046274   12868 network_create.go:240] running [docker network inspect auto-20210310211113-6496] to gather additional debugging logs...
	I0310 21:11:20.046274   12868 cli_runner.go:115] Run: docker network inspect auto-20210310211113-6496
	W0310 21:11:20.642100   12868 cli_runner.go:162] docker network inspect auto-20210310211113-6496 returned with exit code 1
	I0310 21:11:20.642100   12868 network_create.go:243] error running [docker network inspect auto-20210310211113-6496]: docker network inspect auto-20210310211113-6496: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20210310211113-6496
	I0310 21:11:20.642565   12868 network_create.go:245] output of [docker network inspect auto-20210310211113-6496]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20210310211113-6496
	
	** /stderr **
	I0310 21:11:20.650583   12868 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0310 21:11:21.286851   12868 network.go:193] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0310 21:11:21.286851   12868 network_create.go:91] attempt to create network 192.168.49.0/24 with subnet: auto-20210310211113-6496 and gateway 192.168.49.1 and MTU of 1500 ...
	I0310 21:11:21.293847   12868 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20210310211113-6496
	W0310 21:11:21.877213   12868 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20210310211113-6496 returned with exit code 1
	W0310 21:11:21.877213   12868 out.go:191] ! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts
	! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts
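Before the `docker network create` attempt above fails, `network.go` scans for a free private subnet and settles on 192.168.49.0/24. A simplified sketch of that scan, assuming a fixed candidate list starting at 192.168.49.0/24 and stepping the third octet by 9 (the log only shows the first pick; the candidate spacing is an assumption here):

```go
package main

import "fmt"

// pickFreeSubnet returns the first candidate /24 not already claimed by an
// existing Docker network. The used set stands in for the subnets gathered
// from `docker network inspect` on the host's existing networks.
func pickFreeSubnet(used map[string]bool) string {
	for third := 49; third < 256; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !used[cidr] {
			return cidr
		}
	}
	return "" // no free candidate
}

func main() {
	fmt.Println(pickFreeSubnet(nil)) // 192.168.49.0/24, as in the log
	fmt.Println(pickFreeSubnet(map[string]bool{"192.168.49.0/24": true}))
}
```

In this run the create still fails after the subnet is chosen, so minikube warns and falls back to the default bridge network, which is why the cluster IP may change after a restart.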
	I0310 21:11:21.891599   12868 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0310 21:11:22.544881   12868 cli_runner.go:115] Run: docker volume create auto-20210310211113-6496 --label name.minikube.sigs.k8s.io=auto-20210310211113-6496 --label created_by.minikube.sigs.k8s.io=true
	I0310 21:11:23.174908   12868 oci.go:102] Successfully created a docker volume auto-20210310211113-6496
	I0310 21:11:23.184367   12868 cli_runner.go:115] Run: docker run --rm --name auto-20210310211113-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20210310211113-6496 --entrypoint /usr/bin/test -v auto-20210310211113-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib
	I0310 21:11:26.910195   12868 cli_runner.go:168] Completed: docker run --rm --name auto-20210310211113-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20210310211113-6496 --entrypoint /usr/bin/test -v auto-20210310211113-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib: (3.7252355s)
	I0310 21:11:26.910195   12868 oci.go:106] Successfully prepared a docker volume auto-20210310211113-6496
	I0310 21:11:26.910447   12868 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	I0310 21:11:26.910810   12868 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	I0310 21:11:26.910810   12868 kic.go:175] Starting extracting preloaded images to volume ...
	I0310 21:11:26.920297   12868 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 21:11:26.920678   12868 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20210310211113-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir
	W0310 21:11:27.623841   12868 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20210310211113-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I0310 21:11:27.624644   12868 kic.go:182] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20210310211113-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: [binary-serialized System.Exception payload]
	The notification platform is unavailable.
	���?   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)
	   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.WPF\PromptShareDirectory.cs:line 53
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__8.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 95
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 55
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.HttpApi\Controllers\FilesharingController.cs:line 21
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()
	CreateToastNotifier
	Windows.UI, Version=255.255.255.255, Culture=neutral, PublicKeyToken=null, ContentType=WindowsRuntime
	Windows.UI.Notifications.ToastNotificationManager
	Windows.UI.Notifications.ToastNotifier CreateToastNotifier(System.String)
	RestrictedDescription: The notification platform is unavailable.
	[remainder of the binary-serialized exception payload omitted]
	See 'docker run --help'.
	I0310 21:11:27.904162   12868 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:98 OomKillDisable:true NGoroutines:80 SystemTime:2021-03-10 21:11:27.4826367 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://ind
ex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[]
ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 21:11:27.914552   12868 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0310 21:11:28.883413   12868 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20210310211113-6496 --name auto-20210310211113-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20210310211113-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20210310211113-6496 --volume auto-20210310211113-6496:/var --security-opt apparmor=unconfined --memory=1800mb --memory-swap=1800mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
	I0310 21:11:32.211049   12868 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20210310211113-6496 --name auto-20210310211113-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20210310211113-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20210310211113-6496 --volume auto-20210310211113-6496:/var --security-opt apparmor=unconfined --memory=1800mb --memory-swap=1800mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e: (3.3270377s)
	I0310 21:11:32.214863   12868 cli_runner.go:115] Run: docker container inspect auto-20210310211113-6496 --format={{.State.Running}}
	I0310 21:11:32.899719   12868 cli_runner.go:115] Run: docker container inspect auto-20210310211113-6496 --format={{.State.Status}}
	I0310 21:11:33.657688   12868 cli_runner.go:115] Run: docker exec auto-20210310211113-6496 stat /var/lib/dpkg/alternatives/iptables
	I0310 21:11:35.023560   12868 cli_runner.go:168] Completed: docker exec auto-20210310211113-6496 stat /var/lib/dpkg/alternatives/iptables: (1.3657302s)
	I0310 21:11:35.023560   12868 oci.go:278] the created container "auto-20210310211113-6496" has a running status.
	I0310 21:11:35.023560   12868 kic.go:206] Creating ssh key for kic: C:\Users\jenkins\.minikube\machines\auto-20210310211113-6496\id_rsa...
	I0310 21:11:35.213781   12868 kic_runner.go:188] docker (temp): C:\Users\jenkins\.minikube\machines\auto-20210310211113-6496\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0310 21:11:36.570805   12868 cli_runner.go:115] Run: docker container inspect auto-20210310211113-6496 --format={{.State.Status}}
	I0310 21:11:37.246763   12868 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0310 21:11:37.246763   12868 kic_runner.go:115] Args: [docker exec --privileged auto-20210310211113-6496 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0310 21:11:39.331070   12868 kic_runner.go:124] Done: [docker exec --privileged auto-20210310211113-6496 chown docker:docker /home/docker/.ssh/authorized_keys]: (2.0843104s)
	I0310 21:11:39.334018   12868 kic.go:240] ensuring only current user has permissions to key file located at : C:\Users\jenkins\.minikube\machines\auto-20210310211113-6496\id_rsa...
	I0310 21:11:40.462629   12868 cli_runner.go:115] Run: docker container inspect auto-20210310211113-6496 --format={{.State.Status}}
	I0310 21:11:41.258703   12868 machine.go:88] provisioning docker machine ...
	I0310 21:11:41.258703   12868 ubuntu.go:169] provisioning hostname "auto-20210310211113-6496"
	I0310 21:11:41.268939   12868 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210310211113-6496
	I0310 21:11:42.061690   12868 main.go:121] libmachine: Using SSH client type: native
	I0310 21:11:42.083423   12868 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55172 <nil> <nil>}
	I0310 21:11:42.083666   12868 main.go:121] libmachine: About to run SSH command:
	sudo hostname auto-20210310211113-6496 && echo "auto-20210310211113-6496" | sudo tee /etc/hostname
	I0310 21:11:42.100629   12868 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0310 21:11:45.109458   12868 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0310 21:11:48.123191   12868 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0310 21:11:52.324758   12868 main.go:121] libmachine: SSH cmd err, output: <nil>: auto-20210310211113-6496
	
	I0310 21:11:52.332528   12868 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210310211113-6496
	I0310 21:11:52.976887   12868 main.go:121] libmachine: Using SSH client type: native
	I0310 21:11:52.977311   12868 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55172 <nil> <nil>}
	I0310 21:11:52.977311   12868 main.go:121] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-20210310211113-6496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-20210310211113-6496/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-20210310211113-6496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0310 21:11:54.019636   12868 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	I0310 21:11:54.019901   12868 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	I0310 21:11:54.020094   12868 ubuntu.go:177] setting up certificates
	I0310 21:11:54.020094   12868 provision.go:83] configureAuth start
	I0310 21:11:54.029058   12868 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210310211113-6496
	I0310 21:11:54.688299   12868 provision.go:137] copyHostCerts
	I0310 21:11:54.689196   12868 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	I0310 21:11:54.689196   12868 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	I0310 21:11:54.690021   12868 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	I0310 21:11:54.693814   12868 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	I0310 21:11:54.694215   12868 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	I0310 21:11:54.694215   12868 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	I0310 21:11:54.697528   12868 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	I0310 21:11:54.697528   12868 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	I0310 21:11:54.697798   12868 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	I0310 21:11:54.703429   12868 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.auto-20210310211113-6496 san=[172.17.0.5 127.0.0.1 localhost 127.0.0.1 minikube auto-20210310211113-6496]
	I0310 21:11:54.875527   12868 provision.go:165] copyRemoteCerts
	I0310 21:11:54.888245   12868 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0310 21:11:54.895701   12868 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210310211113-6496
	I0310 21:11:55.509611   12868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55172 SSHKeyPath:C:\Users\jenkins\.minikube\machines\auto-20210310211113-6496\id_rsa Username:docker}
	I0310 21:11:56.029834   12868 ssh_runner.go:189] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.1415912s)
	I0310 21:11:56.030549   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0310 21:11:56.640178   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1245 bytes)
	I0310 21:11:57.427910   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0310 21:11:57.805861   12868 provision.go:86] duration metric: configureAuth took 3.7857725s
	I0310 21:11:57.805861   12868 ubuntu.go:193] setting minikube options for container-runtime
	I0310 21:11:57.814151   12868 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210310211113-6496
	I0310 21:11:58.452274   12868 main.go:121] libmachine: Using SSH client type: native
	I0310 21:11:58.453694   12868 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55172 <nil> <nil>}
	I0310 21:11:58.453869   12868 main.go:121] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0310 21:11:59.319635   12868 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0310 21:11:59.319635   12868 ubuntu.go:71] root file system type: overlay
	I0310 21:11:59.320337   12868 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	I0310 21:11:59.329944   12868 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210310211113-6496
	I0310 21:11:59.939039   12868 main.go:121] libmachine: Using SSH client type: native
	I0310 21:11:59.939543   12868 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55172 <nil> <nil>}
	I0310 21:11:59.939796   12868 main.go:121] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0310 21:12:00.666296   12868 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0310 21:12:00.674661   12868 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210310211113-6496
	I0310 21:12:01.284174   12868 main.go:121] libmachine: Using SSH client type: native
	I0310 21:12:01.285027   12868 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55172 <nil> <nil>}
	I0310 21:12:01.285339   12868 main.go:121] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0310 21:12:14.422718   12868 main.go:121] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-01-29 14:31:32.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-03-10 21:12:00.650810000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0310 21:12:14.422996   12868 machine.go:91] provisioned docker machine in 33.1643409s
	I0310 21:12:14.422996   12868 client.go:171] LocalClient.Create took 55.0034691s
	I0310 21:12:14.423386   12868 start.go:168] duration metric: libmachine.API.Create for "auto-20210310211113-6496" took 55.00376s
	I0310 21:12:14.423640   12868 start.go:267] post-start starting for "auto-20210310211113-6496" (driver="docker")
	I0310 21:12:14.423640   12868 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0310 21:12:14.423640   12868 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0310 21:12:14.441022   12868 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210310211113-6496
	I0310 21:12:15.063834   12868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55172 SSHKeyPath:C:\Users\jenkins\.minikube\machines\auto-20210310211113-6496\id_rsa Username:docker}
	I0310 21:12:15.609546   12868 ssh_runner.go:189] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.1859072s)
	I0310 21:12:15.620424   12868 ssh_runner.go:149] Run: cat /etc/os-release
	I0310 21:12:15.668311   12868 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0310 21:12:15.668573   12868 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0310 21:12:15.668925   12868 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0310 21:12:15.669160   12868 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0310 21:12:15.669834   12868 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	I0310 21:12:15.670891   12868 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	I0310 21:12:15.672858   12868 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	I0310 21:12:15.673854   12868 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	I0310 21:12:15.695622   12868 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	I0310 21:12:15.811206   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	I0310 21:12:16.110613   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	I0310 21:12:16.388703   12868 start.go:270] post-start completed in 1.9646517s
	I0310 21:12:16.440425   12868 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210310211113-6496
	I0310 21:12:17.047608   12868 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\auto-20210310211113-6496\config.json ...
	I0310 21:12:17.076574   12868 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0310 21:12:17.083114   12868 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210310211113-6496
	I0310 21:12:17.754460   12868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55172 SSHKeyPath:C:\Users\jenkins\.minikube\machines\auto-20210310211113-6496\id_rsa Username:docker}
	I0310 21:12:18.095963   12868 ssh_runner.go:189] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.0193899s)
	I0310 21:12:18.096305   12868 start.go:129] duration metric: createHost completed in 58.6809885s
	I0310 21:12:18.096305   12868 start.go:80] releasing machines lock for "auto-20210310211113-6496", held for 58.6810805s
	I0310 21:12:18.108786   12868 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210310211113-6496
	I0310 21:12:18.762862   12868 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0310 21:12:18.772292   12868 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210310211113-6496
	I0310 21:12:18.775762   12868 ssh_runner.go:149] Run: systemctl --version
	I0310 21:12:18.785258   12868 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210310211113-6496
	I0310 21:12:20.843906   12868 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210310211113-6496: (2.0716166s)
	I0310 21:12:20.844402   12868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55172 SSHKeyPath:C:\Users\jenkins\.minikube\machines\auto-20210310211113-6496\id_rsa Username:docker}
	I0310 21:12:20.856353   12868 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210310211113-6496: (2.0708336s)
	I0310 21:12:20.856353   12868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55172 SSHKeyPath:C:\Users\jenkins\.minikube\machines\auto-20210310211113-6496\id_rsa Username:docker}
	I0310 21:12:22.049620   12868 ssh_runner.go:189] Completed: systemctl --version: (3.2738629s)
	I0310 21:12:22.051342   12868 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (3.2884847s)
	I0310 21:12:22.068631   12868 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0310 21:12:22.255455   12868 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 21:12:22.410795   12868 cruntime.go:206] skipping containerd shutdown because we are bound to it
	I0310 21:12:22.423121   12868 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0310 21:12:22.594762   12868 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0310 21:12:22.911358   12868 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 21:12:23.060149   12868 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0310 21:12:24.868753   12868 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.8086063s)
	I0310 21:12:24.887863   12868 ssh_runner.go:149] Run: sudo systemctl start docker
	I0310 21:12:25.089797   12868 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0310 21:12:25.916698   12868 out.go:150] * Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	I0310 21:12:25.927224   12868 cli_runner.go:115] Run: docker exec -t auto-20210310211113-6496 dig +short host.docker.internal
	I0310 21:12:27.038899   12868 cli_runner.go:168] Completed: docker exec -t auto-20210310211113-6496 dig +short host.docker.internal: (1.1116763s)
	I0310 21:12:27.039057   12868 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	I0310 21:12:27.052974   12868 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0310 21:12:27.119053   12868 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0310 21:12:27.242397   12868 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" auto-20210310211113-6496
	I0310 21:12:27.840561   12868 localpath.go:92] copying C:\Users\jenkins\.minikube\client.crt -> C:\Users\jenkins\.minikube\profiles\auto-20210310211113-6496\client.crt
	I0310 21:12:27.845296   12868 localpath.go:117] copying C:\Users\jenkins\.minikube\client.key -> C:\Users\jenkins\.minikube\profiles\auto-20210310211113-6496\client.key
	I0310 21:12:27.848457   12868 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	I0310 21:12:27.848457   12868 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	I0310 21:12:27.855700   12868 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 21:12:28.653262   12868 docker.go:423] Got preloaded images: 
	I0310 21:12:28.653408   12868 docker.go:429] k8s.gcr.io/kube-proxy:v1.20.2 wasn't preloaded
	I0310 21:12:28.668871   12868 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0310 21:12:28.796389   12868 ssh_runner.go:149] Run: which lz4
	I0310 21:12:28.849600   12868 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0310 21:12:28.891564   12868 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0310 21:12:28.891852   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (515083977 bytes)
	I0310 21:13:55.365896   12868 docker.go:388] Took 86.535985 seconds to copy over tarball
	I0310 21:13:55.386715   12868 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0310 21:15:16.262099   12868 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1m20.875134s)
	I0310 21:15:16.262099   12868 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0310 21:15:18.035742   12868 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0310 21:15:18.100028   12868 ssh_runner.go:316] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3125 bytes)
	I0310 21:15:18.332605   12868 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0310 21:15:19.532325   12868 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.1989992s)
	I0310 21:15:19.543213   12868 ssh_runner.go:149] Run: sudo systemctl restart docker
	I0310 21:15:32.179001   12868 ssh_runner.go:189] Completed: sudo systemctl restart docker: (12.6358057s)
	I0310 21:15:32.190196   12868 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 21:15:33.535616   12868 ssh_runner.go:189] Completed: docker images --format {{.Repository}}:{{.Tag}}: (1.3454221s)
	I0310 21:15:33.535616   12868 docker.go:423] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.20.2
	k8s.gcr.io/kube-apiserver:v1.20.2
	k8s.gcr.io/kube-controller-manager:v1.20.2
	k8s.gcr.io/kube-scheduler:v1.20.2
	kubernetesui/dashboard:v2.1.0
	gcr.io/k8s-minikube/storage-provisioner:v4
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0310 21:15:33.535616   12868 cache_images.go:73] Images are preloaded, skipping loading
	I0310 21:15:33.554891   12868 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0310 21:15:35.646473   12868 ssh_runner.go:189] Completed: docker info --format {{.CgroupDriver}}: (2.0915849s)
	I0310 21:15:35.646910   12868 cni.go:74] Creating CNI manager for ""
	I0310 21:15:35.646910   12868 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	I0310 21:15:35.647199   12868 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0310 21:15:35.647199   12868 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.5 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-20210310211113-6496 NodeName:auto-20210310211113-6496 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0310 21:15:35.648673   12868 kubeadm.go:154] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "auto-20210310211113-6496"
	  kubeletExtraArgs:
	    node-ip: 172.17.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	
	I0310 21:15:35.649143   12868 kubeadm.go:919] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=auto-20210310211113-6496 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.2 ClusterName:auto-20210310211113-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0310 21:15:35.669620   12868 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
	I0310 21:15:35.785350   12868 binaries.go:44] Found k8s binaries, skipping transfer
	I0310 21:15:35.808369   12868 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0310 21:15:36.051579   12868 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0310 21:15:36.334508   12868 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0310 21:15:36.571343   12868 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1850 bytes)
	I0310 21:15:36.948937   12868 ssh_runner.go:149] Run: grep 172.17.0.5	control-plane.minikube.internal$ /etc/hosts
	I0310 21:15:37.018957   12868 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "172.17.0.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0310 21:15:37.415417   12868 certs.go:52] Setting up C:\Users\jenkins\.minikube\profiles\auto-20210310211113-6496 for IP: 172.17.0.5
	I0310 21:15:37.416452   12868 certs.go:171] skipping minikubeCA CA generation: C:\Users\jenkins\.minikube\ca.key
	I0310 21:15:37.416929   12868 certs.go:171] skipping proxyClientCA CA generation: C:\Users\jenkins\.minikube\proxy-client-ca.key
	I0310 21:15:37.418092   12868 certs.go:275] skipping minikube-user signed cert generation: C:\Users\jenkins\.minikube\profiles\auto-20210310211113-6496\client.key
	I0310 21:15:37.418092   12868 certs.go:279] generating minikube signed cert: C:\Users\jenkins\.minikube\profiles\auto-20210310211113-6496\apiserver.key.38ada39b
	I0310 21:15:37.418523   12868 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\auto-20210310211113-6496\apiserver.crt.38ada39b with IP's: [172.17.0.5 10.96.0.1 127.0.0.1 10.0.0.1]
	I0310 21:15:37.646409   12868 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\auto-20210310211113-6496\apiserver.crt.38ada39b ...
	I0310 21:15:37.646409   12868 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\auto-20210310211113-6496\apiserver.crt.38ada39b: {Name:mkb35805e886cb541085a428c99168f708c1a0fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:15:37.663420   12868 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\auto-20210310211113-6496\apiserver.key.38ada39b ...
	I0310 21:15:37.670461   12868 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\auto-20210310211113-6496\apiserver.key.38ada39b: {Name:mkcedaebd1e745ec68df45bb571832d2d3a8a8a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:15:37.687501   12868 certs.go:290] copying C:\Users\jenkins\.minikube\profiles\auto-20210310211113-6496\apiserver.crt.38ada39b -> C:\Users\jenkins\.minikube\profiles\auto-20210310211113-6496\apiserver.crt
	I0310 21:15:37.689343   12868 certs.go:294] copying C:\Users\jenkins\.minikube\profiles\auto-20210310211113-6496\apiserver.key.38ada39b -> C:\Users\jenkins\.minikube\profiles\auto-20210310211113-6496\apiserver.key
	I0310 21:15:37.689343   12868 certs.go:279] generating aggregator signed cert: C:\Users\jenkins\.minikube\profiles\auto-20210310211113-6496\proxy-client.key
	I0310 21:15:37.689343   12868 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\auto-20210310211113-6496\proxy-client.crt with IP's: []
	I0310 21:15:37.834347   12868 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\auto-20210310211113-6496\proxy-client.crt ...
	I0310 21:15:37.834347   12868 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\auto-20210310211113-6496\proxy-client.crt: {Name:mk48d344d37887fa8e8694d5bb474fea16d5ebd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:15:37.848369   12868 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\auto-20210310211113-6496\proxy-client.key ...
	I0310 21:15:37.848369   12868 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\auto-20210310211113-6496\proxy-client.key: {Name:mk95b25a803d0538fb69368bb1b141c1b7e1c177 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:15:37.868380   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992.pem (1338 bytes)
	W0310 21:15:37.868380   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.869374   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140.pem (1338 bytes)
	W0310 21:15:37.869374   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.869374   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156.pem (1338 bytes)
	W0310 21:15:37.869374   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.870402   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056.pem (1338 bytes)
	W0310 21:15:37.870402   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.870402   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476.pem (1338 bytes)
	W0310 21:15:37.870402   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.871403   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728.pem (1338 bytes)
	W0310 21:15:37.871403   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.871403   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984.pem (1338 bytes)
	W0310 21:15:37.871403   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.871403   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232.pem (1338 bytes)
	W0310 21:15:37.872405   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.872405   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512.pem (1338 bytes)
	W0310 21:15:37.872405   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.872405   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056.pem (1338 bytes)
	W0310 21:15:37.873480   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.873480   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516.pem (1338 bytes)
	W0310 21:15:37.873480   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.873480   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352.pem (1338 bytes)
	W0310 21:15:37.874365   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.874365   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920.pem (1338 bytes)
	W0310 21:15:37.874365   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.874365   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052.pem (1338 bytes)
	W0310 21:15:37.875360   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.875360   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452.pem (1338 bytes)
	W0310 21:15:37.875360   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.875360   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588.pem (1338 bytes)
	W0310 21:15:37.876415   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.876415   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944.pem (1338 bytes)
	W0310 21:15:37.876415   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.877343   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040.pem (1338 bytes)
	W0310 21:15:37.877343   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.877343   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172.pem (1338 bytes)
	W0310 21:15:37.877343   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.877343   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372.pem (1338 bytes)
	W0310 21:15:37.878336   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.878336   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396.pem (1338 bytes)
	W0310 21:15:37.878336   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.879345   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700.pem (1338 bytes)
	W0310 21:15:37.879345   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.879345   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736.pem (1338 bytes)
	W0310 21:15:37.880344   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.880344   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368.pem (1338 bytes)
	W0310 21:15:37.880344   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.880344   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492.pem (1338 bytes)
	W0310 21:15:37.881346   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.881346   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496.pem (1338 bytes)
	W0310 21:15:37.881346   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.881346   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552.pem (1338 bytes)
	W0310 21:15:37.882545   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.882545   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692.pem (1338 bytes)
	W0310 21:15:37.882545   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.883361   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856.pem (1338 bytes)
	W0310 21:15:37.883361   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.883361   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024.pem (1338 bytes)
	W0310 21:15:37.883361   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.883361   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160.pem (1338 bytes)
	W0310 21:15:37.883361   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.883361   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432.pem (1338 bytes)
	W0310 21:15:37.883361   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.885612   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440.pem (1338 bytes)
	W0310 21:15:37.885896   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.885896   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452.pem (1338 bytes)
	W0310 21:15:37.885896   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.885896   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800.pem (1338 bytes)
	W0310 21:15:37.885896   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.885896   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464.pem (1338 bytes)
	W0310 21:15:37.885896   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.885896   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748.pem (1338 bytes)
	W0310 21:15:37.885896   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.885896   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088.pem (1338 bytes)
	W0310 21:15:37.885896   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.885896   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520.pem (1338 bytes)
	W0310 21:15:37.885896   12868 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520_empty.pem, impossibly tiny 0 bytes
	I0310 21:15:37.885896   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca-key.pem (1679 bytes)
	I0310 21:15:37.885896   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca.pem (1078 bytes)
	I0310 21:15:37.885896   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\cert.pem (1123 bytes)
	I0310 21:15:37.885896   12868 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\key.pem (1679 bytes)
	I0310 21:15:37.900564   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\auto-20210310211113-6496\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0310 21:15:38.280511   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\auto-20210310211113-6496\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0310 21:15:38.731178   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\auto-20210310211113-6496\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0310 21:15:39.143412   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\auto-20210310211113-6496\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0310 21:15:39.629682   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0310 21:15:39.989589   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0310 21:15:40.271406   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0310 21:15:40.835759   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0310 21:15:41.264043   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\10992.pem --> /usr/share/ca-certificates/10992.pem (1338 bytes)
	I0310 21:15:41.686561   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9520.pem --> /usr/share/ca-certificates/9520.pem (1338 bytes)
	I0310 21:15:41.889265   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7440.pem --> /usr/share/ca-certificates/7440.pem (1338 bytes)
	I0310 21:15:42.071138   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5700.pem --> /usr/share/ca-certificates/5700.pem (1338 bytes)
	I0310 21:15:42.332370   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6496.pem --> /usr/share/ca-certificates/6496.pem (1338 bytes)
	I0310 21:15:42.615771   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1728.pem --> /usr/share/ca-certificates/1728.pem (1338 bytes)
	I0310 21:15:43.025848   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6856.pem --> /usr/share/ca-certificates/6856.pem (1338 bytes)
	I0310 21:15:43.417445   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7432.pem --> /usr/share/ca-certificates/7432.pem (1338 bytes)
	I0310 21:15:43.772356   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1140.pem --> /usr/share/ca-certificates/1140.pem (1338 bytes)
	I0310 21:15:44.111153   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5396.pem --> /usr/share/ca-certificates/5396.pem (1338 bytes)
	I0310 21:15:44.486387   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5372.pem --> /usr/share/ca-certificates/5372.pem (1338 bytes)
	I0310 21:15:44.830475   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7024.pem --> /usr/share/ca-certificates/7024.pem (1338 bytes)
	I0310 21:15:45.247980   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\800.pem --> /usr/share/ca-certificates/800.pem (1338 bytes)
	I0310 21:15:45.653731   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6368.pem --> /usr/share/ca-certificates/6368.pem (1338 bytes)
	I0310 21:15:45.927564   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4588.pem --> /usr/share/ca-certificates/4588.pem (1338 bytes)
	I0310 21:15:46.169177   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\12056.pem --> /usr/share/ca-certificates/12056.pem (1338 bytes)
	I0310 21:15:46.655581   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5172.pem --> /usr/share/ca-certificates/5172.pem (1338 bytes)
	I0310 21:15:47.042874   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1984.pem --> /usr/share/ca-certificates/1984.pem (1338 bytes)
	I0310 21:15:47.399605   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4944.pem --> /usr/share/ca-certificates/4944.pem (1338 bytes)
	I0310 21:15:47.963175   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5736.pem --> /usr/share/ca-certificates/5736.pem (1338 bytes)
	I0310 21:15:48.507726   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\352.pem --> /usr/share/ca-certificates/352.pem (1338 bytes)
	I0310 21:15:48.794210   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6492.pem --> /usr/share/ca-certificates/6492.pem (1338 bytes)
	I0310 21:15:49.201390   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4052.pem --> /usr/share/ca-certificates/4052.pem (1338 bytes)
	I0310 21:15:49.885181   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	I0310 21:15:50.256293   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6692.pem --> /usr/share/ca-certificates/6692.pem (1338 bytes)
	I0310 21:15:50.478174   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9088.pem --> /usr/share/ca-certificates/9088.pem (1338 bytes)
	I0310 21:15:50.825356   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6552.pem --> /usr/share/ca-certificates/6552.pem (1338 bytes)
	I0310 21:15:51.218178   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3516.pem --> /usr/share/ca-certificates/3516.pem (1338 bytes)
	I0310 21:15:51.610375   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8464.pem --> /usr/share/ca-certificates/8464.pem (1338 bytes)
	I0310 21:15:51.882776   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4452.pem --> /usr/share/ca-certificates/4452.pem (1338 bytes)
	I0310 21:15:52.213385   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0310 21:15:52.503481   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8748.pem --> /usr/share/ca-certificates/8748.pem (1338 bytes)
	I0310 21:15:52.710155   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1476.pem --> /usr/share/ca-certificates/1476.pem (1338 bytes)
	I0310 21:15:53.102270   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5040.pem --> /usr/share/ca-certificates/5040.pem (1338 bytes)
	I0310 21:15:53.463124   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3056.pem --> /usr/share/ca-certificates/3056.pem (1338 bytes)
	I0310 21:15:53.914764   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\232.pem --> /usr/share/ca-certificates/232.pem (1338 bytes)
	I0310 21:15:54.463919   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3920.pem --> /usr/share/ca-certificates/3920.pem (1338 bytes)
	I0310 21:15:54.984202   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1156.pem --> /usr/share/ca-certificates/1156.pem (1338 bytes)
	I0310 21:15:55.242354   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7160.pem --> /usr/share/ca-certificates/7160.pem (1338 bytes)
	I0310 21:15:55.711358   12868 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\2512.pem --> /usr/share/ca-certificates/2512.pem (1338 bytes)
	I0310 21:15:55.964872   12868 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0310 21:15:56.282980   12868 ssh_runner.go:149] Run: openssl version
	I0310 21:15:56.401338   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7452.pem && ln -fs /usr/share/ca-certificates/7452.pem /etc/ssl/certs/7452.pem"
	I0310 21:15:56.537980   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7452.pem
	I0310 21:15:56.601352   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 20 00:41 /usr/share/ca-certificates/7452.pem
	I0310 21:15:56.612154   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7452.pem
	I0310 21:15:56.660099   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7452.pem /etc/ssl/certs/51391683.0"
	I0310 21:15:56.727939   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6692.pem && ln -fs /usr/share/ca-certificates/6692.pem /etc/ssl/certs/6692.pem"
	I0310 21:15:56.809863   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6692.pem
	I0310 21:15:56.842473   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 20:42 /usr/share/ca-certificates/6692.pem
	I0310 21:15:56.854197   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6692.pem
	I0310 21:15:56.897992   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6692.pem /etc/ssl/certs/51391683.0"
	I0310 21:15:56.978439   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9088.pem && ln -fs /usr/share/ca-certificates/9088.pem /etc/ssl/certs/9088.pem"
	I0310 21:15:57.068418   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9088.pem
	I0310 21:15:57.102824   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 00:22 /usr/share/ca-certificates/9088.pem
	I0310 21:15:57.113244   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9088.pem
	I0310 21:15:57.162821   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9088.pem /etc/ssl/certs/51391683.0"
	I0310 21:15:57.249649   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6492.pem && ln -fs /usr/share/ca-certificates/6492.pem /etc/ssl/certs/6492.pem"
	I0310 21:15:57.352928   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6492.pem
	I0310 21:15:57.395466   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 01:11 /usr/share/ca-certificates/6492.pem
	I0310 21:15:57.398885   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6492.pem
	I0310 21:15:57.475871   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6492.pem /etc/ssl/certs/51391683.0"
	I0310 21:15:57.547542   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4052.pem && ln -fs /usr/share/ca-certificates/4052.pem /etc/ssl/certs/4052.pem"
	I0310 21:15:57.639881   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4052.pem
	I0310 21:15:57.679881   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 18:40 /usr/share/ca-certificates/4052.pem
	I0310 21:15:57.691280   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4052.pem
	I0310 21:15:57.758314   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4052.pem /etc/ssl/certs/51391683.0"
	I0310 21:15:57.868393   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8464.pem && ln -fs /usr/share/ca-certificates/8464.pem /etc/ssl/certs/8464.pem"
	I0310 21:15:58.050671   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8464.pem
	I0310 21:15:58.095519   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:32 /usr/share/ca-certificates/8464.pem
	I0310 21:15:58.108705   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8464.pem
	I0310 21:15:58.249006   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8464.pem /etc/ssl/certs/51391683.0"
	I0310 21:15:58.370249   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6552.pem && ln -fs /usr/share/ca-certificates/6552.pem /etc/ssl/certs/6552.pem"
	I0310 21:15:58.624317   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6552.pem
	I0310 21:15:58.661939   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 19 22:08 /usr/share/ca-certificates/6552.pem
	I0310 21:15:58.672177   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6552.pem
	I0310 21:15:58.747547   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6552.pem /etc/ssl/certs/51391683.0"
	I0310 21:15:58.841953   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3516.pem && ln -fs /usr/share/ca-certificates/3516.pem /etc/ssl/certs/3516.pem"
	I0310 21:15:58.937733   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3516.pem
	I0310 21:15:58.969380   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 19:10 /usr/share/ca-certificates/3516.pem
	I0310 21:15:58.981272   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3516.pem
	I0310 21:15:59.077810   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3516.pem /etc/ssl/certs/51391683.0"
	I0310 21:15:59.170990   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8748.pem && ln -fs /usr/share/ca-certificates/8748.pem /etc/ssl/certs/8748.pem"
	I0310 21:15:59.254981   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8748.pem
	I0310 21:15:59.296719   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 19:09 /usr/share/ca-certificates/8748.pem
	I0310 21:15:59.300646   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8748.pem
	I0310 21:15:59.363733   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8748.pem /etc/ssl/certs/51391683.0"
	I0310 21:15:59.486767   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1476.pem && ln -fs /usr/share/ca-certificates/1476.pem /etc/ssl/certs/1476.pem"
	I0310 21:15:59.615311   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1476.pem
	I0310 21:15:59.656922   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:50 /usr/share/ca-certificates/1476.pem
	I0310 21:15:59.666476   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1476.pem
	I0310 21:15:59.751878   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1476.pem /etc/ssl/certs/51391683.0"
	I0310 21:15:59.849067   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5040.pem && ln -fs /usr/share/ca-certificates/5040.pem /etc/ssl/certs/5040.pem"
	I0310 21:15:59.973012   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5040.pem
	I0310 21:16:00.096795   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 08:36 /usr/share/ca-certificates/5040.pem
	I0310 21:16:00.101715   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5040.pem
	I0310 21:16:00.204569   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5040.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:00.329777   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3056.pem && ln -fs /usr/share/ca-certificates/3056.pem /etc/ssl/certs/3056.pem"
	I0310 21:16:00.466220   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3056.pem
	I0310 21:16:00.514440   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:11 /usr/share/ca-certificates/3056.pem
	I0310 21:16:00.526417   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3056.pem
	I0310 21:16:00.683959   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3056.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:01.606561   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/232.pem && ln -fs /usr/share/ca-certificates/232.pem /etc/ssl/certs/232.pem"
	I0310 21:16:01.717158   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/232.pem
	I0310 21:16:01.756577   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 28 02:13 /usr/share/ca-certificates/232.pem
	I0310 21:16:01.765836   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/232.pem
	I0310 21:16:01.854433   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/232.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:01.931714   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4452.pem && ln -fs /usr/share/ca-certificates/4452.pem /etc/ssl/certs/4452.pem"
	I0310 21:16:02.114154   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4452.pem
	I0310 21:16:02.162030   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 22 23:48 /usr/share/ca-certificates/4452.pem
	I0310 21:16:02.181721   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4452.pem
	I0310 21:16:02.257413   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4452.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:02.374678   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0310 21:16:02.550805   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0310 21:16:02.579073   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Jan  5 23:33 /usr/share/ca-certificates/minikubeCA.pem
	I0310 21:16:02.598322   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0310 21:16:02.660061   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0310 21:16:02.829156   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7160.pem && ln -fs /usr/share/ca-certificates/7160.pem /etc/ssl/certs/7160.pem"
	I0310 21:16:03.009396   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7160.pem
	I0310 21:16:03.079881   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 12 04:51 /usr/share/ca-certificates/7160.pem
	I0310 21:16:03.091239   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7160.pem
	I0310 21:16:03.215735   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7160.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:03.320952   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2512.pem && ln -fs /usr/share/ca-certificates/2512.pem /etc/ssl/certs/2512.pem"
	I0310 21:16:03.449277   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2512.pem
	I0310 21:16:03.505100   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:32 /usr/share/ca-certificates/2512.pem
	I0310 21:16:03.516372   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2512.pem
	I0310 21:16:03.592560   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2512.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:03.691916   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3920.pem && ln -fs /usr/share/ca-certificates/3920.pem /etc/ssl/certs/3920.pem"
	I0310 21:16:03.766332   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3920.pem
	I0310 21:16:03.823677   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 22:06 /usr/share/ca-certificates/3920.pem
	I0310 21:16:03.834680   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3920.pem
	I0310 21:16:03.942383   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3920.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:04.034117   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1156.pem && ln -fs /usr/share/ca-certificates/1156.pem /etc/ssl/certs/1156.pem"
	I0310 21:16:04.160652   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1156.pem
	I0310 21:16:04.197292   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 00:26 /usr/share/ca-certificates/1156.pem
	I0310 21:16:04.208963   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1156.pem
	I0310 21:16:04.322723   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1156.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:04.495787   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10992.pem && ln -fs /usr/share/ca-certificates/10992.pem /etc/ssl/certs/10992.pem"
	I0310 21:16:04.597070   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/10992.pem
	I0310 21:16:04.638624   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 21:44 /usr/share/ca-certificates/10992.pem
	I0310 21:16:04.648464   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10992.pem
	I0310 21:16:04.709121   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10992.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:04.807822   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9520.pem && ln -fs /usr/share/ca-certificates/9520.pem /etc/ssl/certs/9520.pem"
	I0310 21:16:04.960553   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9520.pem
	I0310 21:16:05.052887   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 14:54 /usr/share/ca-certificates/9520.pem
	I0310 21:16:05.069353   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9520.pem
	I0310 21:16:05.117631   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9520.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:05.257032   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6496.pem && ln -fs /usr/share/ca-certificates/6496.pem /etc/ssl/certs/6496.pem"
	I0310 21:16:05.357085   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6496.pem
	I0310 21:16:05.388289   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 19:16 /usr/share/ca-certificates/6496.pem
	I0310 21:16:05.396377   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6496.pem
	I0310 21:16:05.495980   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6496.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:05.580342   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1728.pem && ln -fs /usr/share/ca-certificates/1728.pem /etc/ssl/certs/1728.pem"
	I0310 21:16:05.694223   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1728.pem
	I0310 21:16:05.829208   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 20:57 /usr/share/ca-certificates/1728.pem
	I0310 21:16:05.842568   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1728.pem
	I0310 21:16:05.912387   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:06.009082   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6856.pem && ln -fs /usr/share/ca-certificates/6856.pem /etc/ssl/certs/6856.pem"
	I0310 21:16:06.231222   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6856.pem
	I0310 21:16:06.279935   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 00:21 /usr/share/ca-certificates/6856.pem
	I0310 21:16:06.295080   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6856.pem
	I0310 21:16:06.416706   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6856.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:06.509256   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7432.pem && ln -fs /usr/share/ca-certificates/7432.pem /etc/ssl/certs/7432.pem"
	I0310 21:16:06.728026   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7432.pem
	I0310 21:16:06.794469   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 17:58 /usr/share/ca-certificates/7432.pem
	I0310 21:16:06.805268   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7432.pem
	I0310 21:16:06.957950   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7432.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:07.133024   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1140.pem && ln -fs /usr/share/ca-certificates/1140.pem /etc/ssl/certs/1140.pem"
	I0310 21:16:07.336315   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1140.pem
	I0310 21:16:07.390010   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 02:25 /usr/share/ca-certificates/1140.pem
	I0310 21:16:07.413693   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1140.pem
	I0310 21:16:07.504557   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1140.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:07.606289   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5396.pem && ln -fs /usr/share/ca-certificates/5396.pem /etc/ssl/certs/5396.pem"
	I0310 21:16:07.805772   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5396.pem
	I0310 21:16:07.859263   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  8 23:38 /usr/share/ca-certificates/5396.pem
	I0310 21:16:07.887804   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5396.pem
	I0310 21:16:07.990591   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5396.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:08.165452   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7440.pem && ln -fs /usr/share/ca-certificates/7440.pem /etc/ssl/certs/7440.pem"
	I0310 21:16:08.428919   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7440.pem
	I0310 21:16:08.463967   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 13 14:39 /usr/share/ca-certificates/7440.pem
	I0310 21:16:08.485728   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7440.pem
	I0310 21:16:08.615919   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7440.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:08.748057   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5700.pem && ln -fs /usr/share/ca-certificates/5700.pem /etc/ssl/certs/5700.pem"
	I0310 21:16:08.863881   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5700.pem
	I0310 21:16:08.909622   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  1 19:58 /usr/share/ca-certificates/5700.pem
	I0310 21:16:08.933499   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5700.pem
	I0310 21:16:09.013439   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5700.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:09.259861   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/800.pem && ln -fs /usr/share/ca-certificates/800.pem /etc/ssl/certs/800.pem"
	I0310 21:16:09.396061   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/800.pem
	I0310 21:16:09.478506   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 24 01:48 /usr/share/ca-certificates/800.pem
	I0310 21:16:09.505179   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/800.pem
	I0310 21:16:09.701881   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/800.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:09.845588   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6368.pem && ln -fs /usr/share/ca-certificates/6368.pem /etc/ssl/certs/6368.pem"
	I0310 21:16:10.029990   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6368.pem
	I0310 21:16:10.063892   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 19:11 /usr/share/ca-certificates/6368.pem
	I0310 21:16:10.068474   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6368.pem
	I0310 21:16:10.144828   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6368.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:10.246206   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4588.pem && ln -fs /usr/share/ca-certificates/4588.pem /etc/ssl/certs/4588.pem"
	I0310 21:16:10.430189   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4588.pem
	I0310 21:16:10.529281   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  3 21:41 /usr/share/ca-certificates/4588.pem
	I0310 21:16:10.549459   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4588.pem
	I0310 21:16:10.786776   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4588.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:10.945484   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12056.pem && ln -fs /usr/share/ca-certificates/12056.pem /etc/ssl/certs/12056.pem"
	I0310 21:16:11.189634   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/12056.pem
	I0310 21:16:11.227628   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  6 07:21 /usr/share/ca-certificates/12056.pem
	I0310 21:16:11.238518   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12056.pem
	I0310 21:16:11.313046   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12056.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:11.407698   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5172.pem && ln -fs /usr/share/ca-certificates/5172.pem /etc/ssl/certs/5172.pem"
	I0310 21:16:11.504195   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5172.pem
	I0310 21:16:11.576251   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 26 21:25 /usr/share/ca-certificates/5172.pem
	I0310 21:16:11.589657   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5172.pem
	I0310 21:16:11.676832   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5172.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:11.751309   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5372.pem && ln -fs /usr/share/ca-certificates/5372.pem /etc/ssl/certs/5372.pem"
	I0310 21:16:11.841686   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5372.pem
	I0310 21:16:11.882570   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 23 00:40 /usr/share/ca-certificates/5372.pem
	I0310 21:16:11.899935   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5372.pem
	I0310 21:16:11.966683   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5372.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:12.138439   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7024.pem && ln -fs /usr/share/ca-certificates/7024.pem /etc/ssl/certs/7024.pem"
	I0310 21:16:12.265436   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7024.pem
	I0310 21:16:12.315270   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 23:11 /usr/share/ca-certificates/7024.pem
	I0310 21:16:12.340339   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7024.pem
	I0310 21:16:12.387481   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7024.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:12.521006   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5736.pem && ln -fs /usr/share/ca-certificates/5736.pem /etc/ssl/certs/5736.pem"
	I0310 21:16:12.636982   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5736.pem
	I0310 21:16:12.708270   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 25 23:18 /usr/share/ca-certificates/5736.pem
	I0310 21:16:12.719757   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5736.pem
	I0310 21:16:12.872120   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5736.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:12.992830   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/352.pem && ln -fs /usr/share/ca-certificates/352.pem /etc/ssl/certs/352.pem"
	I0310 21:16:13.238543   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/352.pem
	I0310 21:16:13.271069   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 12 14:51 /usr/share/ca-certificates/352.pem
	I0310 21:16:13.283324   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/352.pem
	I0310 21:16:13.428594   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/352.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:13.558468   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1984.pem && ln -fs /usr/share/ca-certificates/1984.pem /etc/ssl/certs/1984.pem"
	I0310 21:16:13.713262   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1984.pem
	I0310 21:16:13.829900   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 21:55 /usr/share/ca-certificates/1984.pem
	I0310 21:16:13.851374   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1984.pem
	I0310 21:16:13.898903   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1984.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:14.112173   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4944.pem && ln -fs /usr/share/ca-certificates/4944.pem /etc/ssl/certs/4944.pem"
	I0310 21:16:14.316204   12868 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4944.pem
	I0310 21:16:14.388005   12868 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  9 23:40 /usr/share/ca-certificates/4944.pem
	I0310 21:16:14.411847   12868 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4944.pem
	I0310 21:16:14.548160   12868 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4944.pem /etc/ssl/certs/51391683.0"
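The loop above repeats the same three steps for every extra CA certificate minikube provisions: install the `.pem`, compute its OpenSSL subject hash, and symlink it as `<hash>.0` so libssl's hashed-directory lookup can find it. (All the 1338-byte test certs here share subject hash `51391683`, which is why the same `51391683.0` link keeps being re-pointed.) Below is a minimal, self-contained sketch of that hash-and-symlink step — it uses a throwaway self-signed certificate in a temp directory rather than the real `/usr/share/ca-certificates` and `/etc/ssl/certs` paths, so none of the filenames match the log.

```shell
# Self-contained sketch of the per-certificate step in the log above:
# compute the OpenSSL subject hash and create the "<hash>.0" symlink
# used for CA lookup. A throwaway self-signed cert in a temp dir
# stands in for the real /usr/share/ca-certificates/*.pem files.
set -e
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=example-ca" \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$tmp/cert.pem")
# Same idempotent guard as in the log: only link if not already linked.
test -L "$tmp/$hash.0" || ln -fs "$tmp/cert.pem" "$tmp/$hash.0"
ls -la "$tmp/$hash.0"
```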
	I0310 21:16:14.660496   12868 kubeadm.go:385] StartCluster: {Name:auto-20210310211113-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:auto-20210310211113-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDoma
in:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.5 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 21:16:14.671275   12868 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0310 21:16:15.910563   12868 ssh_runner.go:189] Completed: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}: (1.2392891s)
	I0310 21:16:15.922207   12868 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0310 21:16:16.197056   12868 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0310 21:16:16.325636   12868 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	I0310 21:16:16.338343   12868 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0310 21:16:16.472109   12868 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
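The status-2 failure above is the expected path on a fresh node: minikube probes for leftover kubeconfig files with a single `ls`, and when the probe exits non-zero it skips stale-config cleanup and proceeds straight to `kubeadm init`. A toy illustration of that exit-code probe, using hypothetical paths in a temp directory:

```shell
# Toy version of the stale-config probe: `ls` exits non-zero when any
# listed file is missing, which is treated as "nothing to clean up".
tmp=$(mktemp -d)
if ls "$tmp/admin.conf" "$tmp/kubelet.conf" >/dev/null 2>&1; then
  result="stale config present"
else
  result="fresh install, skip cleanup"
fi
echo "$result"
```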
	I0310 21:16:16.472627   12868 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0310 21:16:39.707897   12868 out.go:150]   - Generating certificates and keys ...
	I0310 21:17:11.846711   12868 out.go:150]   - Booting up control plane ...
	W0310 21:21:15.099081   12868 out.go:191] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [auto-20210310211113-6496 localhost] and IPs [172.17.0.5 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [auto-20210310211113-6496 localhost] and IPs [172.17.0.5 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
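The triage pipeline kubeadm suggests above (`docker ps -a | grep kube | grep -v pause`) lists every Kubernetes container, including exited ones, while filtering out the pause sandboxes whose logs are never interesting. The sketch below runs that same pipeline against mocked `docker ps -a` output, since no Docker daemon is assumed here; on the failing node you would pipe the real command instead.

```shell
# Mocked `docker ps -a` output standing in for a live daemon.
mock_ps() {
cat <<'EOF'
abc123  k8s_kube-apiserver_kube-apiserver-minikube  Exited (1) 2 minutes ago
def456  k8s_POD_kube-apiserver-minikube_pause       Up 5 minutes
ghi789  k8s_etcd_etcd-minikube                      Up 5 minutes
EOF
}
# The suggested triage pipeline: keep kube containers, drop sandboxes.
out=$(mock_ps | grep kube | grep -v pause)
echo "$out"
```

In the real failure above, the next step would be `docker logs <container-id>` on whichever entry shows an `Exited` status.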
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [auto-20210310211113-6496 localhost] and IPs [172.17.0.5 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [auto-20210310211113-6496 localhost] and IPs [172.17.0.5 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0310 21:21:15.100365   12868 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0310 21:22:24.222144   12868 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force": (1m9.1217604s)
	I0310 21:22:24.242086   12868 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0310 21:22:24.404126   12868 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0310 21:22:24.956986   12868 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	I0310 21:22:24.979600   12868 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0310 21:22:25.061114   12868 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0310 21:22:25.061487   12868 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0310 21:27:04.839019   12868 out.go:150]   - Generating certificates and keys ...
	I0310 21:27:04.845001   12868 out.go:150]   - Booting up control plane ...
	I0310 21:27:04.848370   12868 kubeadm.go:387] StartCluster complete in 10m50.1895672s
	I0310 21:27:04.872853   12868 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0310 21:27:19.410926   12868 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: (14.537744s)
	I0310 21:27:19.411429   12868 logs.go:255] 1 containers: [cc2004a03eb1]
	I0310 21:27:19.424217   12868 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0310 21:27:34.708337   12868 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: (15.2841878s)
	I0310 21:27:34.708337   12868 logs.go:255] 1 containers: [e2b3a62f4f6c]
	I0310 21:27:34.709872   12868 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0310 21:27:44.956394   12868 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: (10.2465659s)
	I0310 21:27:44.957265   12868 logs.go:255] 0 containers: []
	W0310 21:27:44.957265   12868 logs.go:257] No container was found matching "coredns"
	I0310 21:27:44.974057   12868 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0310 21:27:51.719618   12868 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: (6.7455891s)
	I0310 21:27:51.719618   12868 logs.go:255] 1 containers: [94ad33b945b2]
	I0310 21:27:51.728832   12868 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0310 21:28:03.040596   12868 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: (11.3118102s)
	I0310 21:28:03.043969   12868 logs.go:255] 0 containers: []
	W0310 21:28:03.043969   12868 logs.go:257] No container was found matching "kube-proxy"
	I0310 21:28:03.054912   12868 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0310 21:28:07.820817   12868 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}: (4.7656492s)
	I0310 21:28:07.820817   12868 logs.go:255] 0 containers: []
	W0310 21:28:07.820817   12868 logs.go:257] No container was found matching "kubernetes-dashboard"
	I0310 21:28:07.825015   12868 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0310 21:28:13.265913   12868 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: (5.4409192s)
	I0310 21:28:13.265913   12868 logs.go:255] 0 containers: []
	W0310 21:28:13.265913   12868 logs.go:257] No container was found matching "storage-provisioner"
	I0310 21:28:13.278141   12868 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0310 21:28:18.047812   12868 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: (4.7696888s)
	I0310 21:28:18.047812   12868 logs.go:255] 1 containers: [c163207f6927]
	I0310 21:28:18.047812   12868 logs.go:122] Gathering logs for describe nodes ...
	I0310 21:28:18.047812   12868 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0310 21:28:53.531498   12868 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (35.4838184s)
	I0310 21:28:53.537728   12868 logs.go:122] Gathering logs for kube-apiserver [cc2004a03eb1] ...
	I0310 21:28:53.538061   12868 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 cc2004a03eb1"
	I0310 21:29:01.094450   12868 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 cc2004a03eb1": (7.5560726s)
	I0310 21:29:01.122510   12868 logs.go:122] Gathering logs for Docker ...
	I0310 21:29:01.122510   12868 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0310 21:29:02.121344   12868 logs.go:122] Gathering logs for container status ...
	I0310 21:29:02.121344   12868 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0310 21:29:06.252243   12868 ssh_runner.go:189] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (4.1309132s)
	I0310 21:29:06.261201   12868 logs.go:122] Gathering logs for kubelet ...
	I0310 21:29:06.261201   12868 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0310 21:29:09.148516   12868 ssh_runner.go:189] Completed: /bin/bash -c "sudo journalctl -u kubelet -n 400": (2.8873241s)
	I0310 21:29:09.208030   12868 logs.go:122] Gathering logs for dmesg ...
	I0310 21:29:09.208030   12868 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0310 21:29:11.222431   12868 ssh_runner.go:189] Completed: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400": (2.0144085s)
	I0310 21:29:11.225802   12868 logs.go:122] Gathering logs for etcd [e2b3a62f4f6c] ...
	I0310 21:29:11.225802   12868 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 e2b3a62f4f6c"
	I0310 21:29:20.879403   12868 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 e2b3a62f4f6c": (9.6536333s)
	I0310 21:29:20.923136   12868 logs.go:122] Gathering logs for kube-scheduler [94ad33b945b2] ...
	I0310 21:29:20.923136   12868 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 94ad33b945b2"
	I0310 21:29:33.613387   12868 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 94ad33b945b2": (12.6902925s)
	I0310 21:29:33.637954   12868 logs.go:122] Gathering logs for kube-controller-manager [c163207f6927] ...
	I0310 21:29:33.637954   12868 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 c163207f6927"
	I0310 21:29:44.535102   12868 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 c163207f6927": (10.8971831s)
	W0310 21:29:44.555119   12868 out.go:312] Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0310 21:29:44.555950   12868 out.go:191] * 
	* 
	W0310 21:29:44.556695   12868 out.go:191] X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0310 21:29:44.557400   12868 out.go:191] * 
	* 
	W0310 21:29:44.557400   12868 out.go:191] * minikube is exiting due to an error. If the above message is not useful, open an issue:
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	W0310 21:29:44.558018   12868 out.go:191]   - https://github.com/kubernetes/minikube/issues/new/choose
	  - https://github.com/kubernetes/minikube/issues/new/choose
	I0310 21:29:44.564027   12868 out.go:129] 
	W0310 21:29:44.564852   12868 out.go:191] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0310 21:29:44.565473   12868 out.go:191] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0310 21:29:44.565878   12868 out.go:191] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0310 21:29:44.568652   12868 out.go:129] 

                                                
                                                
** /stderr **
net_test.go:82: failed start: exit status 109
--- FAIL: TestNetworkPlugins/group/auto/Start (1111.57s)
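For reference, the suggestion emitted in the log above (`--extra-config=kubelet.cgroup-driver=systemd`) could be retried roughly as follows. This is an illustrative sketch only: the profile name and flag set are placeholders modeled on the failing invocation, not taken verbatim from this run.

```shell
# Sketch of re-running the failed start with the kubelet cgroup driver
# aligned to Docker's "systemd" recommendation from the preflight warning.
# Profile name, memory, and driver values here are illustrative placeholders.
cmd="out/minikube-windows-amd64.exe start -p auto-20210310211211-6496 \
  --memory=1800 --wait=true --driver=docker \
  --extra-config=kubelet.cgroup-driver=systemd"

# Print the command rather than executing it, so the full flag set is visible.
echo "$cmd"
```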

                                                
                                    
TestNetworkPlugins/group/false/Start (1122.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-20210310211211-6496 --memory=1800 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:80: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p false-20210310211211-6496 --memory=1800 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker: exit status 109 (18m42.0716825s)

                                                
                                                
-- stdout --
	* [false-20210310211211-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - MINIKUBE_LOCATION=10722
	* Using the docker driver based on user configuration
	
	
	* Starting control plane node false-20210310211211-6496 in cluster false-20210310211211-6496
	* Creating docker container (CPUs=2, Memory=1800MB) ...
	* Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0310 21:12:12.168049   22316 out.go:239] Setting OutFile to fd 1780 ...
	I0310 21:12:12.169051   22316 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 21:12:12.170043   22316 out.go:252] Setting ErrFile to fd 1756...
	I0310 21:12:12.170043   22316 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 21:12:12.186067   22316 out.go:246] Setting JSON to false
	I0310 21:12:12.189054   22316 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":36198,"bootTime":1615374534,"procs":118,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0310 21:12:12.189054   22316 start.go:116] gopshost.Virtualization returned error: not implemented yet
	I0310 21:12:12.202100   22316 out.go:129] * [false-20210310211211-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0310 21:12:12.207608   22316 out.go:129]   - MINIKUBE_LOCATION=10722
	I0310 21:12:12.207608   22316 driver.go:323] Setting default libvirt URI to qemu:///system
	I0310 21:12:12.823604   22316 docker.go:119] docker version: linux-20.10.2
	I0310 21:12:12.837463   22316 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 21:12:13.847908   22316 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0104461s)
	I0310 21:12:13.848997   22316 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:73 SystemTime:2021-03-10 21:12:13.4086921 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://ind
ex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[]
ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 21:12:13.859444   22316 out.go:129] * Using the docker driver based on user configuration
	I0310 21:12:13.860040   22316 start.go:276] selected driver: docker
	I0310 21:12:13.860040   22316 start.go:718] validating driver "docker" against <nil>
	I0310 21:12:13.860040   22316 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0310 21:12:14.958856   22316 out.go:129] 
	W0310 21:12:14.958856   22316 out.go:191] X Docker Desktop only has 20001MiB available, you may encounter application deployment failures.
	X Docker Desktop only has 20001MiB available, you may encounter application deployment failures.
	W0310 21:12:14.960259   22316 out.go:191] * Suggestion: 
	
	    1. Open the "Docker Desktop" menu by clicking the Docker icon in the system tray
	    2. Click "Settings"
	    3. Click "Resources"
	    4. Increase "Memory" slider bar to 2.25 GB or higher
	    5. Click "Apply & Restart"
	* Suggestion: 
	
	    1. Open the "Docker Desktop" menu by clicking the Docker icon in the system tray
	    2. Click "Settings"
	    3. Click "Resources"
	    4. Increase "Memory" slider bar to 2.25 GB or higher
	    5. Click "Apply & Restart"
	W0310 21:12:14.960855   22316 out.go:191] * Documentation: https://docs.docker.com/docker-for-windows/#resources
	* Documentation: https://docs.docker.com/docker-for-windows/#resources
	I0310 21:12:14.969901   22316 out.go:129] 
	I0310 21:12:14.983849   22316 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 21:12:16.091164   22316 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.1059288s)
	I0310 21:12:16.091584   22316 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:97 OomKillDisable:true NGoroutines:84 SystemTime:2021-03-10 21:12:15.594244 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[]
ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 21:12:16.092219   22316 start_flags.go:253] no existing cluster config was found, will generate one from the flags 
	I0310 21:12:16.092783   22316 start_flags.go:717] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0310 21:12:16.092783   22316 cni.go:74] Creating CNI manager for "false"
	I0310 21:12:16.092783   22316 start_flags.go:398] config:
	{Name:false-20210310211211-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:false-20210310211211-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: Networ
kPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 21:12:16.096235   22316 out.go:129] * Starting control plane node false-20210310211211-6496 in cluster false-20210310211211-6496
	I0310 21:12:16.803587   22316 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	I0310 21:12:16.803587   22316 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	I0310 21:12:16.805499   22316 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	I0310 21:12:16.806141   22316 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	I0310 21:12:16.806366   22316 cache.go:54] Caching tarball of preloaded images
	I0310 21:12:16.806683   22316 preload.go:131] Found C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0310 21:12:16.806683   22316 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on docker
	I0310 21:12:16.807176   22316 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\false-20210310211211-6496\config.json ...
	I0310 21:12:16.807823   22316 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\false-20210310211211-6496\config.json: {Name:mk3c351e48e390b556f67f601575100dccb787cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:12:16.828882   22316 cache.go:185] Successfully downloaded all kic artifacts
	I0310 21:12:16.832148   22316 start.go:313] acquiring machines lock for false-20210310211211-6496: {Name:mk882cfa8d9238f65b29053b9e665c3ec6b48f76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:12:16.833808   22316 start.go:317] acquired machines lock for "false-20210310211211-6496" in 1.3636ms
	I0310 21:12:16.833956   22316 start.go:89] Provisioning new machine with config: &{Name:false-20210310211211-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:false-20210310211211-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
	I0310 21:12:16.834394   22316 start.go:126] createHost starting for "" (driver="docker")
	I0310 21:12:16.837716   22316 out.go:150] * Creating docker container (CPUs=2, Memory=1800MB) ...
	I0310 21:12:16.838860   22316 start.go:160] libmachine.API.Create for "false-20210310211211-6496" (driver="docker")
	I0310 21:12:16.839277   22316 client.go:168] LocalClient.Create starting
	I0310 21:12:16.840002   22316 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\ca.pem
	I0310 21:12:16.840002   22316 main.go:121] libmachine: Decoding PEM data...
	I0310 21:12:16.840002   22316 main.go:121] libmachine: Parsing certificate...
	I0310 21:12:16.840730   22316 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\cert.pem
	I0310 21:12:16.840730   22316 main.go:121] libmachine: Decoding PEM data...
	I0310 21:12:16.840730   22316 main.go:121] libmachine: Parsing certificate...
	I0310 21:12:16.858252   22316 cli_runner.go:115] Run: docker network inspect false-20210310211211-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0310 21:12:17.504248   22316 cli_runner.go:162] docker network inspect false-20210310211211-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0310 21:12:17.522363   22316 network_create.go:240] running [docker network inspect false-20210310211211-6496] to gather additional debugging logs...
	I0310 21:12:17.522363   22316 cli_runner.go:115] Run: docker network inspect false-20210310211211-6496
	W0310 21:12:18.215401   22316 cli_runner.go:162] docker network inspect false-20210310211211-6496 returned with exit code 1
	I0310 21:12:18.215961   22316 network_create.go:243] error running [docker network inspect false-20210310211211-6496]: docker network inspect false-20210310211211-6496: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: false-20210310211211-6496
	I0310 21:12:18.215961   22316 network_create.go:245] output of [docker network inspect false-20210310211211-6496]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: false-20210310211211-6496
	
	** /stderr **
	I0310 21:12:18.225387   22316 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0310 21:12:18.886497   22316 network.go:193] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0310 21:12:18.886676   22316 network_create.go:91] attempt to create network 192.168.49.0/24 with subnet: false-20210310211211-6496 and gateway 192.168.49.1 and MTU of 1500 ...
	I0310 21:12:19.373071   22316 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20210310211211-6496
	W0310 21:12:20.987006   22316 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20210310211211-6496 returned with exit code 1
	I0310 21:12:20.987006   22316 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20210310211211-6496: (1.6139369s)
	W0310 21:12:20.988648   22316 out.go:191] ! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts
	! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts
	I0310 21:12:21.030124   22316 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0310 21:12:21.800001   22316 cli_runner.go:115] Run: docker volume create false-20210310211211-6496 --label name.minikube.sigs.k8s.io=false-20210310211211-6496 --label created_by.minikube.sigs.k8s.io=true
	I0310 21:12:22.502275   22316 oci.go:102] Successfully created a docker volume false-20210310211211-6496
	I0310 21:12:22.509031   22316 cli_runner.go:115] Run: docker run --rm --name false-20210310211211-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20210310211211-6496 --entrypoint /usr/bin/test -v false-20210310211211-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib
	I0310 21:12:27.939922   22316 cli_runner.go:168] Completed: docker run --rm --name false-20210310211211-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20210310211211-6496 --entrypoint /usr/bin/test -v false-20210310211211-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib: (5.4308987s)
	I0310 21:12:27.939922   22316 oci.go:106] Successfully prepared a docker volume false-20210310211211-6496
	I0310 21:12:27.941083   22316 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	I0310 21:12:27.941771   22316 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	I0310 21:12:27.942445   22316 kic.go:175] Starting extracting preloaded images to volume ...
	I0310 21:12:27.951736   22316 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-20210310211211-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir
	I0310 21:12:27.952737   22316 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	W0310 21:12:28.642073   22316 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-20210310211211-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I0310 21:12:28.642909   22316 kic.go:182] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-20210310211211-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: System.Exception: The notification platform is unavailable.
	
	The notification platform is unavailable.
	
	   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)
	   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.WPF\PromptShareDirectory.cs:line 53
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__8.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 95
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 55
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.HttpApi\Controllers\FilesharingController.cs:line 21
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()
	CreateToastNotifier
	Windows.UI, Version=255.255.255.255, Culture=neutral, PublicKeyToken=null, ContentType=WindowsRuntime
	Windows.UI.Notifications.ToastNotificationManager
	Windows.UI.Notifications.ToastNotifier CreateToastNotifier(System.String)
	RestrictedDescription: The notification platform is unavailable.
	See 'docker run --help'.
	I0310 21:12:29.011889   22316 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0591535s)
	I0310 21:12:29.011889   22316 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:96 OomKillDisable:true NGoroutines:77 SystemTime:2021-03-10 21:12:28.5591688 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 21:12:29.022435   22316 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0310 21:12:30.007081   22316 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname false-20210310211211-6496 --name false-20210310211211-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20210310211211-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=false-20210310211211-6496 --volume false-20210310211211-6496:/var --security-opt apparmor=unconfined --memory=1800mb --memory-swap=1800mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
	I0310 21:12:34.782447   22316 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname false-20210310211211-6496 --name false-20210310211211-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20210310211211-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=false-20210310211211-6496 --volume false-20210310211211-6496:/var --security-opt apparmor=unconfined --memory=1800mb --memory-swap=1800mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e: (4.7747873s)
	I0310 21:12:34.792498   22316 cli_runner.go:115] Run: docker container inspect false-20210310211211-6496 --format={{.State.Running}}
	I0310 21:12:35.485526   22316 cli_runner.go:115] Run: docker container inspect false-20210310211211-6496 --format={{.State.Status}}
	I0310 21:12:36.126901   22316 cli_runner.go:115] Run: docker exec false-20210310211211-6496 stat /var/lib/dpkg/alternatives/iptables
	I0310 21:12:38.378188   22316 cli_runner.go:168] Completed: docker exec false-20210310211211-6496 stat /var/lib/dpkg/alternatives/iptables: (2.2512901s)
	I0310 21:12:38.378492   22316 oci.go:278] the created container "false-20210310211211-6496" has a running status.
	I0310 21:12:38.378492   22316 kic.go:206] Creating ssh key for kic: C:\Users\jenkins\.minikube\machines\false-20210310211211-6496\id_rsa...
	I0310 21:12:38.625289   22316 kic_runner.go:188] docker (temp): C:\Users\jenkins\.minikube\machines\false-20210310211211-6496\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0310 21:12:39.842959   22316 cli_runner.go:115] Run: docker container inspect false-20210310211211-6496 --format={{.State.Status}}
	I0310 21:12:40.568230   22316 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0310 21:12:40.568230   22316 kic_runner.go:115] Args: [docker exec --privileged false-20210310211211-6496 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0310 21:12:42.551198   22316 kic_runner.go:124] Done: [docker exec --privileged false-20210310211211-6496 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.9829709s)
	I0310 21:12:42.563519   22316 kic.go:240] ensuring only current user has permissions to key file located at : C:\Users\jenkins\.minikube\machines\false-20210310211211-6496\id_rsa...
	I0310 21:12:43.412980   22316 cli_runner.go:115] Run: docker container inspect false-20210310211211-6496 --format={{.State.Status}}
	I0310 21:12:44.102425   22316 machine.go:88] provisioning docker machine ...
	I0310 21:12:44.102912   22316 ubuntu.go:169] provisioning hostname "false-20210310211211-6496"
	I0310 21:12:44.115201   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	I0310 21:12:44.767257   22316 main.go:121] libmachine: Using SSH client type: native
	I0310 21:12:44.768085   22316 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55178 <nil> <nil>}
	I0310 21:12:44.768232   22316 main.go:121] libmachine: About to run SSH command:
	sudo hostname false-20210310211211-6496 && echo "false-20210310211211-6496" | sudo tee /etc/hostname
	I0310 21:12:44.780818   22316 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0310 21:12:50.315712   22316 main.go:121] libmachine: SSH cmd err, output: <nil>: false-20210310211211-6496
	
	I0310 21:12:50.336532   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	I0310 21:12:51.020432   22316 main.go:121] libmachine: Using SSH client type: native
	I0310 21:12:51.030067   22316 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55178 <nil> <nil>}
	I0310 21:12:51.030067   22316 main.go:121] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfalse-20210310211211-6496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 false-20210310211211-6496/g' /etc/hosts;
				else 
					echo '127.0.1.1 false-20210310211211-6496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0310 21:12:53.779092   22316 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	I0310 21:12:53.780765   22316 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	I0310 21:12:53.780765   22316 ubuntu.go:177] setting up certificates
	I0310 21:12:53.781192   22316 provision.go:83] configureAuth start
	I0310 21:12:53.805343   22316 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-20210310211211-6496
	I0310 21:12:54.440272   22316 provision.go:137] copyHostCerts
	I0310 21:12:54.440272   22316 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	I0310 21:12:54.440798   22316 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	I0310 21:12:54.441178   22316 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	I0310 21:12:54.445616   22316 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	I0310 21:12:54.445616   22316 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	I0310 21:12:54.446282   22316 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	I0310 21:12:54.449747   22316 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	I0310 21:12:54.449987   22316 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	I0310 21:12:54.450644   22316 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	I0310 21:12:54.455779   22316 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.false-20210310211211-6496 san=[172.17.0.8 127.0.0.1 localhost 127.0.0.1 minikube false-20210310211211-6496]
	I0310 21:12:54.748978   22316 provision.go:165] copyRemoteCerts
	I0310 21:12:54.766034   22316 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0310 21:12:54.774187   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	I0310 21:12:55.399153   22316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55178 SSHKeyPath:C:\Users\jenkins\.minikube\machines\false-20210310211211-6496\id_rsa Username:docker}
	I0310 21:12:56.240137   22316 ssh_runner.go:189] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.4741049s)
	I0310 21:12:56.240137   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0310 21:12:56.735057   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1249 bytes)
	I0310 21:12:57.353881   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0310 21:12:58.233074   22316 provision.go:86] duration metric: configureAuth took 4.4518884s
	I0310 21:12:58.233074   22316 ubuntu.go:193] setting minikube options for container-runtime
	I0310 21:12:58.267262   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	I0310 21:12:59.006727   22316 main.go:121] libmachine: Using SSH client type: native
	I0310 21:12:59.007814   22316 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55178 <nil> <nil>}
	I0310 21:12:59.007814   22316 main.go:121] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0310 21:13:00.406956   22316 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0310 21:13:00.406956   22316 ubuntu.go:71] root file system type: overlay
	I0310 21:13:00.407166   22316 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	I0310 21:13:00.416655   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	I0310 21:13:01.049891   22316 main.go:121] libmachine: Using SSH client type: native
	I0310 21:13:01.050764   22316 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55178 <nil> <nil>}
	I0310 21:13:01.050764   22316 main.go:121] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0310 21:13:02.963365   22316 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0310 21:13:02.971839   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	I0310 21:13:03.615956   22316 main.go:121] libmachine: Using SSH client type: native
	I0310 21:13:03.617620   22316 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55178 <nil> <nil>}
	I0310 21:13:03.617911   22316 main.go:121] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0310 21:13:25.623756   22316 main.go:121] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-01-29 14:31:32.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-03-10 21:13:02.944722000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0310 21:13:25.624097   22316 machine.go:91] provisioned docker machine in 41.5217304s
	I0310 21:13:25.624097   22316 client.go:171] LocalClient.Create took 1m8.7849166s
	I0310 21:13:25.624097   22316 start.go:168] duration metric: libmachine.API.Create for "false-20210310211211-6496" took 1m8.7853333s
	I0310 21:13:25.624097   22316 start.go:267] post-start starting for "false-20210310211211-6496" (driver="docker")
	I0310 21:13:25.624097   22316 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0310 21:13:25.634099   22316 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0310 21:13:25.645133   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	I0310 21:13:26.319049   22316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55178 SSHKeyPath:C:\Users\jenkins\.minikube\machines\false-20210310211211-6496\id_rsa Username:docker}
	I0310 21:13:26.965777   22316 ssh_runner.go:189] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.3316798s)
	I0310 21:13:26.979571   22316 ssh_runner.go:149] Run: cat /etc/os-release
	I0310 21:13:27.041034   22316 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0310 21:13:27.041595   22316 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0310 21:13:27.041595   22316 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0310 21:13:27.041595   22316 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0310 21:13:27.041595   22316 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	I0310 21:13:27.042315   22316 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	I0310 21:13:27.044801   22316 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	I0310 21:13:27.045903   22316 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	I0310 21:13:27.058807   22316 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	I0310 21:13:27.282537   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	I0310 21:13:27.708690   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	I0310 21:13:28.446877   22316 start.go:270] post-start completed in 2.8227839s
	I0310 21:13:28.512700   22316 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-20210310211211-6496
	I0310 21:13:29.190137   22316 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\false-20210310211211-6496\config.json ...
	I0310 21:13:29.242845   22316 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0310 21:13:29.248980   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	I0310 21:13:29.940534   22316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55178 SSHKeyPath:C:\Users\jenkins\.minikube\machines\false-20210310211211-6496\id_rsa Username:docker}
	I0310 21:13:30.418039   22316 ssh_runner.go:189] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.1751962s)
	I0310 21:13:30.418512   22316 start.go:129] duration metric: createHost completed in 1m13.5837487s
	I0310 21:13:30.418512   22316 start.go:80] releasing machines lock for "false-20210310211211-6496", held for 1m13.5848074s
	I0310 21:13:30.429055   22316 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-20210310211211-6496
	I0310 21:13:31.056336   22316 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0310 21:13:31.069740   22316 ssh_runner.go:149] Run: systemctl --version
	I0310 21:13:31.070658   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	I0310 21:13:31.078759   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210310211211-6496
	I0310 21:13:31.725436   22316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55178 SSHKeyPath:C:\Users\jenkins\.minikube\machines\false-20210310211211-6496\id_rsa Username:docker}
	I0310 21:13:31.782793   22316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55178 SSHKeyPath:C:\Users\jenkins\.minikube\machines\false-20210310211211-6496\id_rsa Username:docker}
	I0310 21:13:32.747305   22316 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.6907746s)
	I0310 21:13:32.752679   22316 ssh_runner.go:189] Completed: systemctl --version: (1.6775676s)
	I0310 21:13:32.761954   22316 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0310 21:13:33.006142   22316 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 21:13:33.313909   22316 cruntime.go:206] skipping containerd shutdown because we are bound to it
	I0310 21:13:33.324754   22316 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0310 21:13:33.500636   22316 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0310 21:13:33.933231   22316 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 21:13:34.261595   22316 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0310 21:13:36.141117   22316 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.879256s)
	I0310 21:13:36.146867   22316 ssh_runner.go:149] Run: sudo systemctl start docker
	I0310 21:13:36.334513   22316 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0310 21:13:37.715953   22316 ssh_runner.go:189] Completed: docker version --format {{.Server.Version}}: (1.3814424s)
	I0310 21:13:37.720657   22316 out.go:150] * Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	I0310 21:13:37.728327   22316 cli_runner.go:115] Run: docker exec -t false-20210310211211-6496 dig +short host.docker.internal
	I0310 21:13:39.192283   22316 cli_runner.go:168] Completed: docker exec -t false-20210310211211-6496 dig +short host.docker.internal: (1.4637323s)
	I0310 21:13:39.192283   22316 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	I0310 21:13:39.207998   22316 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0310 21:13:39.285432   22316 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0310 21:13:39.488453   22316 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" false-20210310211211-6496
	I0310 21:13:40.124884   22316 localpath.go:92] copying C:\Users\jenkins\.minikube\client.crt -> C:\Users\jenkins\.minikube\profiles\false-20210310211211-6496\client.crt
	I0310 21:13:40.135955   22316 localpath.go:117] copying C:\Users\jenkins\.minikube\client.key -> C:\Users\jenkins\.minikube\profiles\false-20210310211211-6496\client.key
	I0310 21:13:40.135955   22316 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	I0310 21:13:40.135955   22316 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	I0310 21:13:40.149276   22316 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 21:13:40.871022   22316 docker.go:423] Got preloaded images: 
	I0310 21:13:40.871022   22316 docker.go:429] k8s.gcr.io/kube-proxy:v1.20.2 wasn't preloaded
	I0310 21:13:40.887557   22316 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0310 21:13:41.059625   22316 ssh_runner.go:149] Run: which lz4
	I0310 21:13:41.168560   22316 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0310 21:13:41.235262   22316 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0310 21:13:41.235667   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (515083977 bytes)
	I0310 21:14:57.278415   22316 docker.go:388] Took 76.118939 seconds to copy over tarball
	I0310 21:14:57.286692   22316 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0310 21:15:44.654316   22316 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (47.3648692s)
	I0310 21:15:44.654626   22316 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0310 21:15:46.257704   22316 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0310 21:15:46.378564   22316 ssh_runner.go:316] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3125 bytes)
	I0310 21:15:46.520927   22316 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0310 21:15:47.791789   22316 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.2699045s)
	I0310 21:15:47.802048   22316 ssh_runner.go:149] Run: sudo systemctl restart docker
	I0310 21:15:55.360698   22316 ssh_runner.go:189] Completed: sudo systemctl restart docker: (7.5586611s)
	I0310 21:15:55.383317   22316 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 21:15:56.537980   22316 ssh_runner.go:189] Completed: docker images --format {{.Repository}}:{{.Tag}}: (1.1546645s)
	I0310 21:15:56.538447   22316 docker.go:423] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.20.2
	k8s.gcr.io/kube-controller-manager:v1.20.2
	k8s.gcr.io/kube-apiserver:v1.20.2
	k8s.gcr.io/kube-scheduler:v1.20.2
	kubernetesui/dashboard:v2.1.0
	gcr.io/k8s-minikube/storage-provisioner:v4
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0310 21:15:56.538447   22316 cache_images.go:73] Images are preloaded, skipping loading
	I0310 21:15:56.553326   22316 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0310 21:15:58.761711   22316 ssh_runner.go:189] Completed: docker info --format {{.CgroupDriver}}: (2.208097s)
	I0310 21:15:58.761985   22316 cni.go:74] Creating CNI manager for "false"
	I0310 21:15:58.761985   22316 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0310 21:15:58.761985   22316 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.8 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:false-20210310211211-6496 NodeName:false-20210310211211-6496 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.8"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.8 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0310 21:15:58.762366   22316 kubeadm.go:154] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.0.8
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "false-20210310211211-6496"
	  kubeletExtraArgs:
	    node-ip: 172.17.0.8
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.0.8"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	
	I0310 21:15:58.762366   22316 kubeadm.go:919] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=false-20210310211211-6496 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.8
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.2 ClusterName:false-20210310211211-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:}
	I0310 21:15:58.784543   22316 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
	I0310 21:15:58.904798   22316 binaries.go:44] Found k8s binaries, skipping transfer
	I0310 21:15:58.931391   22316 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0310 21:15:59.068579   22316 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (349 bytes)
	I0310 21:15:59.277276   22316 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0310 21:15:59.616684   22316 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1851 bytes)
	I0310 21:15:59.955856   22316 ssh_runner.go:149] Run: grep 172.17.0.8	control-plane.minikube.internal$ /etc/hosts
	I0310 21:15:59.992826   22316 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "172.17.0.8	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0310 21:16:00.335181   22316 certs.go:52] Setting up C:\Users\jenkins\.minikube\profiles\false-20210310211211-6496 for IP: 172.17.0.8
	I0310 21:16:00.336193   22316 certs.go:171] skipping minikubeCA CA generation: C:\Users\jenkins\.minikube\ca.key
	I0310 21:16:00.336666   22316 certs.go:171] skipping proxyClientCA CA generation: C:\Users\jenkins\.minikube\proxy-client-ca.key
	I0310 21:16:00.337991   22316 certs.go:275] skipping minikube-user signed cert generation: C:\Users\jenkins\.minikube\profiles\false-20210310211211-6496\client.key
	I0310 21:16:00.338223   22316 certs.go:279] generating minikube signed cert: C:\Users\jenkins\.minikube\profiles\false-20210310211211-6496\apiserver.key.f7ca08ce
	I0310 21:16:00.338223   22316 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\false-20210310211211-6496\apiserver.crt.f7ca08ce with IP's: [172.17.0.8 10.96.0.1 127.0.0.1 10.0.0.1]
	I0310 21:16:00.892461   22316 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\false-20210310211211-6496\apiserver.crt.f7ca08ce ...
	I0310 21:16:00.892461   22316 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\false-20210310211211-6496\apiserver.crt.f7ca08ce: {Name:mk87ec5c9c987ddc558c075fe77d6cbe0e88e8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:16:00.902464   22316 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\false-20210310211211-6496\apiserver.key.f7ca08ce ...
	I0310 21:16:00.903460   22316 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\false-20210310211211-6496\apiserver.key.f7ca08ce: {Name:mkaa44f89c1da1f289da5b3ee361f5b8ef09c837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:16:00.911473   22316 certs.go:290] copying C:\Users\jenkins\.minikube\profiles\false-20210310211211-6496\apiserver.crt.f7ca08ce -> C:\Users\jenkins\.minikube\profiles\false-20210310211211-6496\apiserver.crt
	I0310 21:16:00.914463   22316 certs.go:294] copying C:\Users\jenkins\.minikube\profiles\false-20210310211211-6496\apiserver.key.f7ca08ce -> C:\Users\jenkins\.minikube\profiles\false-20210310211211-6496\apiserver.key
	I0310 21:16:00.916467   22316 certs.go:279] generating aggregator signed cert: C:\Users\jenkins\.minikube\profiles\false-20210310211211-6496\proxy-client.key
	I0310 21:16:00.916467   22316 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\false-20210310211211-6496\proxy-client.crt with IP's: []
	I0310 21:16:01.131466   22316 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\false-20210310211211-6496\proxy-client.crt ...
	I0310 21:16:01.131466   22316 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\false-20210310211211-6496\proxy-client.crt: {Name:mk1aba3dfb9ee46373495ca9e5bd81dd7632e408 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:16:01.140524   22316 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\false-20210310211211-6496\proxy-client.key ...
	I0310 21:16:01.140524   22316 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\false-20210310211211-6496\proxy-client.key: {Name:mkc0c6ea56cd2bac69d8557e54783de8139aed93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:16:01.149544   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992.pem (1338 bytes)
	W0310 21:16:01.149544   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.149544   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140.pem (1338 bytes)
	W0310 21:16:01.149544   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.150493   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156.pem (1338 bytes)
	W0310 21:16:01.150493   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.150493   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056.pem (1338 bytes)
	W0310 21:16:01.150493   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.150493   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476.pem (1338 bytes)
	W0310 21:16:01.150493   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.150493   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728.pem (1338 bytes)
	W0310 21:16:01.151484   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.151484   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984.pem (1338 bytes)
	W0310 21:16:01.151484   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.151484   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232.pem (1338 bytes)
	W0310 21:16:01.151484   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.152470   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512.pem (1338 bytes)
	W0310 21:16:01.152470   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.152470   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056.pem (1338 bytes)
	W0310 21:16:01.152470   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.152470   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516.pem (1338 bytes)
	W0310 21:16:01.152470   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.152470   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352.pem (1338 bytes)
	W0310 21:16:01.152470   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.152470   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920.pem (1338 bytes)
	W0310 21:16:01.152470   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.152470   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052.pem (1338 bytes)
	W0310 21:16:01.154689   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.154689   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452.pem (1338 bytes)
	W0310 21:16:01.154689   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.154689   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588.pem (1338 bytes)
	W0310 21:16:01.155474   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.155474   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944.pem (1338 bytes)
	W0310 21:16:01.155474   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.155474   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040.pem (1338 bytes)
	W0310 21:16:01.155474   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.155474   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172.pem (1338 bytes)
	W0310 21:16:01.156478   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.156478   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372.pem (1338 bytes)
	W0310 21:16:01.156478   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.156478   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396.pem (1338 bytes)
	W0310 21:16:01.156478   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.157479   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700.pem (1338 bytes)
	W0310 21:16:01.157479   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.157479   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736.pem (1338 bytes)
	W0310 21:16:01.157479   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.157479   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368.pem (1338 bytes)
	W0310 21:16:01.157479   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.158478   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492.pem (1338 bytes)
	W0310 21:16:01.158478   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.158478   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496.pem (1338 bytes)
	W0310 21:16:01.158478   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.158478   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552.pem (1338 bytes)
	W0310 21:16:01.159484   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.159484   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692.pem (1338 bytes)
	W0310 21:16:01.159484   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.159484   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856.pem (1338 bytes)
	W0310 21:16:01.159484   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.160481   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024.pem (1338 bytes)
	W0310 21:16:01.160481   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.160481   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160.pem (1338 bytes)
	W0310 21:16:01.160481   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.160481   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432.pem (1338 bytes)
	W0310 21:16:01.160481   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.161478   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440.pem (1338 bytes)
	W0310 21:16:01.161478   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.161478   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452.pem (1338 bytes)
	W0310 21:16:01.161478   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.161478   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800.pem (1338 bytes)
	W0310 21:16:01.162481   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.162481   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464.pem (1338 bytes)
	W0310 21:16:01.162481   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.162481   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748.pem (1338 bytes)
	W0310 21:16:01.162481   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.162481   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088.pem (1338 bytes)
	W0310 21:16:01.163480   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.163480   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520.pem (1338 bytes)
	W0310 21:16:01.163480   22316 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520_empty.pem, impossibly tiny 0 bytes
	I0310 21:16:01.163480   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca-key.pem (1679 bytes)
	I0310 21:16:01.163480   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca.pem (1078 bytes)
	I0310 21:16:01.164481   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\cert.pem (1123 bytes)
	I0310 21:16:01.164481   22316 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\key.pem (1679 bytes)
	I0310 21:16:01.170472   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\false-20210310211211-6496\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0310 21:16:02.059255   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\false-20210310211211-6496\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0310 21:16:02.495768   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\false-20210310211211-6496\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0310 21:16:03.129487   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\false-20210310211211-6496\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0310 21:16:03.665827   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0310 21:16:04.116710   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0310 21:16:04.453956   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0310 21:16:04.904573   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0310 21:16:05.213723   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5700.pem --> /usr/share/ca-certificates/5700.pem (1338 bytes)
	I0310 21:16:05.558180   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7432.pem --> /usr/share/ca-certificates/7432.pem (1338 bytes)
	I0310 21:16:05.919639   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	I0310 21:16:06.185722   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3056.pem --> /usr/share/ca-certificates/3056.pem (1338 bytes)
	I0310 21:16:06.509047   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\12056.pem --> /usr/share/ca-certificates/12056.pem (1338 bytes)
	I0310 21:16:06.933993   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1728.pem --> /usr/share/ca-certificates/1728.pem (1338 bytes)
	I0310 21:16:07.390010   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9088.pem --> /usr/share/ca-certificates/9088.pem (1338 bytes)
	I0310 21:16:07.711811   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7024.pem --> /usr/share/ca-certificates/7024.pem (1338 bytes)
	I0310 21:16:08.175162   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4588.pem --> /usr/share/ca-certificates/4588.pem (1338 bytes)
	I0310 21:16:08.601497   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5172.pem --> /usr/share/ca-certificates/5172.pem (1338 bytes)
	I0310 21:16:09.316059   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6368.pem --> /usr/share/ca-certificates/6368.pem (1338 bytes)
	I0310 21:16:09.728086   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4452.pem --> /usr/share/ca-certificates/4452.pem (1338 bytes)
	I0310 21:16:10.141482   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7440.pem --> /usr/share/ca-certificates/7440.pem (1338 bytes)
	I0310 21:16:10.563634   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6856.pem --> /usr/share/ca-certificates/6856.pem (1338 bytes)
	I0310 21:16:10.897058   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9520.pem --> /usr/share/ca-certificates/9520.pem (1338 bytes)
	I0310 21:16:11.496042   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1984.pem --> /usr/share/ca-certificates/1984.pem (1338 bytes)
	I0310 21:16:11.804386   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3516.pem --> /usr/share/ca-certificates/3516.pem (1338 bytes)
	I0310 21:16:12.161514   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6552.pem --> /usr/share/ca-certificates/6552.pem (1338 bytes)
	I0310 21:16:12.452767   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4944.pem --> /usr/share/ca-certificates/4944.pem (1338 bytes)
	I0310 21:16:12.683659   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5736.pem --> /usr/share/ca-certificates/5736.pem (1338 bytes)
	I0310 21:16:13.142849   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\800.pem --> /usr/share/ca-certificates/800.pem (1338 bytes)
	I0310 21:16:13.658314   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1476.pem --> /usr/share/ca-certificates/1476.pem (1338 bytes)
	I0310 21:16:14.094822   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1156.pem --> /usr/share/ca-certificates/1156.pem (1338 bytes)
	I0310 21:16:14.840166   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3920.pem --> /usr/share/ca-certificates/3920.pem (1338 bytes)
	I0310 21:16:15.401747   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5040.pem --> /usr/share/ca-certificates/5040.pem (1338 bytes)
	I0310 21:16:15.856183   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8748.pem --> /usr/share/ca-certificates/8748.pem (1338 bytes)
	I0310 21:16:16.386878   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\2512.pem --> /usr/share/ca-certificates/2512.pem (1338 bytes)
	I0310 21:16:16.737719   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6496.pem --> /usr/share/ca-certificates/6496.pem (1338 bytes)
	I0310 21:16:17.201182   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\352.pem --> /usr/share/ca-certificates/352.pem (1338 bytes)
	I0310 21:16:17.603511   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5396.pem --> /usr/share/ca-certificates/5396.pem (1338 bytes)
	I0310 21:16:18.096082   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8464.pem --> /usr/share/ca-certificates/8464.pem (1338 bytes)
	I0310 21:16:18.601729   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\232.pem --> /usr/share/ca-certificates/232.pem (1338 bytes)
	I0310 21:16:18.980582   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4052.pem --> /usr/share/ca-certificates/4052.pem (1338 bytes)
	I0310 21:16:19.167159   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0310 21:16:19.359301   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\10992.pem --> /usr/share/ca-certificates/10992.pem (1338 bytes)
	I0310 21:16:19.678098   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6692.pem --> /usr/share/ca-certificates/6692.pem (1338 bytes)
	I0310 21:16:20.026042   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6492.pem --> /usr/share/ca-certificates/6492.pem (1338 bytes)
	I0310 21:16:20.450526   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7160.pem --> /usr/share/ca-certificates/7160.pem (1338 bytes)
	I0310 21:16:20.881469   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5372.pem --> /usr/share/ca-certificates/5372.pem (1338 bytes)
	I0310 21:16:21.183957   22316 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1140.pem --> /usr/share/ca-certificates/1140.pem (1338 bytes)
	I0310 21:16:21.486504   22316 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0310 21:16:21.857617   22316 ssh_runner.go:149] Run: openssl version
	I0310 21:16:21.904540   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12056.pem && ln -fs /usr/share/ca-certificates/12056.pem /etc/ssl/certs/12056.pem"
	I0310 21:16:22.142283   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/12056.pem
	I0310 21:16:22.330065   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  6 07:21 /usr/share/ca-certificates/12056.pem
	I0310 21:16:22.345400   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12056.pem
	I0310 21:16:22.494853   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12056.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:22.591507   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5700.pem && ln -fs /usr/share/ca-certificates/5700.pem /etc/ssl/certs/5700.pem"
	I0310 21:16:22.704299   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5700.pem
	I0310 21:16:22.744441   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  1 19:58 /usr/share/ca-certificates/5700.pem
	I0310 21:16:22.757582   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5700.pem
	I0310 21:16:22.844296   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5700.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:23.035540   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7432.pem && ln -fs /usr/share/ca-certificates/7432.pem /etc/ssl/certs/7432.pem"
	I0310 21:16:23.148285   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7432.pem
	I0310 21:16:23.191200   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 17:58 /usr/share/ca-certificates/7432.pem
	I0310 21:16:23.214226   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7432.pem
	I0310 21:16:23.261073   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7432.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:23.334336   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7452.pem && ln -fs /usr/share/ca-certificates/7452.pem /etc/ssl/certs/7452.pem"
	I0310 21:16:23.419129   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7452.pem
	I0310 21:16:23.448063   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 20 00:41 /usr/share/ca-certificates/7452.pem
	I0310 21:16:23.467895   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7452.pem
	I0310 21:16:23.543838   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7452.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:23.656585   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3056.pem && ln -fs /usr/share/ca-certificates/3056.pem /etc/ssl/certs/3056.pem"
	I0310 21:16:23.760754   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3056.pem
	I0310 21:16:23.838187   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:11 /usr/share/ca-certificates/3056.pem
	I0310 21:16:23.849606   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3056.pem
	I0310 21:16:23.902334   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3056.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:23.955475   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5172.pem && ln -fs /usr/share/ca-certificates/5172.pem /etc/ssl/certs/5172.pem"
	I0310 21:16:24.021830   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5172.pem
	I0310 21:16:24.068069   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 26 21:25 /usr/share/ca-certificates/5172.pem
	I0310 21:16:24.087598   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5172.pem
	I0310 21:16:24.152434   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5172.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:24.228714   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6368.pem && ln -fs /usr/share/ca-certificates/6368.pem /etc/ssl/certs/6368.pem"
	I0310 21:16:24.289587   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6368.pem
	I0310 21:16:24.350757   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 19:11 /usr/share/ca-certificates/6368.pem
	I0310 21:16:24.359981   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6368.pem
	I0310 21:16:24.428885   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6368.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:24.614146   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1728.pem && ln -fs /usr/share/ca-certificates/1728.pem /etc/ssl/certs/1728.pem"
	I0310 21:16:24.734759   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1728.pem
	I0310 21:16:24.820438   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 20:57 /usr/share/ca-certificates/1728.pem
	I0310 21:16:24.833727   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1728.pem
	I0310 21:16:24.887090   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:25.002802   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9088.pem && ln -fs /usr/share/ca-certificates/9088.pem /etc/ssl/certs/9088.pem"
	I0310 21:16:25.139158   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9088.pem
	I0310 21:16:25.215077   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 00:22 /usr/share/ca-certificates/9088.pem
	I0310 21:16:25.226856   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9088.pem
	I0310 21:16:25.329399   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9088.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:25.415679   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7024.pem && ln -fs /usr/share/ca-certificates/7024.pem /etc/ssl/certs/7024.pem"
	I0310 21:16:25.506268   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7024.pem
	I0310 21:16:25.552563   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 23:11 /usr/share/ca-certificates/7024.pem
	I0310 21:16:25.554238   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7024.pem
	I0310 21:16:25.619649   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7024.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:25.686537   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4588.pem && ln -fs /usr/share/ca-certificates/4588.pem /etc/ssl/certs/4588.pem"
	I0310 21:16:25.856725   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4588.pem
	I0310 21:16:25.886792   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  3 21:41 /usr/share/ca-certificates/4588.pem
	I0310 21:16:25.907193   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4588.pem
	I0310 21:16:25.975930   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4588.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:26.225991   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4452.pem && ln -fs /usr/share/ca-certificates/4452.pem /etc/ssl/certs/4452.pem"
	I0310 21:16:26.345109   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4452.pem
	I0310 21:16:26.388336   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 22 23:48 /usr/share/ca-certificates/4452.pem
	I0310 21:16:26.404723   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4452.pem
	I0310 21:16:26.450876   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4452.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:26.549596   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7440.pem && ln -fs /usr/share/ca-certificates/7440.pem /etc/ssl/certs/7440.pem"
	I0310 21:16:26.635506   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7440.pem
	I0310 21:16:26.696961   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 13 14:39 /usr/share/ca-certificates/7440.pem
	I0310 21:16:26.720649   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7440.pem
	I0310 21:16:26.783653   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7440.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:26.870395   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6856.pem && ln -fs /usr/share/ca-certificates/6856.pem /etc/ssl/certs/6856.pem"
	I0310 21:16:27.009587   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6856.pem
	I0310 21:16:27.088799   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 00:21 /usr/share/ca-certificates/6856.pem
	I0310 21:16:27.096950   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6856.pem
	I0310 21:16:27.191326   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6856.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:27.271357   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9520.pem && ln -fs /usr/share/ca-certificates/9520.pem /etc/ssl/certs/9520.pem"
	I0310 21:16:27.400327   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9520.pem
	I0310 21:16:27.458333   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 14:54 /usr/share/ca-certificates/9520.pem
	I0310 21:16:27.474366   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9520.pem
	I0310 21:16:27.546117   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9520.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:27.642965   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5736.pem && ln -fs /usr/share/ca-certificates/5736.pem /etc/ssl/certs/5736.pem"
	I0310 21:16:27.759298   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5736.pem
	I0310 21:16:27.810452   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 25 23:18 /usr/share/ca-certificates/5736.pem
	I0310 21:16:27.818502   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5736.pem
	I0310 21:16:27.919619   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5736.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:28.007929   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/800.pem && ln -fs /usr/share/ca-certificates/800.pem /etc/ssl/certs/800.pem"
	I0310 21:16:28.193743   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/800.pem
	I0310 21:16:28.252055   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 24 01:48 /usr/share/ca-certificates/800.pem
	I0310 21:16:28.284643   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/800.pem
	I0310 21:16:28.409623   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/800.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:28.558866   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1984.pem && ln -fs /usr/share/ca-certificates/1984.pem /etc/ssl/certs/1984.pem"
	I0310 21:16:28.661796   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1984.pem
	I0310 21:16:28.724420   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 21:55 /usr/share/ca-certificates/1984.pem
	I0310 21:16:28.742363   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1984.pem
	I0310 21:16:28.943326   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1984.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:29.055006   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3516.pem && ln -fs /usr/share/ca-certificates/3516.pem /etc/ssl/certs/3516.pem"
	I0310 21:16:29.315692   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3516.pem
	I0310 21:16:29.429587   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 19:10 /usr/share/ca-certificates/3516.pem
	I0310 21:16:29.439974   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3516.pem
	I0310 21:16:29.575079   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3516.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:29.752125   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6552.pem && ln -fs /usr/share/ca-certificates/6552.pem /etc/ssl/certs/6552.pem"
	I0310 21:16:29.956107   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6552.pem
	I0310 21:16:29.986615   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 19 22:08 /usr/share/ca-certificates/6552.pem
	I0310 21:16:30.011272   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6552.pem
	I0310 21:16:30.139636   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6552.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:30.349874   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4944.pem && ln -fs /usr/share/ca-certificates/4944.pem /etc/ssl/certs/4944.pem"
	I0310 21:16:30.694076   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4944.pem
	I0310 21:16:30.771628   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  9 23:40 /usr/share/ca-certificates/4944.pem
	I0310 21:16:30.784428   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4944.pem
	I0310 21:16:30.864524   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4944.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:30.977473   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8748.pem && ln -fs /usr/share/ca-certificates/8748.pem /etc/ssl/certs/8748.pem"
	I0310 21:16:31.076682   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8748.pem
	I0310 21:16:31.113503   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 19:09 /usr/share/ca-certificates/8748.pem
	I0310 21:16:31.124461   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8748.pem
	I0310 21:16:31.217729   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8748.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:31.333346   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1476.pem && ln -fs /usr/share/ca-certificates/1476.pem /etc/ssl/certs/1476.pem"
	I0310 21:16:31.483269   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1476.pem
	I0310 21:16:31.526373   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:50 /usr/share/ca-certificates/1476.pem
	I0310 21:16:31.537219   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1476.pem
	I0310 21:16:31.596598   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1476.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:31.718622   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1156.pem && ln -fs /usr/share/ca-certificates/1156.pem /etc/ssl/certs/1156.pem"
	I0310 21:16:31.921183   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1156.pem
	I0310 21:16:31.959516   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 00:26 /usr/share/ca-certificates/1156.pem
	I0310 21:16:31.971027   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1156.pem
	I0310 21:16:32.024377   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1156.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:32.136900   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3920.pem && ln -fs /usr/share/ca-certificates/3920.pem /etc/ssl/certs/3920.pem"
	I0310 21:16:32.281294   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3920.pem
	I0310 21:16:32.336267   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 22:06 /usr/share/ca-certificates/3920.pem
	I0310 21:16:32.352879   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3920.pem
	I0310 21:16:32.446280   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3920.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:32.558295   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5040.pem && ln -fs /usr/share/ca-certificates/5040.pem /etc/ssl/certs/5040.pem"
	I0310 21:16:32.792738   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5040.pem
	I0310 21:16:32.836791   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 08:36 /usr/share/ca-certificates/5040.pem
	I0310 21:16:32.858509   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5040.pem
	I0310 21:16:32.952556   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5040.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:33.186783   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8464.pem && ln -fs /usr/share/ca-certificates/8464.pem /etc/ssl/certs/8464.pem"
	I0310 21:16:33.346011   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8464.pem
	I0310 21:16:33.397559   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:32 /usr/share/ca-certificates/8464.pem
	I0310 21:16:33.416230   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8464.pem
	I0310 21:16:33.487744   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8464.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:33.617189   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2512.pem && ln -fs /usr/share/ca-certificates/2512.pem /etc/ssl/certs/2512.pem"
	I0310 21:16:33.830275   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2512.pem
	I0310 21:16:33.935207   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:32 /usr/share/ca-certificates/2512.pem
	I0310 21:16:33.944448   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2512.pem
	I0310 21:16:34.041522   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2512.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:34.233811   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6496.pem && ln -fs /usr/share/ca-certificates/6496.pem /etc/ssl/certs/6496.pem"
	I0310 21:16:34.372370   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6496.pem
	I0310 21:16:34.509401   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 19:16 /usr/share/ca-certificates/6496.pem
	I0310 21:16:34.518413   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6496.pem
	I0310 21:16:34.702785   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6496.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:34.799350   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/352.pem && ln -fs /usr/share/ca-certificates/352.pem /etc/ssl/certs/352.pem"
	I0310 21:16:34.926033   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/352.pem
	I0310 21:16:34.966995   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 12 14:51 /usr/share/ca-certificates/352.pem
	I0310 21:16:34.966995   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/352.pem
	I0310 21:16:35.120646   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/352.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:35.249798   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5396.pem && ln -fs /usr/share/ca-certificates/5396.pem /etc/ssl/certs/5396.pem"
	I0310 21:16:35.439454   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5396.pem
	I0310 21:16:35.491670   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  8 23:38 /usr/share/ca-certificates/5396.pem
	I0310 21:16:35.511821   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5396.pem
	I0310 21:16:35.609657   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5396.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:35.765770   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6692.pem && ln -fs /usr/share/ca-certificates/6692.pem /etc/ssl/certs/6692.pem"
	I0310 21:16:35.933136   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6692.pem
	I0310 21:16:36.038232   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 20:42 /usr/share/ca-certificates/6692.pem
	I0310 21:16:36.049260   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6692.pem
	I0310 21:16:36.111357   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6692.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:36.202159   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6492.pem && ln -fs /usr/share/ca-certificates/6492.pem /etc/ssl/certs/6492.pem"
	I0310 21:16:36.321397   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6492.pem
	I0310 21:16:36.435742   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 01:11 /usr/share/ca-certificates/6492.pem
	I0310 21:16:36.447025   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6492.pem
	I0310 21:16:36.526293   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6492.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:36.641234   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7160.pem && ln -fs /usr/share/ca-certificates/7160.pem /etc/ssl/certs/7160.pem"
	I0310 21:16:36.760565   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7160.pem
	I0310 21:16:36.803836   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 12 04:51 /usr/share/ca-certificates/7160.pem
	I0310 21:16:36.810661   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7160.pem
	I0310 21:16:36.894168   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7160.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:36.975012   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5372.pem && ln -fs /usr/share/ca-certificates/5372.pem /etc/ssl/certs/5372.pem"
	I0310 21:16:37.145547   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5372.pem
	I0310 21:16:37.232541   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 23 00:40 /usr/share/ca-certificates/5372.pem
	I0310 21:16:37.256191   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5372.pem
	I0310 21:16:37.378101   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5372.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:37.467169   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/232.pem && ln -fs /usr/share/ca-certificates/232.pem /etc/ssl/certs/232.pem"
	I0310 21:16:37.533453   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/232.pem
	I0310 21:16:37.656621   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 28 02:13 /usr/share/ca-certificates/232.pem
	I0310 21:16:37.667571   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/232.pem
	I0310 21:16:37.779938   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/232.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:37.933898   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4052.pem && ln -fs /usr/share/ca-certificates/4052.pem /etc/ssl/certs/4052.pem"
	I0310 21:16:38.696320   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4052.pem
	I0310 21:16:38.736172   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 18:40 /usr/share/ca-certificates/4052.pem
	I0310 21:16:38.757427   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4052.pem
	I0310 21:16:38.827599   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4052.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:38.987670   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0310 21:16:39.135835   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0310 21:16:39.180960   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Jan  5 23:33 /usr/share/ca-certificates/minikubeCA.pem
	I0310 21:16:39.192268   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0310 21:16:39.231265   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0310 21:16:39.505315   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10992.pem && ln -fs /usr/share/ca-certificates/10992.pem /etc/ssl/certs/10992.pem"
	I0310 21:16:39.730955   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/10992.pem
	I0310 21:16:39.790291   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 21:44 /usr/share/ca-certificates/10992.pem
	I0310 21:16:39.864490   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10992.pem
	I0310 21:16:39.937802   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10992.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:40.039568   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1140.pem && ln -fs /usr/share/ca-certificates/1140.pem /etc/ssl/certs/1140.pem"
	I0310 21:16:40.151587   22316 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1140.pem
	I0310 21:16:40.196616   22316 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 02:25 /usr/share/ca-certificates/1140.pem
	I0310 21:16:40.224510   22316 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1140.pem
	I0310 21:16:40.281488   22316 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1140.pem /etc/ssl/certs/51391683.0"
	I0310 21:16:40.379387   22316 kubeadm.go:385] StartCluster: {Name:false-20210310211211-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:false-20210310211211-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.8 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 21:16:40.399115   22316 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0310 21:16:41.208974   22316 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0310 21:16:41.314735   22316 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0310 21:16:41.445294   22316 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	I0310 21:16:41.456004   22316 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0310 21:16:41.543999   22316 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0310 21:16:41.544273   22316 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0310 21:16:57.667209   22316 out.go:150]   - Generating certificates and keys ...
	I0310 21:17:36.333412   22316 out.go:150]   - Booting up control plane ...
	W0310 21:21:51.765464   22316 out.go:191] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [false-20210310211211-6496 localhost] and IPs [172.17.0.8 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [false-20210310211211-6496 localhost] and IPs [172.17.0.8 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [false-20210310211211-6496 localhost] and IPs [172.17.0.8 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [false-20210310211211-6496 localhost] and IPs [172.17.0.8 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0310 21:21:51.766143   22316 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0310 21:23:32.778429   22316 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force": (1m41.0124161s)
	I0310 21:23:32.791925   22316 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0310 21:23:33.067911   22316 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0310 21:23:33.988729   22316 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	I0310 21:23:34.003738   22316 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0310 21:23:35.434316   22316 ssh_runner.go:189] Completed: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: (1.4305796s)
	I0310 21:23:35.434613   22316 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0310 21:23:35.434613   22316 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0310 21:28:03.491179   22316 out.go:150]   - Generating certificates and keys ...
	I0310 21:28:03.759359   22316 out.go:150]   - Booting up control plane ...
	I0310 21:28:03.763053   22316 kubeadm.go:387] StartCluster complete in 11m23.3855813s
	I0310 21:28:03.770283   22316 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0310 21:28:10.599387   22316 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: (6.8291317s)
	I0310 21:28:10.599387   22316 logs.go:255] 1 containers: [549dc83d86e2]
	I0310 21:28:10.610346   22316 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0310 21:28:16.277369   22316 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: (5.6669546s)
	I0310 21:28:16.278011   22316 logs.go:255] 1 containers: [e9fcf0291799]
	I0310 21:28:16.297321   22316 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0310 21:28:27.192991   22316 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: (10.8957117s)
	I0310 21:28:27.193742   22316 logs.go:255] 0 containers: []
	W0310 21:28:27.194173   22316 logs.go:257] No container was found matching "coredns"
	I0310 21:28:27.211606   22316 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0310 21:28:36.841716   22316 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: (9.629928s)
	I0310 21:28:36.841716   22316 logs.go:255] 1 containers: [1f40f04d70b6]
	I0310 21:28:36.849449   22316 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0310 21:28:44.938954   22316 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: (8.0893686s)
	I0310 21:28:44.938954   22316 logs.go:255] 0 containers: []
	W0310 21:28:44.938954   22316 logs.go:257] No container was found matching "kube-proxy"
	I0310 21:28:44.948793   22316 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0310 21:28:53.291611   22316 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}: (8.3426386s)
	I0310 21:28:53.291921   22316 logs.go:255] 0 containers: []
	W0310 21:28:53.291921   22316 logs.go:257] No container was found matching "kubernetes-dashboard"
	I0310 21:28:53.306419   22316 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0310 21:29:00.234308   22316 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: (6.9279134s)
	I0310 21:29:00.234308   22316 logs.go:255] 0 containers: []
	W0310 21:29:00.234308   22316 logs.go:257] No container was found matching "storage-provisioner"
	I0310 21:29:00.243752   22316 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0310 21:29:03.568487   22316 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: (3.3247464s)
	I0310 21:29:03.569010   22316 logs.go:255] 1 containers: [69707ea57db5]
	I0310 21:29:03.569010   22316 logs.go:122] Gathering logs for etcd [e9fcf0291799] ...
	I0310 21:29:03.569010   22316 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 e9fcf0291799"
	I0310 21:29:06.228059   22316 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 e9fcf0291799": (2.6590572s)
	I0310 21:29:06.255715   22316 logs.go:122] Gathering logs for kube-scheduler [1f40f04d70b6] ...
	I0310 21:29:06.256016   22316 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 1f40f04d70b6"
	I0310 21:29:17.205650   22316 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 1f40f04d70b6": (10.9496715s)
	I0310 21:29:17.231614   22316 logs.go:122] Gathering logs for Docker ...
	I0310 21:29:17.231614   22316 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0310 21:29:19.255764   22316 ssh_runner.go:189] Completed: /bin/bash -c "sudo journalctl -u docker -n 400": (2.0241573s)
	I0310 21:29:19.265019   22316 logs.go:122] Gathering logs for container status ...
	I0310 21:29:19.265204   22316 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0310 21:29:22.342159   22316 ssh_runner.go:189] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (3.0769649s)
	I0310 21:29:22.342741   22316 logs.go:122] Gathering logs for kubelet ...
	I0310 21:29:22.342741   22316 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0310 21:29:29.185122   22316 ssh_runner.go:189] Completed: /bin/bash -c "sudo journalctl -u kubelet -n 400": (6.8424038s)
	I0310 21:29:29.253630   22316 logs.go:122] Gathering logs for dmesg ...
	I0310 21:29:29.254626   22316 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0310 21:29:33.176688   22316 ssh_runner.go:189] Completed: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400": (3.9220747s)
	I0310 21:29:33.180689   22316 logs.go:122] Gathering logs for describe nodes ...
	I0310 21:29:33.180689   22316 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0310 21:30:40.299495   22316 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1m7.1190077s)
	I0310 21:30:40.303232   22316 logs.go:122] Gathering logs for kube-apiserver [549dc83d86e2] ...
	I0310 21:30:40.303232   22316 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 549dc83d86e2"
	I0310 21:30:48.124398   22316 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 549dc83d86e2": (7.8210494s)
	I0310 21:30:48.158526   22316 logs.go:122] Gathering logs for kube-controller-manager [69707ea57db5] ...
	I0310 21:30:48.158526   22316 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 69707ea57db5"
	I0310 21:30:52.270015   22316 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 69707ea57db5": (4.1115006s)
	W0310 21:30:52.565103   22316 out.go:312] Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0310 21:30:52.565244   22316 out.go:191] * 
	* 
	W0310 21:30:52.565689   22316 out.go:191] X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0310 21:30:52.565889   22316 out.go:191] * 
	* 
	W0310 21:30:52.565889   22316 out.go:191] * minikube is exiting due to an error. If the above message is not useful, open an issue:
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	W0310 21:30:52.565889   22316 out.go:191]   - https://github.com/kubernetes/minikube/issues/new/choose
	  - https://github.com/kubernetes/minikube/issues/new/choose
	I0310 21:30:53.287049   22316 out.go:129] 
	W0310 21:30:53.288273   22316 out.go:191] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0310 21:30:53.289343   22316 out.go:191] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0310 21:30:53.289343   22316 out.go:191] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0310 21:30:53.632029   22316 out.go:129] 

                                                
                                                
** /stderr **
net_test.go:82: failed start: exit status 109
--- FAIL: TestNetworkPlugins/group/false/Start (1122.97s)
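Nearly every failed start in this report exits with K8S_KUBELET_NOT_RUNNING after the same preflight warning: Docker is running with the "cgroupfs" cgroup driver while the recommended driver is "systemd", and the log's own suggestion is to pass --extra-config=kubelet.cgroup-driver=systemd to minikube start. The complementary fix on the Docker side is to switch dockerd to the systemd driver via daemon.json. A minimal sketch, not taken from the report itself (the file path and restart step assume a systemd-managed Docker inside the node):

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```

After writing this to /etc/docker/daemon.json and restarting Docker (sudo systemctl restart docker), kubeadm's IsDockerSystemdCheck warning should clear; either way, the kubelet and Docker must agree on one driver, which is what the repeated timeouts above point at.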

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (735.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-20210310205017-6496 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.20.2

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:196: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p embed-certs-20210310205017-6496 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.20.2: exit status 1 (7m36.3644943s)

                                                
                                                
-- stdout --
	* [embed-certs-20210310205017-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - MINIKUBE_LOCATION=10722
	* Using the docker driver based on existing profile
	* Starting control plane node embed-certs-20210310205017-6496 in cluster embed-certs-20210310205017-6496
	* Restarting existing docker container for "embed-certs-20210310205017-6496" ...
	* Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0310 21:12:41.864466   18444 out.go:239] Setting OutFile to fd 2560 ...
	I0310 21:12:41.865478   18444 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 21:12:41.865478   18444 out.go:252] Setting ErrFile to fd 1780...
	I0310 21:12:41.865478   18444 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 21:12:41.876384   18444 out.go:246] Setting JSON to false
	I0310 21:12:41.878392   18444 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":36227,"bootTime":1615374534,"procs":118,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0310 21:12:41.879390   18444 start.go:116] gopshost.Virtualization returned error: not implemented yet
	I0310 21:12:41.883412   18444 out.go:129] * [embed-certs-20210310205017-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0310 21:12:41.886411   18444 out.go:129]   - MINIKUBE_LOCATION=10722
	I0310 21:12:41.897821   18444 driver.go:323] Setting default libvirt URI to qemu:///system
	I0310 21:12:42.439380   18444 docker.go:119] docker version: linux-20.10.2
	I0310 21:12:42.446315   18444 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 21:12:43.723542   18444 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.277228s)
	I0310 21:12:43.724888   18444 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:9 ContainersRunning:8 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:97 OomKillDisable:true NGoroutines:71 SystemTime:2021-03-10 21:12:43.1218639 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://ind
ex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[]
ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 21:12:43.728250   18444 out.go:129] * Using the docker driver based on existing profile
	I0310 21:12:43.729044   18444 start.go:276] selected driver: docker
	I0310 21:12:43.729044   18444 start.go:718] validating driver "docker" against &{Name:embed-certs-20210310205017-6496 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:embed-certs-20210310205017-6496 Namespace:default APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.97 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[dashboard:true] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 21:12:43.729311   18444 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0310 21:12:44.851133   18444 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 21:12:45.911277   18444 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0601454s)
	I0310 21:12:45.912501   18444 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:9 ContainersRunning:8 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:71 SystemTime:2021-03-10 21:12:45.4750355 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://ind
ex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[]
ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 21:12:45.913355   18444 start_flags.go:717] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0310 21:12:45.913484   18444 start_flags.go:398] config:
	{Name:embed-certs-20210310205017-6496 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:embed-certs-20210310205017-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISoc
ket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.97 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[dashboard:true] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 21:12:45.918095   18444 out.go:129] * Starting control plane node embed-certs-20210310205017-6496 in cluster embed-certs-20210310205017-6496
	I0310 21:12:46.600599   18444 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	I0310 21:12:46.600599   18444 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	I0310 21:12:46.601017   18444 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	I0310 21:12:46.601421   18444 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	I0310 21:12:46.601421   18444 cache.go:54] Caching tarball of preloaded images
	I0310 21:12:46.601731   18444 preload.go:131] Found C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0310 21:12:46.601731   18444 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on docker
	I0310 21:12:46.602007   18444 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\config.json ...
	I0310 21:12:46.616567   18444 cache.go:185] Successfully downloaded all kic artifacts
	I0310 21:12:46.622732   18444 start.go:313] acquiring machines lock for embed-certs-20210310205017-6496: {Name:mk5deb5478a17b664131b4c3205eef748b11179e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:12:46.624001   18444 start.go:317] acquired machines lock for "embed-certs-20210310205017-6496" in 300.2µs
	I0310 21:12:46.624373   18444 start.go:93] Skipping create...Using existing machine configuration
	I0310 21:12:46.624586   18444 fix.go:55] fixHost starting: 
	I0310 21:12:46.639912   18444 cli_runner.go:115] Run: docker container inspect embed-certs-20210310205017-6496 --format={{.State.Status}}
	I0310 21:12:47.310427   18444 fix.go:108] recreateIfNeeded on embed-certs-20210310205017-6496: state=Stopped err=<nil>
	W0310 21:12:47.310427   18444 fix.go:134] unexpected machine state, will restart: <nil>
	I0310 21:12:47.314638   18444 out.go:129] * Restarting existing docker container for "embed-certs-20210310205017-6496" ...
	I0310 21:12:47.319764   18444 cli_runner.go:115] Run: docker start embed-certs-20210310205017-6496
	I0310 21:12:54.762225   18444 cli_runner.go:168] Completed: docker start embed-certs-20210310205017-6496: (7.4424719s)
	I0310 21:12:54.773282   18444 cli_runner.go:115] Run: docker container inspect embed-certs-20210310205017-6496 --format={{.State.Status}}
	I0310 21:12:55.405890   18444 kic.go:410] container "embed-certs-20210310205017-6496" state is running.
	I0310 21:12:55.438914   18444 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210310205017-6496
	I0310 21:12:56.104627   18444 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\config.json ...
	I0310 21:12:56.120914   18444 machine.go:88] provisioning docker machine ...
	I0310 21:12:56.121045   18444 ubuntu.go:169] provisioning hostname "embed-certs-20210310205017-6496"
	I0310 21:12:56.131918   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:12:56.741795   18444 main.go:121] libmachine: Using SSH client type: native
	I0310 21:12:56.743328   18444 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55183 <nil> <nil>}
	I0310 21:12:56.743564   18444 main.go:121] libmachine: About to run SSH command:
	sudo hostname embed-certs-20210310205017-6496 && echo "embed-certs-20210310205017-6496" | sudo tee /etc/hostname
	I0310 21:12:56.757510   18444 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0310 21:12:59.783456   18444 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0310 21:13:02.798933   18444 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0310 21:13:07.819194   18444 main.go:121] libmachine: SSH cmd err, output: <nil>: embed-certs-20210310205017-6496
	
	I0310 21:13:07.838772   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:13:08.438721   18444 main.go:121] libmachine: Using SSH client type: native
	I0310 21:13:08.439026   18444 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55183 <nil> <nil>}
	I0310 21:13:08.439026   18444 main.go:121] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20210310205017-6496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20210310205017-6496/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20210310205017-6496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0310 21:13:09.846781   18444 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	I0310 21:13:09.847029   18444 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	I0310 21:13:09.847029   18444 ubuntu.go:177] setting up certificates
	I0310 21:13:09.847029   18444 provision.go:83] configureAuth start
	I0310 21:13:09.857215   18444 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210310205017-6496
	I0310 21:13:10.524620   18444 provision.go:137] copyHostCerts
	I0310 21:13:10.525147   18444 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	I0310 21:13:10.525431   18444 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	I0310 21:13:10.525817   18444 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	I0310 21:13:10.531398   18444 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	I0310 21:13:10.531398   18444 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	I0310 21:13:10.531945   18444 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	I0310 21:13:10.538062   18444 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	I0310 21:13:10.538062   18444 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	I0310 21:13:10.539306   18444 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	I0310 21:13:10.542016   18444 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.embed-certs-20210310205017-6496 san=[192.168.49.97 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20210310205017-6496]
	I0310 21:13:10.734235   18444 provision.go:165] copyRemoteCerts
	I0310 21:13:10.742139   18444 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0310 21:13:10.751385   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:13:11.337830   18444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55183 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:13:12.461526   18444 ssh_runner.go:189] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.7193887s)
	I0310 21:13:12.462290   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0310 21:13:13.480520   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1265 bytes)
	I0310 21:13:14.824810   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0310 21:13:16.109429   18444 provision.go:86] duration metric: configureAuth took 6.2624095s
	I0310 21:13:16.109429   18444 ubuntu.go:193] setting minikube options for container-runtime
	I0310 21:13:16.120584   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:13:16.713288   18444 main.go:121] libmachine: Using SSH client type: native
	I0310 21:13:16.714294   18444 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55183 <nil> <nil>}
	I0310 21:13:16.714294   18444 main.go:121] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0310 21:13:18.734749   18444 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0310 21:13:18.734749   18444 ubuntu.go:71] root file system type: overlay
	I0310 21:13:18.735219   18444 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	I0310 21:13:18.742632   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:13:19.364241   18444 main.go:121] libmachine: Using SSH client type: native
	I0310 21:13:19.364241   18444 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55183 <nil> <nil>}
	I0310 21:13:19.365793   18444 main.go:121] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0310 21:13:21.300423   18444 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0310 21:13:21.308894   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:13:21.934752   18444 main.go:121] libmachine: Using SSH client type: native
	I0310 21:13:21.935488   18444 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55183 <nil> <nil>}
	I0310 21:13:21.935808   18444 main.go:121] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0310 21:13:24.022228   18444 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	I0310 21:13:24.022499   18444 machine.go:91] provisioned docker machine in 27.9014928s
	I0310 21:13:24.022499   18444 start.go:267] post-start starting for "embed-certs-20210310205017-6496" (driver="docker")
	I0310 21:13:24.022499   18444 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0310 21:13:24.025597   18444 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0310 21:13:24.039903   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:13:24.692780   18444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55183 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:13:25.877491   18444 ssh_runner.go:189] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.8518959s)
	I0310 21:13:25.894216   18444 ssh_runner.go:149] Run: cat /etc/os-release
	I0310 21:13:26.002177   18444 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0310 21:13:26.002579   18444 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0310 21:13:26.002579   18444 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0310 21:13:26.002579   18444 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0310 21:13:26.002861   18444 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	I0310 21:13:26.003528   18444 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	I0310 21:13:26.007088   18444 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	I0310 21:13:26.007611   18444 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	I0310 21:13:26.020549   18444 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	I0310 21:13:26.333222   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	I0310 21:13:27.063958   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	I0310 21:13:27.872391   18444 start.go:270] post-start completed in 3.8495904s
	I0310 21:13:27.886786   18444 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0310 21:13:27.894704   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:13:28.514187   18444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55183 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:13:29.370055   18444 ssh_runner.go:189] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.4830372s)
	I0310 21:13:29.370219   18444 fix.go:57] fixHost completed within 42.7459052s
	I0310 21:13:29.370219   18444 start.go:80] releasing machines lock for "embed-certs-20210310205017-6496", held for 42.7462779s
	I0310 21:13:29.388654   18444 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20210310205017-6496
	I0310 21:13:30.092636   18444 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0310 21:13:30.096865   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:13:30.096865   18444 ssh_runner.go:149] Run: systemctl --version
	I0310 21:13:30.111612   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:13:30.809856   18444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55183 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:13:30.868547   18444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55183 SSHKeyPath:C:\Users\jenkins\.minikube\machines\embed-certs-20210310205017-6496\id_rsa Username:docker}
	I0310 21:13:33.006142   18444 ssh_runner.go:189] Completed: systemctl --version: (2.9092807s)
	I0310 21:13:33.006142   18444 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (2.9135095s)
	I0310 21:13:33.024806   18444 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0310 21:13:33.414467   18444 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 21:13:33.678401   18444 cruntime.go:206] skipping containerd shutdown because we are bound to it
	I0310 21:13:33.690511   18444 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0310 21:13:33.986977   18444 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0310 21:13:34.796782   18444 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 21:13:35.098284   18444 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0310 21:13:38.914658   18444 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (3.8157861s)
	I0310 21:13:38.923160   18444 ssh_runner.go:149] Run: sudo systemctl start docker
	I0310 21:14:04.360578   18444 ssh_runner.go:189] Completed: sudo systemctl start docker: (25.4374535s)
	I0310 21:14:04.377902   18444 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0310 21:14:06.781861   18444 ssh_runner.go:189] Completed: docker version --format {{.Server.Version}}: (2.4039621s)
	I0310 21:14:06.792543   18444 out.go:150] * Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	I0310 21:14:06.800238   18444 cli_runner.go:115] Run: docker exec -t embed-certs-20210310205017-6496 dig +short host.docker.internal
	I0310 21:14:08.636285   18444 cli_runner.go:168] Completed: docker exec -t embed-certs-20210310205017-6496 dig +short host.docker.internal: (1.8358704s)
	I0310 21:14:08.636451   18444 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	I0310 21:14:08.655564   18444 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0310 21:14:08.698946   18444 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0310 21:14:08.879534   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:14:09.472845   18444 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	I0310 21:14:09.473124   18444 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	I0310 21:14:09.480641   18444 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 21:14:11.047506   18444 ssh_runner.go:189] Completed: docker images --format {{.Repository}}:{{.Tag}}: (1.5668664s)
	I0310 21:14:11.047979   18444 docker.go:423] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.20.2
	k8s.gcr.io/kube-controller-manager:v1.20.2
	k8s.gcr.io/kube-apiserver:v1.20.2
	k8s.gcr.io/kube-scheduler:v1.20.2
	minikube-local-cache-test:functional-20210120214442-10992
	kubernetesui/dashboard:v2.1.0
	gcr.io/k8s-minikube/storage-provisioner:v4
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	busybox:1.28.4-glibc
	
	-- /stdout --
	I0310 21:14:11.047979   18444 docker.go:360] Images already preloaded, skipping extraction
	I0310 21:14:11.054361   18444 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 21:14:12.426908   18444 ssh_runner.go:189] Completed: docker images --format {{.Repository}}:{{.Tag}}: (1.3725485s)
	I0310 21:14:12.426908   18444 docker.go:423] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.20.2
	k8s.gcr.io/kube-apiserver:v1.20.2
	k8s.gcr.io/kube-controller-manager:v1.20.2
	k8s.gcr.io/kube-scheduler:v1.20.2
	minikube-local-cache-test:functional-20210120214442-10992
	kubernetesui/dashboard:v2.1.0
	gcr.io/k8s-minikube/storage-provisioner:v4
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	busybox:1.28.4-glibc
	
	-- /stdout --
	I0310 21:14:12.427322   18444 cache_images.go:73] Images are preloaded, skipping loading
	I0310 21:14:12.443022   18444 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0310 21:14:15.489095   18444 ssh_runner.go:189] Completed: docker info --format {{.CgroupDriver}}: (3.0460767s)
	I0310 21:14:15.489563   18444 cni.go:74] Creating CNI manager for ""
	I0310 21:14:15.489563   18444 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	I0310 21:14:15.489563   18444 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0310 21:14:15.489563   18444 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.97 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20210310205017-6496 NodeName:embed-certs-20210310205017-6496 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0310 21:14:15.490025   18444 kubeadm.go:154] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.97
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "embed-certs-20210310205017-6496"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	
	I0310 21:14:15.490025   18444 kubeadm.go:919] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=embed-certs-20210310205017-6496 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.2 ClusterName:embed-certs-20210310205017-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0310 21:14:15.500645   18444 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
	I0310 21:14:15.750709   18444 binaries.go:44] Found k8s binaries, skipping transfer
	I0310 21:14:15.760620   18444 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0310 21:14:15.892420   18444 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (358 bytes)
	I0310 21:14:16.255217   18444 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0310 21:14:16.515128   18444 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1866 bytes)
	I0310 21:14:16.936554   18444 ssh_runner.go:149] Run: grep 192.168.49.97	control-plane.minikube.internal$ /etc/hosts
	I0310 21:14:17.051613   18444 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "192.168.49.97	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0310 21:14:17.249339   18444 certs.go:52] Setting up C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496 for IP: 192.168.49.97
	I0310 21:14:17.250054   18444 certs.go:171] skipping minikubeCA CA generation: C:\Users\jenkins\.minikube\ca.key
	I0310 21:14:17.250374   18444 certs.go:171] skipping proxyClientCA CA generation: C:\Users\jenkins\.minikube\proxy-client-ca.key
	I0310 21:14:17.251142   18444 certs.go:275] skipping minikube-user signed cert generation: C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\client.key
	I0310 21:14:17.251452   18444 certs.go:275] skipping minikube signed cert generation: C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\apiserver.key.b6188fac
	I0310 21:14:17.251761   18444 certs.go:275] skipping aggregator signed cert generation: C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\proxy-client.key
	I0310 21:14:17.253727   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992.pem (1338 bytes)
	W0310 21:14:17.254281   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.254457   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140.pem (1338 bytes)
	W0310 21:14:17.254818   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.254818   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156.pem (1338 bytes)
	W0310 21:14:17.255513   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.255694   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056.pem (1338 bytes)
	W0310 21:14:17.255953   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.256240   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476.pem (1338 bytes)
	W0310 21:14:17.256649   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.256874   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728.pem (1338 bytes)
	W0310 21:14:17.257184   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.257184   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984.pem (1338 bytes)
	W0310 21:14:17.257607   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.257607   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232.pem (1338 bytes)
	W0310 21:14:17.258151   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.258281   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512.pem (1338 bytes)
	W0310 21:14:17.258570   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.258570   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056.pem (1338 bytes)
	W0310 21:14:17.259035   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.259035   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516.pem (1338 bytes)
	W0310 21:14:17.259503   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.259503   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352.pem (1338 bytes)
	W0310 21:14:17.260201   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.260416   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920.pem (1338 bytes)
	W0310 21:14:17.260745   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.260745   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052.pem (1338 bytes)
	W0310 21:14:17.261286   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.261365   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452.pem (1338 bytes)
	W0310 21:14:17.261697   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.261972   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588.pem (1338 bytes)
	W0310 21:14:17.262248   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.262524   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944.pem (1338 bytes)
	W0310 21:14:17.262881   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.262881   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040.pem (1338 bytes)
	W0310 21:14:17.262881   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.264568   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172.pem (1338 bytes)
	W0310 21:14:17.264932   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.264932   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372.pem (1338 bytes)
	W0310 21:14:17.265561   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.265909   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396.pem (1338 bytes)
	W0310 21:14:17.265909   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.265909   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700.pem (1338 bytes)
	W0310 21:14:17.277114   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.277114   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736.pem (1338 bytes)
	W0310 21:14:17.277440   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.277830   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368.pem (1338 bytes)
	W0310 21:14:17.278449   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.278674   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492.pem (1338 bytes)
	W0310 21:14:17.278997   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.278997   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496.pem (1338 bytes)
	W0310 21:14:17.279584   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.279584   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552.pem (1338 bytes)
	W0310 21:14:17.280544   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.280544   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692.pem (1338 bytes)
	W0310 21:14:17.281012   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.281450   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856.pem (1338 bytes)
	W0310 21:14:17.282006   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.282585   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024.pem (1338 bytes)
	W0310 21:14:17.283107   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.283514   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160.pem (1338 bytes)
	W0310 21:14:17.284122   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.284334   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432.pem (1338 bytes)
	W0310 21:14:17.284876   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.284876   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440.pem (1338 bytes)
	W0310 21:14:17.285871   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.286115   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452.pem (1338 bytes)
	W0310 21:14:17.286504   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.286730   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800.pem (1338 bytes)
	W0310 21:14:17.286919   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.287264   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464.pem (1338 bytes)
	W0310 21:14:17.288137   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.288137   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748.pem (1338 bytes)
	W0310 21:14:17.288701   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.289113   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088.pem (1338 bytes)
	W0310 21:14:17.289652   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.289905   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520.pem (1338 bytes)
	W0310 21:14:17.290450   18444 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520_empty.pem, impossibly tiny 0 bytes
	I0310 21:14:17.291084   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca-key.pem (1679 bytes)
	I0310 21:14:17.291910   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca.pem (1078 bytes)
	I0310 21:14:17.292477   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\cert.pem (1123 bytes)
	I0310 21:14:17.294042   18444 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\key.pem (1679 bytes)
	I0310 21:14:17.302799   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0310 21:14:17.611552   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0310 21:14:18.027309   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0310 21:14:18.697813   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\embed-certs-20210310205017-6496\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0310 21:14:19.297438   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0310 21:14:20.100491   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0310 21:14:20.663291   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0310 21:14:21.082385   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0310 21:14:21.506044   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4452.pem --> /usr/share/ca-certificates/4452.pem (1338 bytes)
	I0310 21:14:21.964278   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0310 21:14:22.582588   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6552.pem --> /usr/share/ca-certificates/6552.pem (1338 bytes)
	I0310 21:14:23.269984   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3920.pem --> /usr/share/ca-certificates/3920.pem (1338 bytes)
	I0310 21:14:23.804639   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6492.pem --> /usr/share/ca-certificates/6492.pem (1338 bytes)
	I0310 21:14:24.325048   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7024.pem --> /usr/share/ca-certificates/7024.pem (1338 bytes)
	I0310 21:14:25.179814   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	I0310 21:14:25.526443   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\232.pem --> /usr/share/ca-certificates/232.pem (1338 bytes)
	I0310 21:14:25.846574   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4944.pem --> /usr/share/ca-certificates/4944.pem (1338 bytes)
	I0310 21:14:26.544098   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7432.pem --> /usr/share/ca-certificates/7432.pem (1338 bytes)
	I0310 21:14:27.372232   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1984.pem --> /usr/share/ca-certificates/1984.pem (1338 bytes)
	I0310 21:14:28.020652   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3516.pem --> /usr/share/ca-certificates/3516.pem (1338 bytes)
	I0310 21:14:28.569180   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7160.pem --> /usr/share/ca-certificates/7160.pem (1338 bytes)
	I0310 21:14:29.187113   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\2512.pem --> /usr/share/ca-certificates/2512.pem (1338 bytes)
	I0310 21:14:29.703522   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8464.pem --> /usr/share/ca-certificates/8464.pem (1338 bytes)
	I0310 21:14:29.940505   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6368.pem --> /usr/share/ca-certificates/6368.pem (1338 bytes)
	I0310 21:14:30.330901   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6856.pem --> /usr/share/ca-certificates/6856.pem (1338 bytes)
	I0310 21:14:30.761712   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1728.pem --> /usr/share/ca-certificates/1728.pem (1338 bytes)
	I0310 21:14:31.386556   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6496.pem --> /usr/share/ca-certificates/6496.pem (1338 bytes)
	I0310 21:14:31.743984   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5372.pem --> /usr/share/ca-certificates/5372.pem (1338 bytes)
	I0310 21:14:32.308527   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5736.pem --> /usr/share/ca-certificates/5736.pem (1338 bytes)
	I0310 21:14:32.876666   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8748.pem --> /usr/share/ca-certificates/8748.pem (1338 bytes)
	I0310 21:14:33.393082   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\10992.pem --> /usr/share/ca-certificates/10992.pem (1338 bytes)
	I0310 21:14:33.869324   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\352.pem --> /usr/share/ca-certificates/352.pem (1338 bytes)
	I0310 21:14:34.519740   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5396.pem --> /usr/share/ca-certificates/5396.pem (1338 bytes)
	I0310 21:14:34.897144   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3056.pem --> /usr/share/ca-certificates/3056.pem (1338 bytes)
	I0310 21:14:35.399582   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5172.pem --> /usr/share/ca-certificates/5172.pem (1338 bytes)
	I0310 21:14:35.995819   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6692.pem --> /usr/share/ca-certificates/6692.pem (1338 bytes)
	I0310 21:14:36.567107   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1156.pem --> /usr/share/ca-certificates/1156.pem (1338 bytes)
	I0310 21:14:37.251245   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1476.pem --> /usr/share/ca-certificates/1476.pem (1338 bytes)
	I0310 21:14:37.644297   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9088.pem --> /usr/share/ca-certificates/9088.pem (1338 bytes)
	I0310 21:14:38.203137   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4052.pem --> /usr/share/ca-certificates/4052.pem (1338 bytes)
	I0310 21:14:38.693835   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4588.pem --> /usr/share/ca-certificates/4588.pem (1338 bytes)
	I0310 21:14:39.210694   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\12056.pem --> /usr/share/ca-certificates/12056.pem (1338 bytes)
	I0310 21:14:39.971743   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5040.pem --> /usr/share/ca-certificates/5040.pem (1338 bytes)
	I0310 21:14:40.475216   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\800.pem --> /usr/share/ca-certificates/800.pem (1338 bytes)
	I0310 21:14:41.320098   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9520.pem --> /usr/share/ca-certificates/9520.pem (1338 bytes)
	I0310 21:14:42.232973   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7440.pem --> /usr/share/ca-certificates/7440.pem (1338 bytes)
	I0310 21:14:42.908772   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5700.pem --> /usr/share/ca-certificates/5700.pem (1338 bytes)
	I0310 21:14:43.781716   18444 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1140.pem --> /usr/share/ca-certificates/1140.pem (1338 bytes)
	I0310 21:14:44.696746   18444 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0310 21:14:45.390443   18444 ssh_runner.go:149] Run: openssl version
	I0310 21:14:45.482437   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7440.pem && ln -fs /usr/share/ca-certificates/7440.pem /etc/ssl/certs/7440.pem"
	I0310 21:14:45.651988   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7440.pem
	I0310 21:14:45.772783   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 13 14:39 /usr/share/ca-certificates/7440.pem
	I0310 21:14:45.778553   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7440.pem
	I0310 21:14:45.843480   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7440.pem /etc/ssl/certs/51391683.0"
	I0310 21:14:46.174305   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5700.pem && ln -fs /usr/share/ca-certificates/5700.pem /etc/ssl/certs/5700.pem"
	I0310 21:14:46.428570   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5700.pem
	I0310 21:14:46.576065   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  1 19:58 /usr/share/ca-certificates/5700.pem
	I0310 21:14:46.593310   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5700.pem
	I0310 21:14:46.698018   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5700.pem /etc/ssl/certs/51391683.0"
	I0310 21:14:46.848356   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1140.pem && ln -fs /usr/share/ca-certificates/1140.pem /etc/ssl/certs/1140.pem"
	I0310 21:14:47.115394   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1140.pem
	I0310 21:14:47.176091   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 02:25 /usr/share/ca-certificates/1140.pem
	I0310 21:14:47.198899   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1140.pem
	I0310 21:14:47.340380   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1140.pem /etc/ssl/certs/51391683.0"
	I0310 21:14:47.513949   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4452.pem && ln -fs /usr/share/ca-certificates/4452.pem /etc/ssl/certs/4452.pem"
	I0310 21:14:47.905928   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4452.pem
	I0310 21:14:47.951951   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 22 23:48 /usr/share/ca-certificates/4452.pem
	I0310 21:14:47.961932   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4452.pem
	I0310 21:14:48.057359   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4452.pem /etc/ssl/certs/51391683.0"
	I0310 21:14:48.269099   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0310 21:14:48.532324   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0310 21:14:48.654474   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Jan  5 23:33 /usr/share/ca-certificates/minikubeCA.pem
	I0310 21:14:48.663602   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0310 21:14:48.758655   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0310 21:14:49.078686   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6552.pem && ln -fs /usr/share/ca-certificates/6552.pem /etc/ssl/certs/6552.pem"
	I0310 21:14:49.267316   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6552.pem
	I0310 21:14:49.349285   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 19 22:08 /usr/share/ca-certificates/6552.pem
	I0310 21:14:49.357390   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6552.pem
	I0310 21:14:49.519644   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6552.pem /etc/ssl/certs/51391683.0"
	I0310 21:14:49.651422   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1984.pem && ln -fs /usr/share/ca-certificates/1984.pem /etc/ssl/certs/1984.pem"
	I0310 21:14:49.804006   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1984.pem
	I0310 21:14:49.939077   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 21:55 /usr/share/ca-certificates/1984.pem
	I0310 21:14:49.948504   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1984.pem
	I0310 21:14:50.028179   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1984.pem /etc/ssl/certs/51391683.0"
	I0310 21:14:50.158520   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3920.pem && ln -fs /usr/share/ca-certificates/3920.pem /etc/ssl/certs/3920.pem"
	I0310 21:14:50.412166   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3920.pem
	I0310 21:14:50.462274   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 22:06 /usr/share/ca-certificates/3920.pem
	I0310 21:14:50.473068   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3920.pem
	I0310 21:14:50.538722   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3920.pem /etc/ssl/certs/51391683.0"
	I0310 21:14:50.720604   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6492.pem && ln -fs /usr/share/ca-certificates/6492.pem /etc/ssl/certs/6492.pem"
	I0310 21:14:50.888196   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6492.pem
	I0310 21:14:50.946236   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 01:11 /usr/share/ca-certificates/6492.pem
	I0310 21:14:50.957481   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6492.pem
	I0310 21:14:51.071753   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6492.pem /etc/ssl/certs/51391683.0"
	I0310 21:14:51.194145   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7024.pem && ln -fs /usr/share/ca-certificates/7024.pem /etc/ssl/certs/7024.pem"
	I0310 21:14:51.332505   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7024.pem
	I0310 21:14:51.426975   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 23:11 /usr/share/ca-certificates/7024.pem
	I0310 21:14:51.446515   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7024.pem
	I0310 21:14:51.614421   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7024.pem /etc/ssl/certs/51391683.0"
	I0310 21:14:51.713063   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7452.pem && ln -fs /usr/share/ca-certificates/7452.pem /etc/ssl/certs/7452.pem"
	I0310 21:14:51.995049   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7452.pem
	I0310 21:14:52.065898   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 20 00:41 /usr/share/ca-certificates/7452.pem
	I0310 21:14:52.075253   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7452.pem
	I0310 21:14:52.159413   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7452.pem /etc/ssl/certs/51391683.0"
	I0310 21:14:52.288203   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/232.pem && ln -fs /usr/share/ca-certificates/232.pem /etc/ssl/certs/232.pem"
	I0310 21:14:52.480443   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/232.pem
	I0310 21:14:52.576842   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 28 02:13 /usr/share/ca-certificates/232.pem
	I0310 21:14:52.582610   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/232.pem
	I0310 21:14:52.655285   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/232.pem /etc/ssl/certs/51391683.0"
	I0310 21:14:52.783089   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4944.pem && ln -fs /usr/share/ca-certificates/4944.pem /etc/ssl/certs/4944.pem"
	I0310 21:14:53.084901   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4944.pem
	I0310 21:14:53.174625   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  9 23:40 /usr/share/ca-certificates/4944.pem
	I0310 21:14:53.191753   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4944.pem
	I0310 21:14:53.314555   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4944.pem /etc/ssl/certs/51391683.0"
	I0310 21:14:53.501263   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7432.pem && ln -fs /usr/share/ca-certificates/7432.pem /etc/ssl/certs/7432.pem"
	I0310 21:14:53.739028   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7432.pem
	I0310 21:14:53.886737   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 17:58 /usr/share/ca-certificates/7432.pem
	I0310 21:14:53.895999   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7432.pem
	I0310 21:14:53.999562   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7432.pem /etc/ssl/certs/51391683.0"
	I0310 21:14:54.199190   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3516.pem && ln -fs /usr/share/ca-certificates/3516.pem /etc/ssl/certs/3516.pem"
	I0310 21:14:54.407107   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3516.pem
	I0310 21:14:54.569537   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 19:10 /usr/share/ca-certificates/3516.pem
	I0310 21:14:54.578544   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3516.pem
	I0310 21:14:54.884058   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3516.pem /etc/ssl/certs/51391683.0"
	I0310 21:14:55.136671   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7160.pem && ln -fs /usr/share/ca-certificates/7160.pem /etc/ssl/certs/7160.pem"
	I0310 21:14:55.368519   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7160.pem
	I0310 21:14:55.472906   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 12 04:51 /usr/share/ca-certificates/7160.pem
	I0310 21:14:55.484842   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7160.pem
	I0310 21:14:55.577120   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7160.pem /etc/ssl/certs/51391683.0"
	I0310 21:14:55.897875   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2512.pem && ln -fs /usr/share/ca-certificates/2512.pem /etc/ssl/certs/2512.pem"
	I0310 21:14:56.171569   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2512.pem
	I0310 21:14:56.268744   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:32 /usr/share/ca-certificates/2512.pem
	I0310 21:14:56.287647   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2512.pem
	I0310 21:14:56.392836   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2512.pem /etc/ssl/certs/51391683.0"
	I0310 21:14:56.624005   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8464.pem && ln -fs /usr/share/ca-certificates/8464.pem /etc/ssl/certs/8464.pem"
	I0310 21:14:56.841062   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8464.pem
	I0310 21:14:56.942666   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:32 /usr/share/ca-certificates/8464.pem
	I0310 21:14:56.949144   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8464.pem
	I0310 21:14:57.118875   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8464.pem /etc/ssl/certs/51391683.0"
	I0310 21:14:57.244198   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8748.pem && ln -fs /usr/share/ca-certificates/8748.pem /etc/ssl/certs/8748.pem"
	I0310 21:14:57.313203   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8748.pem
	I0310 21:14:57.366950   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 19:09 /usr/share/ca-certificates/8748.pem
	I0310 21:14:57.372633   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8748.pem
	I0310 21:14:57.434836   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8748.pem /etc/ssl/certs/51391683.0"
	I0310 21:14:57.572986   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6368.pem && ln -fs /usr/share/ca-certificates/6368.pem /etc/ssl/certs/6368.pem"
	I0310 21:14:57.655427   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6368.pem
	I0310 21:14:57.706486   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 19:11 /usr/share/ca-certificates/6368.pem
	I0310 21:14:57.721801   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6368.pem
	I0310 21:14:57.790260   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6368.pem /etc/ssl/certs/51391683.0"
	I0310 21:14:57.938162   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6856.pem && ln -fs /usr/share/ca-certificates/6856.pem /etc/ssl/certs/6856.pem"
	I0310 21:14:58.054647   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6856.pem
	I0310 21:14:58.104137   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 00:21 /usr/share/ca-certificates/6856.pem
	I0310 21:14:58.111662   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6856.pem
	I0310 21:14:58.189212   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6856.pem /etc/ssl/certs/51391683.0"
	I0310 21:14:58.263191   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1728.pem && ln -fs /usr/share/ca-certificates/1728.pem /etc/ssl/certs/1728.pem"
	I0310 21:14:58.376771   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1728.pem
	I0310 21:14:58.425770   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 20:57 /usr/share/ca-certificates/1728.pem
	I0310 21:14:58.437632   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1728.pem
	I0310 21:14:58.616826   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0"
	I0310 21:14:58.715770   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6496.pem && ln -fs /usr/share/ca-certificates/6496.pem /etc/ssl/certs/6496.pem"
	I0310 21:14:58.834428   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6496.pem
	I0310 21:14:58.886671   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 19:16 /usr/share/ca-certificates/6496.pem
	I0310 21:14:58.899643   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6496.pem
	I0310 21:14:58.953679   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6496.pem /etc/ssl/certs/51391683.0"
	I0310 21:14:59.059831   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5372.pem && ln -fs /usr/share/ca-certificates/5372.pem /etc/ssl/certs/5372.pem"
	I0310 21:14:59.137108   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5372.pem
	I0310 21:14:59.192036   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 23 00:40 /usr/share/ca-certificates/5372.pem
	I0310 21:14:59.209924   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5372.pem
	I0310 21:14:59.320419   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5372.pem /etc/ssl/certs/51391683.0"
	I0310 21:14:59.440461   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5736.pem && ln -fs /usr/share/ca-certificates/5736.pem /etc/ssl/certs/5736.pem"
	I0310 21:14:59.513814   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5736.pem
	I0310 21:14:59.560093   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 25 23:18 /usr/share/ca-certificates/5736.pem
	I0310 21:14:59.570575   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5736.pem
	I0310 21:14:59.623821   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5736.pem /etc/ssl/certs/51391683.0"
	I0310 21:14:59.712850   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10992.pem && ln -fs /usr/share/ca-certificates/10992.pem /etc/ssl/certs/10992.pem"
	I0310 21:14:59.781824   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/10992.pem
	I0310 21:14:59.823502   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 21:44 /usr/share/ca-certificates/10992.pem
	I0310 21:14:59.851978   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10992.pem
	I0310 21:14:59.975766   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10992.pem /etc/ssl/certs/51391683.0"
	I0310 21:15:00.185448   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/352.pem && ln -fs /usr/share/ca-certificates/352.pem /etc/ssl/certs/352.pem"
	I0310 21:15:00.328974   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/352.pem
	I0310 21:15:00.393268   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 12 14:51 /usr/share/ca-certificates/352.pem
	I0310 21:15:00.402436   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/352.pem
	I0310 21:15:00.515467   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/352.pem /etc/ssl/certs/51391683.0"
	I0310 21:15:00.720715   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5396.pem && ln -fs /usr/share/ca-certificates/5396.pem /etc/ssl/certs/5396.pem"
	I0310 21:15:00.983291   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5396.pem
	I0310 21:15:01.092070   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  8 23:38 /usr/share/ca-certificates/5396.pem
	I0310 21:15:01.103938   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5396.pem
	I0310 21:15:01.263896   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5396.pem /etc/ssl/certs/51391683.0"
	I0310 21:15:01.380016   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3056.pem && ln -fs /usr/share/ca-certificates/3056.pem /etc/ssl/certs/3056.pem"
	I0310 21:15:01.662667   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3056.pem
	I0310 21:15:01.703100   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:11 /usr/share/ca-certificates/3056.pem
	I0310 21:15:01.720489   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3056.pem
	I0310 21:15:01.836920   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3056.pem /etc/ssl/certs/51391683.0"
	I0310 21:15:01.978976   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5172.pem && ln -fs /usr/share/ca-certificates/5172.pem /etc/ssl/certs/5172.pem"
	I0310 21:15:02.169086   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5172.pem
	I0310 21:15:02.244306   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 26 21:25 /usr/share/ca-certificates/5172.pem
	I0310 21:15:02.254536   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5172.pem
	I0310 21:15:02.335163   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5172.pem /etc/ssl/certs/51391683.0"
	I0310 21:15:02.516818   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4588.pem && ln -fs /usr/share/ca-certificates/4588.pem /etc/ssl/certs/4588.pem"
	I0310 21:15:02.740532   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4588.pem
	I0310 21:15:02.806576   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  3 21:41 /usr/share/ca-certificates/4588.pem
	I0310 21:15:02.832314   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4588.pem
	I0310 21:15:02.962644   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4588.pem /etc/ssl/certs/51391683.0"
	I0310 21:15:03.213543   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6692.pem && ln -fs /usr/share/ca-certificates/6692.pem /etc/ssl/certs/6692.pem"
	I0310 21:15:03.343122   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6692.pem
	I0310 21:15:03.405176   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 20:42 /usr/share/ca-certificates/6692.pem
	I0310 21:15:03.418818   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6692.pem
	I0310 21:15:03.542282   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6692.pem /etc/ssl/certs/51391683.0"
	I0310 21:15:03.718148   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1156.pem && ln -fs /usr/share/ca-certificates/1156.pem /etc/ssl/certs/1156.pem"
	I0310 21:15:03.897455   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1156.pem
	I0310 21:15:03.970787   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 00:26 /usr/share/ca-certificates/1156.pem
	I0310 21:15:03.983703   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1156.pem
	I0310 21:15:04.066741   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1156.pem /etc/ssl/certs/51391683.0"
	I0310 21:15:04.252461   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1476.pem && ln -fs /usr/share/ca-certificates/1476.pem /etc/ssl/certs/1476.pem"
	I0310 21:15:04.370896   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1476.pem
	I0310 21:15:04.419182   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:50 /usr/share/ca-certificates/1476.pem
	I0310 21:15:04.428315   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1476.pem
	I0310 21:15:04.476459   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1476.pem /etc/ssl/certs/51391683.0"
	I0310 21:15:04.590443   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9088.pem && ln -fs /usr/share/ca-certificates/9088.pem /etc/ssl/certs/9088.pem"
	I0310 21:15:04.711283   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9088.pem
	I0310 21:15:04.768589   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 00:22 /usr/share/ca-certificates/9088.pem
	I0310 21:15:04.778887   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9088.pem
	I0310 21:15:04.843812   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9088.pem /etc/ssl/certs/51391683.0"
	I0310 21:15:04.962085   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4052.pem && ln -fs /usr/share/ca-certificates/4052.pem /etc/ssl/certs/4052.pem"
	I0310 21:15:05.108172   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4052.pem
	I0310 21:15:05.151516   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 18:40 /usr/share/ca-certificates/4052.pem
	I0310 21:15:05.165642   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4052.pem
	I0310 21:15:05.259601   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4052.pem /etc/ssl/certs/51391683.0"
	I0310 21:15:05.357646   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12056.pem && ln -fs /usr/share/ca-certificates/12056.pem /etc/ssl/certs/12056.pem"
	I0310 21:15:05.532844   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/12056.pem
	I0310 21:15:05.586807   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  6 07:21 /usr/share/ca-certificates/12056.pem
	I0310 21:15:05.599176   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12056.pem
	I0310 21:15:05.672972   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12056.pem /etc/ssl/certs/51391683.0"
	I0310 21:15:05.757884   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5040.pem && ln -fs /usr/share/ca-certificates/5040.pem /etc/ssl/certs/5040.pem"
	I0310 21:15:05.853140   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5040.pem
	I0310 21:15:05.894233   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 08:36 /usr/share/ca-certificates/5040.pem
	I0310 21:15:05.904366   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5040.pem
	I0310 21:15:06.072308   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5040.pem /etc/ssl/certs/51391683.0"
	I0310 21:15:06.139834   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/800.pem && ln -fs /usr/share/ca-certificates/800.pem /etc/ssl/certs/800.pem"
	I0310 21:15:06.212457   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/800.pem
	I0310 21:15:06.281733   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 24 01:48 /usr/share/ca-certificates/800.pem
	I0310 21:15:06.300681   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/800.pem
	I0310 21:15:06.368052   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/800.pem /etc/ssl/certs/51391683.0"
	I0310 21:15:06.439828   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9520.pem && ln -fs /usr/share/ca-certificates/9520.pem /etc/ssl/certs/9520.pem"
	I0310 21:15:06.538605   18444 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9520.pem
	I0310 21:15:06.564576   18444 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 14:54 /usr/share/ca-certificates/9520.pem
	I0310 21:15:06.571485   18444 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9520.pem
	I0310 21:15:06.618461   18444 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9520.pem /etc/ssl/certs/51391683.0"
	I0310 21:15:06.711357   18444 kubeadm.go:385] StartCluster: {Name:embed-certs-20210310205017-6496 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:embed-certs-20210310205017-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.97 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[dashboard:true] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 21:15:06.719861   18444 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0310 21:15:07.467654   18444 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0310 21:15:07.540086   18444 kubeadm.go:396] found existing configuration files, will attempt cluster restart
	I0310 21:15:07.540086   18444 kubeadm.go:594] restartCluster start
	I0310 21:15:07.549651   18444 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0310 21:15:07.678179   18444 kubeadm.go:125] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0310 21:15:07.687092   18444 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20210310205017-6496
	I0310 21:15:08.302335   18444 kubeconfig.go:117] verify returned: extract IP: "embed-certs-20210310205017-6496" does not appear in C:\Users\jenkins/.kube/config
	I0310 21:15:08.303480   18444 kubeconfig.go:128] "embed-certs-20210310205017-6496" context is missing from C:\Users\jenkins/.kube/config - will repair!
	I0310 21:15:08.305419   18444 lock.go:36] WriteFile acquiring C:\Users\jenkins/.kube/config: {Name:mk815881434fcb75cb1a8bc7f2c24cf067660bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:15:08.349094   18444 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0310 21:15:08.403289   18444 api_server.go:146] Checking apiserver status ...
	I0310 21:15:08.413704   18444 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0310 21:15:08.635712   18444 api_server.go:150] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0310 21:15:08.635712   18444 kubeadm.go:573] needs reconfigure: apiserver in state Stopped
	I0310 21:15:08.635712   18444 kubeadm.go:1042] stopping kube-system containers ...
	I0310 21:15:08.642852   18444 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0310 21:15:09.443248   18444 docker.go:261] Stopping containers: [765eeaf3ce81 13e03f4b1775 6402c6e4e6d4 4913edb02239 233e14c5554f 996876ed91c1 aae206460c76 78c1a80b774c 55e5c1ff0487 4aeafe69b026 efd3086c1be7 6579ac6125a2 2f3e9943b267 208e864728a3 62844ce92fdb]
	I0310 21:15:09.451211   18444 ssh_runner.go:149] Run: docker stop 765eeaf3ce81 13e03f4b1775 6402c6e4e6d4 4913edb02239 233e14c5554f 996876ed91c1 aae206460c76 78c1a80b774c 55e5c1ff0487 4aeafe69b026 efd3086c1be7 6579ac6125a2 2f3e9943b267 208e864728a3 62844ce92fdb
	I0310 21:15:10.205925   18444 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0310 21:15:10.363742   18444 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0310 21:15:10.474757   18444 kubeadm.go:153] found existing configuration files:
	-rw------- 1 root root 5611 Mar 10 20:56 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5629 Mar 10 20:57 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Mar 10 21:00 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5581 Mar 10 20:57 /etc/kubernetes/scheduler.conf
	
	I0310 21:15:10.484743   18444 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0310 21:15:10.580321   18444 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0310 21:15:10.651343   18444 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0310 21:15:10.743280   18444 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0310 21:15:10.755331   18444 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0310 21:15:10.831756   18444 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0310 21:15:10.899262   18444 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0310 21:15:10.909871   18444 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0310 21:15:10.979102   18444 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0310 21:15:11.167879   18444 kubeadm.go:670] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0310 21:15:11.167879   18444 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0310 21:15:16.107272   18444 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml": (4.9393999s)
	I0310 21:15:16.107479   18444 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0310 21:15:25.388458   18444 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (9.2809918s)
	I0310 21:15:25.389697   18444 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0310 21:15:30.220980   18444 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml": (4.8312897s)
	I0310 21:15:30.220980   18444 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0310 21:15:38.218098   18444 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml": (7.9971298s)
	I0310 21:15:38.218619   18444 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0310 21:15:45.657442   18444 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml": (7.4388332s)
	I0310 21:15:45.657871   18444 kubeadm.go:687] waiting for restarted kubelet to initialise ...
	I0310 21:15:45.668712   18444 retry.go:31] will retry after 276.165072ms: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:15:45.952337   18444 retry.go:31] will retry after 540.190908ms: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:15:46.503973   18444 retry.go:31] will retry after 655.06503ms: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:15:47.168428   18444 retry.go:31] will retry after 791.196345ms: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:15:47.979135   18444 retry.go:31] will retry after 1.170244332s: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:15:49.162930   18444 retry.go:31] will retry after 2.253109428s: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:15:51.429346   18444 retry.go:31] will retry after 1.610739793s: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:15:53.048800   18444 retry.go:31] will retry after 2.804311738s: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:15:55.862392   18444 retry.go:31] will retry after 3.824918958s: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:15:59.696014   18444 retry.go:31] will retry after 7.69743562s: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:16:07.423737   18444 retry.go:31] will retry after 14.635568968s: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:16:22.075085   18444 retry.go:31] will retry after 28.406662371s: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:16:50.494137   18444 kubeadm.go:704] kubelet initialised
	I0310 21:16:50.494473   18444 kubeadm.go:705] duration metric: took 1m4.8366926s waiting for restarted kubelet to initialise ...
	I0310 21:16:50.494680   18444 pod_ready.go:36] extra waiting for kube-system core pods [kube-dns etcd kube-apiserver kube-controller-manager kube-proxy kube-scheduler] to be Ready ...
	I0310 21:16:50.494991   18444 pod_ready.go:59] waiting 4m0s for pod with "kube-dns" label in "kube-system" namespace to be Ready ...
	I0310 21:16:50.500657   18444 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:16:51.011622   18444 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:16:51.511319   18444 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:16:52.015706   18444 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:16:52.508191   18444 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:16:53.012452   18444 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:17:03.508657   18444 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	I0310 21:17:14.006877   18444 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	I0310 21:17:24.505332   18444 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	I0310 21:17:35.004929   18444 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	I0310 21:17:45.510786   18444 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	I0310 21:17:56.002851   18444 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	I0310 21:18:06.503629   18444 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	I0310 21:18:17.002856   18444 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	I0310 21:18:27.507153   18444 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	I0310 21:18:37.510872   18444 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	I0310 21:18:48.008606   18444 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	I0310 21:18:58.504511   18444 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	I0310 21:19:09.004390   18444 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	I0310 21:19:19.503369   18444 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	I0310 21:19:34.628575   18444 pod_ready.go:97] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:07:05 +0000 GMT Reason: Message:}
	I0310 21:19:34.628575   18444 pod_ready.go:62] duration metric: took 2m44.1338103s to run WaitForPodReadyByLabel for pod with "kube-dns" label in "kube-system" namespace ...
	I0310 21:19:34.628575   18444 pod_ready.go:59] waiting 4m0s for pod with "etcd" label in "kube-system" namespace to be Ready ...
	I0310 21:19:34.831294   18444 pod_ready.go:97] pod "etcd-embed-certs-20210310205017-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:32 +0000 GMT Reason: Message:}
	I0310 21:19:34.831697   18444 pod_ready.go:62] duration metric: took 203.1218ms to run WaitForPodReadyByLabel for pod with "etcd" label in "kube-system" namespace ...
	I0310 21:19:34.831697   18444 pod_ready.go:59] waiting 4m0s for pod with "kube-apiserver" label in "kube-system" namespace to be Ready ...
	I0310 21:19:35.216363   18444 pod_ready.go:97] pod "kube-apiserver-embed-certs-20210310205017-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:01:39 +0000 GMT Reason: Message:}
	I0310 21:19:35.216363   18444 pod_ready.go:62] duration metric: took 384.6666ms to run WaitForPodReadyByLabel for pod with "kube-apiserver" label in "kube-system" namespace ...
	I0310 21:19:35.216363   18444 pod_ready.go:59] waiting 4m0s for pod with "kube-controller-manager" label in "kube-system" namespace to be Ready ...
	I0310 21:19:36.282262   18444 pod_ready.go:97] pod "kube-controller-manager-embed-certs-20210310205017-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:21 +0000 GMT Reason: Message:}
	I0310 21:19:36.282262   18444 pod_ready.go:62] duration metric: took 1.0659003s to run WaitForPodReadyByLabel for pod with "kube-controller-manager" label in "kube-system" namespace ...
	I0310 21:19:36.282262   18444 pod_ready.go:59] waiting 4m0s for pod with "kube-proxy" label in "kube-system" namespace to be Ready ...
	I0310 21:19:36.934987   18444 pod_ready.go:97] pod "kube-proxy-p6jnj" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:05:51 +0000 GMT Reason: Message:}
	I0310 21:19:36.935269   18444 pod_ready.go:62] duration metric: took 653.0075ms to run WaitForPodReadyByLabel for pod with "kube-proxy" label in "kube-system" namespace ...
	I0310 21:19:36.935269   18444 pod_ready.go:59] waiting 4m0s for pod with "kube-scheduler" label in "kube-system" namespace to be Ready ...
	I0310 21:19:37.212454   18444 pod_ready.go:97] pod "kube-scheduler-embed-certs-20210310205017-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:01:35 +0000 GMT Reason: Message:}
	I0310 21:19:37.212454   18444 pod_ready.go:62] duration metric: took 277.1856ms to run WaitForPodReadyByLabel for pod with "kube-scheduler" label in "kube-system" namespace ...
	I0310 21:19:37.212454   18444 pod_ready.go:39] duration metric: took 2m46.718004s for extra waiting for kube-system core pods to be Ready ...
	I0310 21:19:37.212454   18444 api_server.go:48] waiting for apiserver process to appear ...
	I0310 21:19:37.223334   18444 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 21:19:39.773679   18444 ssh_runner.go:189] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.5503486s)
	I0310 21:19:39.773679   18444 api_server.go:68] duration metric: took 2.5612286s to wait for apiserver process to appear ...
	I0310 21:19:39.773679   18444 api_server.go:84] waiting for apiserver healthz status ...
	I0310 21:19:39.773679   18444 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55180/healthz ...
	I0310 21:19:40.429243   18444 api_server.go:241] https://127.0.0.1:55180/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0310 21:19:40.430008   18444 api_server.go:99] status: https://127.0.0.1:55180/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0310 21:19:40.930915   18444 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55180/healthz ...
	I0310 21:19:41.627200   18444 api_server.go:241] https://127.0.0.1:55180/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0310 21:19:41.627321   18444 api_server.go:99] status: https://127.0.0.1:55180/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0310 21:19:41.931040   18444 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55180/healthz ...
	I0310 21:19:42.022519   18444 api_server.go:241] https://127.0.0.1:55180/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0310 21:19:42.022519   18444 api_server.go:99] status: https://127.0.0.1:55180/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0310 21:19:42.430808   18444 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55180/healthz ...
	I0310 21:19:42.946260   18444 api_server.go:241] https://127.0.0.1:55180/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0310 21:19:42.947201   18444 api_server.go:99] status: https://127.0.0.1:55180/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0310 21:19:43.430619   18444 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55180/healthz ...
	I0310 21:19:44.610312   18444 api_server.go:241] https://127.0.0.1:55180/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0310 21:19:44.610312   18444 api_server.go:99] status: https://127.0.0.1:55180/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0310 21:19:44.931360   18444 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55180/healthz ...
	I0310 21:19:45.263335   18444 api_server.go:241] https://127.0.0.1:55180/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0310 21:19:45.263668   18444 api_server.go:99] status: https://127.0.0.1:55180/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0310 21:19:45.431546   18444 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55180/healthz ...
	I0310 21:19:46.048370   18444 api_server.go:241] https://127.0.0.1:55180/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0310 21:19:46.048370   18444 api_server.go:99] status: https://127.0.0.1:55180/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0310 21:19:46.430636   18444 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55180/healthz ...
	I0310 21:19:47.461568   18444 api_server.go:241] https://127.0.0.1:55180/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0310 21:19:47.461952   18444 api_server.go:99] status: https://127.0.0.1:55180/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0310 21:19:47.931190   18444 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55180/healthz ...
	I0310 21:19:48.606053   18444 api_server.go:241] https://127.0.0.1:55180/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0310 21:19:48.606267   18444 api_server.go:99] status: https://127.0.0.1:55180/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0310 21:19:48.930898   18444 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55180/healthz ...
	I0310 21:19:49.854335   18444 api_server.go:241] https://127.0.0.1:55180/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0310 21:19:49.854335   18444 api_server.go:99] status: https://127.0.0.1:55180/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0310 21:19:49.931570   18444 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55180/healthz ...
	I0310 21:19:50.490410   18444 api_server.go:241] https://127.0.0.1:55180/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0310 21:19:50.490410   18444 api_server.go:99] status: https://127.0.0.1:55180/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0310 21:19:50.931009   18444 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55180/healthz ...
	I0310 21:19:51.332886   18444 api_server.go:241] https://127.0.0.1:55180/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0310 21:19:51.332886   18444 api_server.go:99] status: https://127.0.0.1:55180/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0310 21:19:51.430742   18444 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55180/healthz ...
	I0310 21:19:51.954994   18444 api_server.go:241] https://127.0.0.1:55180/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0310 21:19:51.955684   18444 api_server.go:99] status: https://127.0.0.1:55180/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0310 21:19:52.441912   18444 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55180/healthz ...
	I0310 21:19:53.025872   18444 api_server.go:241] https://127.0.0.1:55180/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0310 21:19:53.026908   18444 api_server.go:99] status: https://127.0.0.1:55180/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0310 21:19:53.431176   18444 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55180/healthz ...
	I0310 21:19:56.691012   18444 api_server.go:241] https://127.0.0.1:55180/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0310 21:19:56.691012   18444 api_server.go:99] status: https://127.0.0.1:55180/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0310 21:19:56.930703   18444 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55180/healthz ...
	I0310 21:19:57.986508   18444 api_server.go:241] https://127.0.0.1:55180/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0310 21:19:57.986508   18444 api_server.go:99] status: https://127.0.0.1:55180/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0310 21:19:58.432704   18444 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55180/healthz ...
	I0310 21:19:59.444802   18444 api_server.go:241] https://127.0.0.1:55180/healthz returned 200:
	ok
	I0310 21:19:59.542663   18444 api_server.go:137] control plane version: v1.20.2
	I0310 21:19:59.542663   18444 api_server.go:127] duration metric: took 19.7690093s to wait for apiserver health ...
	I0310 21:19:59.542911   18444 cni.go:74] Creating CNI manager for ""
	I0310 21:19:59.542911   18444 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	I0310 21:19:59.542911   18444 system_pods.go:41] waiting for kube-system pods to appear ...
	I0310 21:19:59.994482   18444 system_pods.go:57] 7 kube-system pods found
	I0310 21:19:59.994482   18444 system_pods.go:59] "coredns-74ff55c5b-4w6mn" [0b339996-09da-4e8b-82cb-967e22a2b12a] Running
	I0310 21:19:59.994482   18444 system_pods.go:59] "etcd-embed-certs-20210310205017-6496" [f5043b9b-833a-4260-9106-ceecd7868ac4] Running
	I0310 21:19:59.994482   18444 system_pods.go:59] "kube-apiserver-embed-certs-20210310205017-6496" [2caeba21-12bc-4e46-9383-776709339a99] Running
	I0310 21:19:59.994482   18444 system_pods.go:59] "kube-controller-manager-embed-certs-20210310205017-6496" [f21834cd-7a9e-4aa5-b349-41acc025428d] Running
	I0310 21:19:59.994482   18444 system_pods.go:59] "kube-proxy-p6jnj" [b4673698-b2df-494d-8de6-1008fa8348af] Running
	I0310 21:19:59.994482   18444 system_pods.go:59] "kube-scheduler-embed-certs-20210310205017-6496" [fc2f78fc-009b-4a87-accf-3e42164fb38e] Running
	I0310 21:19:59.994482   18444 system_pods.go:59] "storage-provisioner" [2659761d-6d3f-43ea-b1d9-04ec50811e6f] Running
	I0310 21:19:59.994482   18444 system_pods.go:72] duration metric: took 451.5717ms to wait for pod list to return data ...
	I0310 21:19:59.994482   18444 node_conditions.go:101] verifying NodePressure condition ...
	I0310 21:20:00.241941   18444 node_conditions.go:121] node storage ephemeral capacity is 65792556Ki
	I0310 21:20:00.242235   18444 node_conditions.go:122] node cpu capacity is 4
	I0310 21:20:00.242427   18444 node_conditions.go:104] duration metric: took 247.9447ms to run NodePressure ...
	I0310 21:20:00.242427   18444 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
** /stderr **
start_stop_delete_test.go:199: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p embed-certs-20210310205017-6496 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.20.2": exit status 1
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:226: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: docker inspect <======
helpers_test.go:227: (dbg) Run:  docker inspect embed-certs-20210310205017-6496
helpers_test.go:231: (dbg) docker inspect embed-certs-20210310205017-6496:
-- stdout --
	[
	    {
	        "Id": "34279bb86a24cccbd741b3dff476151db424d0a279c3695780536f691f5042eb",
	        "Created": "2021-03-10T20:50:38.3818436Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 296947,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-03-10T21:12:54.665856Z",
	            "FinishedAt": "2021-03-10T21:12:34.9239518Z"
	        },
	        "Image": "sha256:a776c544501ab7f8d55c0f9d8df39bc284df5e744ef1ab4fa59bbd753c98d5f6",
	        "ResolvConfPath": "/var/lib/docker/containers/34279bb86a24cccbd741b3dff476151db424d0a279c3695780536f691f5042eb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/34279bb86a24cccbd741b3dff476151db424d0a279c3695780536f691f5042eb/hostname",
	        "HostsPath": "/var/lib/docker/containers/34279bb86a24cccbd741b3dff476151db424d0a279c3695780536f691f5042eb/hosts",
	        "LogPath": "/var/lib/docker/containers/34279bb86a24cccbd741b3dff476151db424d0a279c3695780536f691f5042eb/34279bb86a24cccbd741b3dff476151db424d0a279c3695780536f691f5042eb-json.log",
	        "Name": "/embed-certs-20210310205017-6496",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20210310205017-6496:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20210310205017-6496",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/dcfd102322cab94eedae4c6a78b3d5341d2b0fef2ffb51299de38c2755ea8f34-init/diff:/var/lib/docker/overlay2/e5312d15a8a915e93a04215cc746ab0f3ee777399eec34d39c62e1159ded6475/diff:/var/lib/docker/overlay2/7a0a882052451678339ecb9d7440ec6d5af2189761df37cbde268f01c77df460/diff:/var/lib/docker/overlay2/746bde091f748e661ba6c95c88447941058b3ed556ae8093cbbb4e946b8cc8fc/diff:/var/lib/docker/overlay2/fc816087ea38e2594891ac87d2dada91401ed5005ffde006568049c9be7a9cb3/diff:/var/lib/docker/overlay2/c938fe5d14accfe7fb1cfd038f5f0c36e7c4e36df3f270be6928a759dfc0318c/diff:/var/lib/docker/overlay2/79f1d61f3af3e525b3f9a25c7bdaddec8275b7d0abab23c045e084119ae755dc/diff:/var/lib/docker/overlay2/90cf1530750ebebdcb04a136dc1eff0ae9c5f7d0f826d2942e3bc67cc0d42206/diff:/var/lib/docker/overlay2/c4edb0a62bb52174a4a17530aa58139aaca1b2f67bef36b9ac59af93c93e2c94/diff:/var/lib/docker/overlay2/684c18bb05910b09352e8708238b2fa6512de91e323e59bd78eacb12954baeb1/diff:/var/lib/docker/overlay2/2e3b944d36fea0e106f5f8885b981d040914c224a656b6480e4ccf00c624527c/diff:/var/lib/docker/overlay2/87489103e2d5dffa2fc32cae802418fe57adcbe87d86e2828e51be8620ef72d6/diff:/var/lib/docker/overlay2/1ceb557df677600f55a42e739f85c2aaa6ce2a1311fb93d5cc655d3e5c25ad62/diff:/var/lib/docker/overlay2/a49e38adf04466e822fa1b2fef7a5de41bb43b706f64d6d440f7e9f95f86634d/diff:/var/lib/docker/overlay2/9dc51530b3628075c33d53a96d8db831d206f65190ca62a4730d525c9d2433bc/diff:/var/lib/docker/overlay2/53f43d443cc953bcbb3b9fe305be45d40de2233dd6a1e94a6e0120c143df01b9/diff:/var/lib/docker/overlay2/5449e801ebacbe613538db4de658f2419f742a0327f7c0f5079ee525f64c2f45/diff:/var/lib/docker/overlay2/ea06a62429770eade2e9df8b04495201586fe6a867b2d3f827db06031f237171/diff:/var/lib/docker/overlay2/d6c5e6e0bb04112b08a45e5a42f364eaa6a1d4d0384b8833c6f268a32b51f9fd/diff:/var/lib/docker/overlay2/6ccb699e5aa7954f17536e35273361a64a06f22e3951b5adad1acd3645302d27/diff:/var/lib/docker/overlay2/004fc7885cd3ddf4f532a6047a393b87947996f738f36ee3e048130b82676264/diff:/var/lib/docker/overlay2/28ff0102771d8fb3c2957c482cc5dde1a6d041144f1be78d385fd4a305b89048/diff:/var/lib/docker/overlay2/7cc8b00a8717390d3cf217bc23ebf0a54386c4dda751f8efe67a113c5f724000/diff:/var/lib/docker/overlay2/4f20f5888dc26fc4b177fad4ffbb2c9e5094bbe36dffc15530b13ee98b5ee0de/diff:/var/lib/docker/overlay2/5070ba0e8a3df0c0cb4c34bb528c4e1aa4cf13d689f2bbbb0a4033d021c3be55/diff:/var/lib/docker/overlay2/85fb29f9d3603bf7f2eb144a096a9fa432e57ecf575b0c26cc8e06a79e791202/diff:/var/lib/docker/overlay2/27786674974d44dc21a18719c8c334da44a78f825923f5e86233f5acb0fb9a6b/diff:/var/lib/docker/overlay2/6161fd89f27732c46f547c0e38ff9f16ab0dded81cef66938df3935f21afad40/diff:/var/lib/docker/overlay2/222f85b8439a41d62b9939a4060303543930a89e81bc2fbb3f5211b3bcc73501/diff:/var/lib/docker/overlay2/0ab769dc143c949b9367c48721351c8571d87199847b23acedce0134a8cf4d0d/diff:/var/lib/docker/overlay2/a3d1b5f3c1ccf55415e08e306d69739f2d6a4346a4b20ff9273ac1417c146a72/diff:/var/lib/docker/overlay2/a6111408a4deb25c6259bdabf49b0b77a4ae38a1c9c76c9dd238ab00e4377edd/diff:/var/lib/docker/overlay2/beb22abaee6a1cc2ce6935aa31a8ea6aa14ba428d35b5c2423d1d9e4b5b1e88e/diff:/var/lib/docker/overlay2/0c228b1d0cd4cc1d3df10a7397af2a91846bfbad745d623822dc200ff7ddd4f7/diff:/var/lib/docker/overlay2/3c7f265116e8b14ac4c171583da8e68ad42c63e9faa735c10d5b01c2889c03d8/diff:/var/lib/docker/overlay2/85c9b77072d5575bcf4bc334d0e85cccec247e2ff21bc8a181ee7c1411481a92/diff:/var/lib/docker/overlay2/1b4797f63f1bf0acad8cd18715c737239cbce844810baf1378b65224c01957ff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dcfd102322cab94eedae4c6a78b3d5341d2b0fef2ffb51299de38c2755ea8f34/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dcfd102322cab94eedae4c6a78b3d5341d2b0fef2ffb51299de38c2755ea8f34/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dcfd102322cab94eedae4c6a78b3d5341d2b0fef2ffb51299de38c2755ea8f34/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20210310205017-6496",
	                "Source": "/var/lib/docker/volumes/embed-certs-20210310205017-6496/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20210310205017-6496",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20210310205017-6496",
	                "name.minikube.sigs.k8s.io": "embed-certs-20210310205017-6496",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "56e54ae56a4ac687f23f380dc95974511271c9bf3a52fe9c9680aed0f3a3ac03",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55183"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55182"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55179"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55181"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55180"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/56e54ae56a4a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20210310205017-6496": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.97"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "34279bb86a24",
	                        "embed-certs-20210310205017-6496"
	                    ],
	                    "NetworkID": "beda76989846335b2108542b8dbd47960c451d6da6c3d41d5b12bd1840f4b292",
	                    "EndpointID": "681ead3f449d33184d048f6afb893b97e4a5df4616cd6edd299047fd49befb8e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.97",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:61",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
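The inspect output above publishes the container's 8443/tcp (the apiserver port) on a loopback host port, 55180 here, which is exactly the address the earlier healthz check hit. A minimal sketch of extracting that mapping programmatically from `docker inspect` JSON; the trimmed sample below is an assumption mirroring the structure shown in the log, not the full output:

```python
import json

# Trimmed stand-in for `docker inspect <container>` output; only the
# NetworkSettings.Ports structure from the log above is reproduced.
inspect_json = json.loads("""
[{"NetworkSettings": {"Ports": {
    "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "55180"}]
}}}]
""")

def published_endpoint(inspect_data, container_port="8443/tcp"):
    """Return the host ip:port a container port is published on, or None."""
    bindings = inspect_data[0]["NetworkSettings"]["Ports"].get(container_port) or []
    if not bindings:
        return None
    binding = bindings[0]
    return f'{binding["HostIp"]}:{binding["HostPort"]}'

print(published_endpoint(inspect_json))  # → 127.0.0.1:55180
```

The same value can be read directly with `docker inspect --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' <container>`; parsing the JSON is just more convenient when several ports are needed at once.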
helpers_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20210310205017-6496 -n embed-certs-20210310205017-6496
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
helpers_test.go:235: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20210310205017-6496 -n embed-certs-20210310205017-6496: (38.8552842s)
helpers_test.go:240: <<< TestStartStop/group/embed-certs/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:241: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: minikube logs <======
helpers_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe -p embed-certs-20210310205017-6496 logs -n 25
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
helpers_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe -p embed-certs-20210310205017-6496 logs -n 25: (3m37.403147s)
helpers_test.go:248: TestStartStop/group/embed-certs/serial/SecondStart logs: 
-- stdout --
	* ==> Docker <==
	* -- Logs begin at Wed 2021-03-10 21:13:03 UTC, end at Wed 2021-03-10 21:22:19 UTC. --
	* Mar 10 21:13:10 embed-certs-20210310205017-6496 dockerd[218]: time="2021-03-10T21:13:10.213893000Z" level=info msg="Loading containers: start."
	* Mar 10 21:13:27 embed-certs-20210310205017-6496 dockerd[218]: time="2021-03-10T21:13:27.064918800Z" level=info msg="Removing stale sandbox 7e27126f8767a26737499e438b6f4b58ed5b43a16959e5d212ae6c2f784d24ba (13e03f4b17759b7140d0f257f69fdaaa96e6d755a2381d915f56a68481084f75)"
	* Mar 10 21:13:27 embed-certs-20210310205017-6496 dockerd[218]: time="2021-03-10T21:13:27.285567000Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 962934bf8776bde55cbcd1cdb10172ce6c0f640628c00d9fa9ced1ab61ca59e6 7d710580825164f2900866759ee2184103fdee583f2f4718881abdbf28a219a4], retrying...."
	* Mar 10 21:13:32 embed-certs-20210310205017-6496 dockerd[218]: time="2021-03-10T21:13:32.749641500Z" level=info msg="Removing stale sandbox a6fb2d7b89ba08e8164f38d6229c0d6f9d8fc0f195b5b66e75b5152037650981 (2f3e9943b2674b68ec3ac5d141c3cc8254413a7093384e2636607ffb925855d3)"
	* Mar 10 21:13:32 embed-certs-20210310205017-6496 dockerd[218]: time="2021-03-10T21:13:32.972600700Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 962934bf8776bde55cbcd1cdb10172ce6c0f640628c00d9fa9ced1ab61ca59e6 31d15ae29a3948b334ebdf8cb69033e52f14354f6cf1a4c66116c69284dc0bca], retrying...."
	* Mar 10 21:13:40 embed-certs-20210310205017-6496 dockerd[218]: time="2021-03-10T21:13:40.031176100Z" level=info msg="Removing stale sandbox b4c159f963771947b021203b68b3b01516196c84bf161111fbd7d7c385424b56 (62844ce92fdb297d491ac3f741510a72acdb7859306268d27a96764bef91ed0d)"
	* Mar 10 21:13:40 embed-certs-20210310205017-6496 dockerd[218]: time="2021-03-10T21:13:40.161310300Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 962934bf8776bde55cbcd1cdb10172ce6c0f640628c00d9fa9ced1ab61ca59e6 9f6822b6aa7766f5c719b4bab7903d6b6918515078682bc8c650f1c5e8a2e57e], retrying...."
	* Mar 10 21:13:44 embed-certs-20210310205017-6496 dockerd[218]: time="2021-03-10T21:13:44.225198900Z" level=info msg="Removing stale sandbox c04e2a8fc01446798f807c6f960c521d218d942400aab60178f3ab0221f0157b (6579ac6125a27110cb30756ab073d408b0da27b951ea66dc9cd772ef6021f6c8)"
	* Mar 10 21:13:44 embed-certs-20210310205017-6496 dockerd[218]: time="2021-03-10T21:13:44.239457100Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 962934bf8776bde55cbcd1cdb10172ce6c0f640628c00d9fa9ced1ab61ca59e6 39b3b96872f40b0a5fa6c16d0b2dfec468b8fbb79cd505969faa0acffe6514c9], retrying...."
	* Mar 10 21:13:47 embed-certs-20210310205017-6496 dockerd[218]: time="2021-03-10T21:13:47.516904000Z" level=info msg="Removing stale sandbox d95ff16ff6874ca390fc947c7ce440774cb856567c0165a40295ba441c9ebc30 (208e864728a39f5495fd6cf2f428797c85da0b594aaa5be4cf69bd17ca025be4)"
	* Mar 10 21:13:47 embed-certs-20210310205017-6496 dockerd[218]: time="2021-03-10T21:13:47.703525000Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 962934bf8776bde55cbcd1cdb10172ce6c0f640628c00d9fa9ced1ab61ca59e6 b3d550e2f7b10e7d0f0c052475496a8b970b07650ea17180fd5ba58038aec873], retrying...."
	* Mar 10 21:13:51 embed-certs-20210310205017-6496 dockerd[218]: time="2021-03-10T21:13:51.214566500Z" level=info msg="Removing stale sandbox f4bf3a481836ae3547d8a029aa3d67454b96db863159e0787147cbfce0516123 (233e14c5554ff268eb7ca0d7566127c65dab8f0579a758a320ac811ec576c8e1)"
	* Mar 10 21:13:53 embed-certs-20210310205017-6496 dockerd[218]: time="2021-03-10T21:13:51.326521900Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint eacfe3747a8429727fb782923bca8d8c1cf998f01fb79bd72d6ebed17d53d738 5e43e0d8d647c1f13b7478b574b35c52ef9852d3ed4e6c40fb91f681e2058925], retrying...."
	* Mar 10 21:13:55 embed-certs-20210310205017-6496 dockerd[218]: time="2021-03-10T21:13:55.288558500Z" level=info msg="Removing stale sandbox 1b2e934641428094219688ebcb6ab88ef35c189ae34391f51f0638d5ae1aedbf (a0cd475a9d04b30463b62eea9fb4b7b973538586631a6146d80295b25deb6803)"
	* Mar 10 21:13:55 embed-certs-20210310205017-6496 dockerd[218]: time="2021-03-10T21:13:55.965779900Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint eacfe3747a8429727fb782923bca8d8c1cf998f01fb79bd72d6ebed17d53d738 be6186f8d26452c7b940aca845c560cfd21d34b3c0fc7515dc61bd014693d961], retrying...."
	* Mar 10 21:13:59 embed-certs-20210310205017-6496 dockerd[218]: time="2021-03-10T21:13:59.083206700Z" level=info msg="Removing stale sandbox 4acb52c810c642aac665bb7570d712cd7df3ebb13d3f7e64ec788c16f0b7feac (996876ed91c140a9893652dce742e0862e082a88c5a0d696482a1acf58f5757f)"
	* Mar 10 21:13:59 embed-certs-20210310205017-6496 dockerd[218]: time="2021-03-10T21:13:59.220520700Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 962934bf8776bde55cbcd1cdb10172ce6c0f640628c00d9fa9ced1ab61ca59e6 36cd62b8b54e1ba5b97df59c7575250aef4cb0af630662f05fc9e070f306a28a], retrying...."
	* Mar 10 21:14:01 embed-certs-20210310205017-6496 dockerd[218]: time="2021-03-10T21:14:01.230416000Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	* Mar 10 21:14:02 embed-certs-20210310205017-6496 dockerd[218]: time="2021-03-10T21:14:02.807565200Z" level=info msg="Loading containers: done."
	* Mar 10 21:14:03 embed-certs-20210310205017-6496 dockerd[218]: time="2021-03-10T21:14:03.479629000Z" level=info msg="Docker daemon" commit=46229ca graphdriver(s)=overlay2 version=20.10.3
	* Mar 10 21:14:03 embed-certs-20210310205017-6496 dockerd[218]: time="2021-03-10T21:14:03.480149000Z" level=info msg="Daemon has completed initialization"
	* Mar 10 21:14:04 embed-certs-20210310205017-6496 systemd[1]: Started Docker Application Container Engine.
	* Mar 10 21:14:04 embed-certs-20210310205017-6496 dockerd[218]: time="2021-03-10T21:14:04.846401700Z" level=info msg="API listen on [::]:2376"
	* Mar 10 21:14:05 embed-certs-20210310205017-6496 dockerd[218]: time="2021-03-10T21:14:05.425584800Z" level=info msg="API listen on /var/run/docker.sock"
	* Mar 10 21:18:04 embed-certs-20210310205017-6496 dockerd[218]: time="2021-03-10T21:18:04.570660000Z" level=info msg="ignoring event" container=9d6d3c37e198e1cfefe51fafe1ab18af8e3bbd3ea00a53c646a2b6c2e72683fa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID
	* 27f2d5bb41043       a27166429d98e                                                                     3 minutes ago       Running             kube-controller-manager   5                   520cc42101463
	* a4f55d560a212       0369cf4303ffd                                                                     5 minutes ago       Running             etcd                      1                   bad2b5261e0a6
	* f4b5b19203e94       ed2c44fbdd78b                                                                     5 minutes ago       Running             kube-scheduler            1                   c35bb00feb329
	* f84c46dba1d9e       a8c2fdb8bf76e                                                                     5 minutes ago       Running             kube-apiserver            1                   682aad5680103
	* 6692b664f48a9       busybox@sha256:bda689514be526d9557ad442312e5d541757c453c50b8cf2ae68597c291385a1   11 minutes ago      Exited              busybox                   0                   a0cd475a9d04b
	* 765eeaf3ce811       85069258b98ac                                                                     15 minutes ago      Exited              storage-provisioner       0                   13e03f4b17759
	* 6402c6e4e6d47       bfe3a36ebd252                                                                     16 minutes ago      Exited              coredns                   0                   233e14c5554ff
	* 4913edb022394       43154ddb57a83                                                                     17 minutes ago      Exited              kube-proxy                0                   996876ed91c14
	* efd3086c1be70       0369cf4303ffd                                                                     24 minutes ago      Exited              etcd                      0                   2f3e9943b2674
	* 
	* ==> coredns [6402c6e4e6d4] <==
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	* CoreDNS-1.7.0
	* linux/amd64, go1.14.4, f59c03d
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20210310205017-6496
	* Roles:              control-plane,master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=embed-certs-20210310205017-6496
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=4d52d607da107e3e4541e40b81376520ee87d4c2
	*                     minikube.k8s.io/name=embed-certs-20210310205017-6496
	*                     minikube.k8s.io/updated_at=2021_03_10T21_00_55_0700
	*                     minikube.k8s.io/version=v1.18.1
	*                     node-role.kubernetes.io/control-plane=
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Wed, 10 Mar 2021 20:59:25 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  embed-certs-20210310205017-6496
	*   AcquireTime:     <unset>
	*   RenewTime:       Wed, 10 Mar 2021 21:23:28 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Wed, 10 Mar 2021 21:19:56 +0000   Wed, 10 Mar 2021 20:59:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Wed, 10 Mar 2021 21:19:56 +0000   Wed, 10 Mar 2021 20:59:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Wed, 10 Mar 2021 21:19:56 +0000   Wed, 10 Mar 2021 20:59:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Wed, 10 Mar 2021 21:19:56 +0000   Wed, 10 Mar 2021 21:01:43 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  192.168.49.97
	*   Hostname:    embed-certs-20210310205017-6496
	* Capacity:
	*   cpu:                4
	*   ephemeral-storage:  65792556Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             20481980Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                4
	*   ephemeral-storage:  65792556Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             20481980Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 84fb46bd39d2483a97ab4430ee4a5e3a
	*   System UUID:                788297b8-7aee-4d9f-9286-5da206103441
	*   Boot ID:                    1e43cb90-c73a-415b-9855-33dabbdc5a83
	*   Kernel Version:             4.19.121-linuxkit
	*   OS Image:                   Ubuntu 20.04.1 LTS
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://20.10.3
	*   Kubelet Version:            v1.20.2
	*   Kube-Proxy Version:         v1.20.2
	* PodCIDR:                      10.244.0.0/24
	* PodCIDRs:                     10.244.0.0/24
	* Non-terminated Pods:          (8 in total)
	*   Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	*   default                     busybox                                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	*   kube-system                 coredns-74ff55c5b-4w6mn                                    100m (2%)     0 (0%)      70Mi (0%)        170Mi (0%)     20m
	*   kube-system                 etcd-embed-certs-20210310205017-6496                       100m (2%)     0 (0%)      100Mi (0%)       0 (0%)         23m
	*   kube-system                 kube-apiserver-embed-certs-20210310205017-6496             250m (6%)     0 (0%)      0 (0%)           0 (0%)         23m
	*   kube-system                 kube-controller-manager-embed-certs-20210310205017-6496    200m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	*   kube-system                 kube-proxy-p6jnj                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	*   kube-system                 kube-scheduler-embed-certs-20210310205017-6496             100m (2%)     0 (0%)      0 (0%)           0 (0%)         23m
	*   kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests    Limits
	*   --------           --------    ------
	*   cpu                750m (18%)  0 (0%)
	*   memory             170Mi (0%)  170Mi (0%)
	*   ephemeral-storage  100Mi (0%)  0 (0%)
	*   hugepages-1Gi      0 (0%)      0 (0%)
	*   hugepages-2Mi      0 (0%)      0 (0%)
	* Events:
	*   Type    Reason                   Age                     From        Message
	*   ----    ------                   ----                    ----        -------
	*   Normal  Starting                 22m                     kubelet     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  22m                     kubelet     Node embed-certs-20210310205017-6496 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    22m                     kubelet     Node embed-certs-20210310205017-6496 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     22m                     kubelet     Node embed-certs-20210310205017-6496 status is now: NodeHasSufficientPID
	*   Normal  NodeNotReady             22m                     kubelet     Node embed-certs-20210310205017-6496 status is now: NodeNotReady
	*   Normal  NodeAllocatableEnforced  22m                     kubelet     Updated Node Allocatable limit across pods
	*   Normal  NodeReady                21m                     kubelet     Node embed-certs-20210310205017-6496 status is now: NodeReady
	*   Normal  Starting                 17m                     kube-proxy  Starting kube-proxy.
	*   Normal  NodeAllocatableEnforced  7m46s                   kubelet     Updated Node Allocatable limit across pods
	*   Normal  NodeHasSufficientPID     7m43s (x7 over 7m52s)   kubelet     Node embed-certs-20210310205017-6496 status is now: NodeHasSufficientPID
	*   Normal  NodeHasNoDiskPressure    7m41s (x8 over 7m52s)   kubelet     Node embed-certs-20210310205017-6496 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientMemory  6m49s (x26 over 7m52s)  kubelet     Node embed-certs-20210310205017-6496 status is now: NodeHasSufficientMemory
	* 
	* ==> dmesg <==
	* [  +0.000006]  __hrtimer_run_queues+0x117/0x1c4
	* [  +0.000004]  ? ktime_get_update_offsets_now+0x36/0x95
	* [  +0.000002]  hrtimer_interrupt+0x92/0x165
	* [  +0.000004]  hv_stimer0_isr+0x20/0x2d
	* [  +0.000008]  hv_stimer0_vector_handler+0x3b/0x57
	* [  +0.000010]  hv_stimer0_callback_vector+0xf/0x20
	* [  +0.000001]  </IRQ>
	* [  +0.000002] RIP: 0010:native_safe_halt+0x7/0x8
	* [  +0.000002] Code: 60 02 df f0 83 44 24 fc 00 48 8b 00 a8 08 74 0b 65 81 25 dd ce 6f 71 ff ff ff 7f c3 e8 ce e6 72 ff f4 c3 e8 c7 e6 72 ff fb f4 <c3> 0f 1f 44 00 00 53 e8 69 0e 82 ff 65 8b 35 83 64 6f 71 31 ff e8
	* [  +0.000001] RSP: 0018:ffffffff8f203eb0 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff12
	* [  +0.000002] RAX: ffffffff8e918b30 RBX: 0000000000000000 RCX: ffffffff8f253150
	* [  +0.000001] RDX: 000000000012167e RSI: 0000000000000000 RDI: 0000000000000001
	* [  +0.000001] RBP: 0000000000000000 R08: 00000066a1710248 R09: 0000006be2541d3e
	* [  +0.000001] R10: ffff9130ad802288 R11: 0000000000000000 R12: 0000000000000000
	* [  +0.000001] R13: ffffffff8f215780 R14: 00000000f6d76244 R15: 0000000000000000
	* [  +0.000002]  ? __sched_text_end+0x1/0x1
	* [  +0.000011]  default_idle+0x1b/0x2c
	* [  +0.000001]  do_idle+0xe5/0x216
	* [  +0.000003]  cpu_startup_entry+0x6f/0x71
	* [  +0.000003]  start_kernel+0x4f6/0x514
	* [  +0.000006]  secondary_startup_64+0xa4/0xb0
	* [  +0.000006] ---[ end trace 8aa9ce4b885e8e86 ]---
	* [ +25.977799] hrtimer: interrupt took 3356400 ns
	* [Mar10 19:08] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	* [Mar10 19:49] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	* 
	* ==> etcd [a4f55d560a21] <==
	* 2021-03-10 21:22:14.116223 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/kube-controller-manager-embed-certs-20210310205017-6496.166b17c15759e3e8\" " with result "range_response_count:1 size:899" took too long (206.3268ms) to execute
	* 2021-03-10 21:22:14.832193 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:22:24.675624 W | etcdserver: request "header:<ID:10490704452246983705 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:1196781dffca5418>" with result "size:41" took too long (146.2571ms) to execute
	* 2021-03-10 21:22:27.215218 W | etcdserver: read-only range request "key:\"/registry/csidrivers/\" range_end:\"/registry/csidrivers0\" count_only:true " with result "range_response_count:0 size:5" took too long (277.085ms) to execute
	* 2021-03-10 21:22:27.759471 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:22:37.038580 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:22:37.646015 W | etcdserver: read-only range request "key:\"/registry/endpointslices/default/kubernetes\" " with result "range_response_count:1 size:485" took too long (106.2612ms) to execute
	* 2021-03-10 21:22:39.046898 W | etcdserver: read-only range request "key:\"/registry/ranges/servicenodeports\" " with result "range_response_count:1 size:120" took too long (268.0021ms) to execute
	* 2021-03-10 21:22:39.204843 W | etcdserver: read-only range request "key:\"/registry/flowschemas/catch-all\" " with result "range_response_count:1 size:990" took too long (414.072ms) to execute
	* 2021-03-10 21:22:44.909540 W | etcdserver: read-only range request "key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" count_only:true " with result "range_response_count:0 size:5" took too long (184.8823ms) to execute
	* 2021-03-10 21:22:45.895008 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:421" took too long (168.3235ms) to execute
	* 2021-03-10 21:22:50.676707 W | etcdserver: request "header:<ID:10490704452246983803 > lease_revoke:<id:1196781dffca5448>" with result "size:29" took too long (113.9254ms) to execute
	* 2021-03-10 21:22:51.084162 W | etcdserver: read-only range request "key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" count_only:true " with result "range_response_count:0 size:7" took too long (145.9749ms) to execute
	* 2021-03-10 21:22:51.148815 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-embed-certs-20210310205017-6496\" " with result "range_response_count:1 size:5274" took too long (189.1399ms) to execute
	* 2021-03-10 21:22:52.776390 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:22:57.534064 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-embed-certs-20210310205017-6496\" " with result "range_response_count:1 size:6873" took too long (135.015ms) to execute
	* 2021-03-10 21:22:58.710100 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:22:59.599933 W | etcdserver: request "header:<ID:10490704452246983831 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-embed-certs-20210310205017-6496.166b1816c78d6540\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-embed-certs-20210310205017-6496.166b1816c78d6540\" value_size:826 lease:1267332415392208021 >> failure:<>>" with result "size:16" took too long (103.3495ms) to execute
	* 2021-03-10 21:23:08.572877 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:23:21.346746 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:23:37.916967 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:23:40.922252 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:23:45.000841 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:23:47.773565 W | etcdserver: read-only range request "key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true " with result "range_response_count:0 size:7" took too long (158.6965ms) to execute
	* 2021-03-10 21:23:47.773764 W | etcdserver: read-only range request "key:\"/registry/masterleases/192.168.49.97\" " with result "range_response_count:1 size:135" took too long (184.8969ms) to execute
	* 
	* ==> etcd [efd3086c1be7] <==
	* 2021-03-10 21:10:36.405242 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:10:46.565043 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:10:47.206142 W | etcdserver: read-only range request "key:\"/registry/statefulsets/\" range_end:\"/registry/statefulsets0\" count_only:true " with result "range_response_count:0 size:5" took too long (136.3769ms) to execute
	* 2021-03-10 21:10:56.701368 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:11:07.577739 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:11:07.793242 W | etcdserver: read-only range request "key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" " with result "range_response_count:1 size:2217" took too long (131.1046ms) to execute
	* 2021-03-10 21:11:16.420589 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:11:27.857788 W | etcdserver: request "header:<ID:10490704451955658173 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.49.97\" mod_revision:712 > success:<request_put:<key:\"/registry/masterleases/192.168.49.97\" value_size:68 lease:1267332415100882363 >> failure:<request_range:<key:\"/registry/masterleases/192.168.49.97\" > >>" with result "size:16" took too long (170.5214ms) to execute
	* 2021-03-10 21:11:27.894609 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:11:35.537503 W | etcdserver: request "header:<ID:10490704451955658213 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/default/busybox\" mod_revision:669 > success:<request_put:<key:\"/registry/pods/default/busybox\" value_size:2115 >> failure:<request_range:<key:\"/registry/pods/default/busybox\" > >>" with result "size:16" took too long (175.0758ms) to execute
	* 2021-03-10 21:11:37.419715 W | etcdserver: request "header:<ID:10490704451955658224 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:1196781dee6d0def>" with result "size:41" took too long (128.4123ms) to execute
	* 2021-03-10 21:11:39.162625 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:11:42.347172 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (112.2665ms) to execute
	* 2021-03-10 21:11:48.500834 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:11:48.571806 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (472.7113ms) to execute
	* 2021-03-10 21:11:48.601690 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:644" took too long (508.3331ms) to execute
	* 2021-03-10 21:11:48.815819 W | etcdserver: read-only range request "key:\"/registry/masterleases/192.168.49.97\" " with result "range_response_count:1 size:135" took too long (100.5352ms) to execute
	* 2021-03-10 21:11:57.244217 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:12:06.865089 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:12:18.088242 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:421" took too long (155.2058ms) to execute
	* 2021-03-10 21:12:22.458944 W | etcdserver: request "header:<ID:10490704451955658379 > lease_revoke:<id:1196781dee6d0e61>" with result "size:29" took too long (120.0733ms) to execute
	* 2021-03-10 21:12:23.535529 N | pkg/osutil: received terminated signal, shutting down...
	* WARNING: 2021/03/10 21:12:23 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* 2021-03-10 21:12:23.932235 I | etcdserver: skipped leadership transfer for single voting member cluster
	* WARNING: 2021/03/10 21:12:23 grpc: addrConn.createTransport failed to connect to {192.168.49.97:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 192.168.49.97:2379: connect: connection refused". Reconnecting...
	* 
	* ==> kernel <==
	*  21:23:56 up  2:24,  0 users,  load average: 123.68, 127.74, 137.64
	* Linux embed-certs-20210310205017-6496 4.19.121-linuxkit #1 SMP Tue Dec 1 17:50:32 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [f84c46dba1d9] <==
	* Trace[684199403]: [644.1907ms] [644.1907ms] END
	* I0310 21:23:04.005958       1 trace.go:205] Trace[834806857]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.20.2 (linux/amd64) kubernetes/faecb19,client:192.168.49.97 (10-Mar-2021 21:23:03.331) (total time: 674ms):
	* Trace[834806857]: ---"About to convert to expected version" 283ms (21:23:00.614)
	* Trace[834806857]: ---"Object stored in database" 384ms (21:23:00.999)
	* Trace[834806857]: [674.2729ms] [674.2729ms] END
	* I0310 21:23:07.600736       1 trace.go:205] Trace[2026083493]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.20.2 (linux/amd64) kubernetes/faecb19,client:192.168.49.97 (10-Mar-2021 21:23:06.829) (total time: 770ms):
	* Trace[2026083493]: ---"About to convert to expected version" 700ms (21:23:00.530)
	* Trace[2026083493]: [770.9277ms] [770.9277ms] END
	* I0310 21:23:13.522161       1 client.go:360] parsed scheme: "passthrough"
	* I0310 21:23:13.533436       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 21:23:13.533599       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* I0310 21:23:36.514985       1 trace.go:205] Trace[720829046]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/embed-certs-20210310205017-6496,user-agent:kubelet/v1.20.2 (linux/amd64) kubernetes/faecb19,client:192.168.49.97 (10-Mar-2021 21:23:35.837) (total time: 677ms):
	* Trace[720829046]: ---"About to convert to expected version" 226ms (21:23:00.064)
	* Trace[720829046]: ---"Object stored in database" 447ms (21:23:00.512)
	* Trace[720829046]: [677.4656ms] [677.4656ms] END
	* I0310 21:23:41.020429       1 trace.go:205] Trace[745882257]: "Patch" url:/api/v1/namespaces/kube-system/events/kube-controller-manager-embed-certs-20210310205017-6496.166b1816b4e59e6c,user-agent:kubelet/v1.20.2 (linux/amd64) kubernetes/faecb19,client:192.168.49.97 (10-Mar-2021 21:23:39.558) (total time: 1461ms):
	* Trace[745882257]: ---"Recorded the audit event" 1396ms (21:23:00.955)
	* Trace[745882257]: [1.4615268s] [1.4615268s] END
	* I0310 21:23:48.178839       1 trace.go:205] Trace[1595143437]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (10-Mar-2021 21:23:47.553) (total time: 624ms):
	* Trace[1595143437]: ---"initial value restored" 355ms (21:23:00.909)
	* Trace[1595143437]: ---"Transaction committed" 183ms (21:23:00.178)
	* Trace[1595143437]: [624.5916ms] [624.5916ms] END
	* I0310 21:23:51.998406       1 client.go:360] parsed scheme: "passthrough"
	* I0310 21:23:51.998678       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 21:23:51.998715       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* 
	* ==> kube-controller-manager [27f2d5bb4104] <==
	* I0310 21:21:12.890849       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	* I0310 21:21:12.987369       1 shared_informer.go:247] Caches are synced for attach detach 
	* I0310 21:21:13.134451       1 shared_informer.go:247] Caches are synced for disruption 
	* I0310 21:21:13.134498       1 disruption.go:339] Sending events to api server.
	* I0310 21:21:13.136876       1 shared_informer.go:247] Caches are synced for PV protection 
	* I0310 21:21:13.136945       1 shared_informer.go:247] Caches are synced for expand 
	* I0310 21:21:13.179381       1 shared_informer.go:247] Caches are synced for deployment 
	* I0310 21:21:13.275481       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	* I0310 21:21:13.321378       1 shared_informer.go:247] Caches are synced for persistent volume 
	* I0310 21:21:13.328433       1 shared_informer.go:247] Caches are synced for endpoint 
	* I0310 21:21:13.510620       1 shared_informer.go:247] Caches are synced for stateful set 
	* I0310 21:21:13.510709       1 shared_informer.go:247] Caches are synced for crt configmap 
	* I0310 21:21:13.531894       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	* I0310 21:21:13.636833       1 shared_informer.go:247] Caches are synced for daemon sets 
	* I0310 21:21:13.965946       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	* I0310 21:21:13.966139       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	* I0310 21:21:13.966423       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	* I0310 21:21:13.966634       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	* I0310 21:21:13.994929       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	* I0310 21:21:14.617055       1 shared_informer.go:247] Caches are synced for resource quota 
	* I0310 21:21:14.624027       1 shared_informer.go:247] Caches are synced for resource quota 
	* I0310 21:21:31.336020       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	* I0310 21:21:32.957097       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I0310 21:21:32.987671       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I0310 21:21:32.987728       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* 
	* ==> kube-proxy [4913edb02239] <==
	* I0310 21:05:44.319340       1 node.go:172] Successfully retrieved node IP: 192.168.49.97
	* I0310 21:05:44.322280       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.49.97), assume IPv4 operation
	* W0310 21:05:45.448045       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	* I0310 21:05:45.448225       1 server_others.go:185] Using iptables Proxier.
	* I0310 21:05:45.577717       1 server.go:650] Version: v1.20.2
	* I0310 21:05:45.586431       1 conntrack.go:52] Setting nf_conntrack_max to 131072
	* I0310 21:05:45.627151       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I0310 21:05:45.627674       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I0310 21:05:45.689461       1 config.go:315] Starting service config controller
	* I0310 21:05:45.689665       1 shared_informer.go:240] Waiting for caches to sync for service config
	* I0310 21:05:45.633250       1 config.go:224] Starting endpoint slice config controller
	* I0310 21:05:45.711942       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	* I0310 21:05:45.821612       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	* I0310 21:05:45.905248       1 shared_informer.go:247] Caches are synced for service config 
	* I0310 21:06:02.480102       1 trace.go:205] Trace[991622159]: "iptables restore" (10-Mar-2021 21:06:00.067) (total time: 2412ms):
	* Trace[991622159]: [2.4121989s] [2.4121989s] END
	* 
	* ==> kube-scheduler [f4b5b19203e9] <==
	* I0310 21:19:33.778321       1 trace.go:205] Trace[1941911365]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (10-Mar-2021 21:19:20.022) (total time: 13755ms):
	* Trace[1941911365]: [13.7559214s] [13.7559214s] END
	* E0310 21:19:33.778368       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* I0310 21:19:33.787323       1 trace.go:205] Trace[413556511]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (10-Mar-2021 21:19:16.271) (total time: 17515ms):
	* Trace[413556511]: [17.5158426s] [17.5158426s] END
	* E0310 21:19:33.787354       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* I0310 21:19:33.787671       1 trace.go:205] Trace[742098341]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (10-Mar-2021 21:19:16.363) (total time: 17423ms):
	* Trace[742098341]: [17.4239843s] [17.4239843s] END
	* E0310 21:19:33.787693       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* I0310 21:19:33.796451       1 trace.go:205] Trace[1954452167]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (10-Mar-2021 21:19:20.756) (total time: 13040ms):
	* Trace[1954452167]: [13.0401694s] [13.0401694s] END
	* E0310 21:19:33.796474       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* I0310 21:19:33.796623       1 trace.go:205] Trace[1286327852]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (10-Mar-2021 21:19:17.413) (total time: 16382ms):
	* Trace[1286327852]: [16.3826156s] [16.3826156s] END
	* E0310 21:19:33.796643       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* I0310 21:19:33.808871       1 trace.go:205] Trace[281986938]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (10-Mar-2021 21:19:17.656) (total time: 16109ms):
	* Trace[281986938]: [16.1092526s] [16.1092526s] END
	* E0310 21:19:33.808911       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* I0310 21:19:33.809351       1 trace.go:205] Trace[1259346462]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (10-Mar-2021 21:19:15.750) (total time: 18059ms):
	* Trace[1259346462]: [18.0591852s] [18.0591852s] END
	* E0310 21:19:33.809375       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* I0310 21:19:33.832127       1 trace.go:205] Trace[1399962802]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (10-Mar-2021 21:19:21.132) (total time: 12699ms):
	* Trace[1399962802]: [12.6997456s] [12.6997456s] END
	* E0310 21:19:33.832168       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* I0310 21:19:51.024425       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-03-10 21:13:03 UTC, end at Wed 2021-03-10 21:24:24 UTC. --
	* Mar 10 21:20:42 embed-certs-20210310205017-6496 kubelet[1501]: E0310 21:20:42.476607    1501 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/2659761d-6d3f-43ea-b1d9-04ec50811e6f-storage-provisioner-token-vq94d podName:2659761d-6d3f-43ea-b1d9-04ec50811e6f nodeName:}" failed. No retries permitted until 2021-03-10 21:20:41.4205374 +0000 UTC m=+311.095625801 (durationBeforeRetry 1s). Error: "MountVolume.SetUp failed for volume \"storage-provisioner-token-vq94d\" (UniqueName: \"kubernetes.io/secret/2659761d-6d3f-43ea-b1d9-04ec50811e6f-storage-provisioner-token-vq94d\") pod \"storage-provisioner\" (UID: \"2659761d-6d3f-43ea-b1d9-04ec50811e6f\") : failed to sync secret cache: timed out waiting for the condition"
	* Mar 10 21:20:43 embed-certs-20210310205017-6496 kubelet[1501]: I0310 21:20:43.582053    1501 trace.go:205] Trace[1021266628]: "Reflector ListAndWatch" name:object-"default"/"default-token-9nfw5" (10-Mar-2021 21:20:17.887) (total time: 16017ms):
	* Mar 10 21:20:43 embed-certs-20210310205017-6496 kubelet[1501]: Trace[1021266628]: ---"Objects listed" 16017ms (21:20:00.904)
	* Mar 10 21:20:43 embed-certs-20210310205017-6496 kubelet[1501]: Trace[1021266628]: [16.0174442s] [16.0174442s] END
	* Mar 10 21:20:55 embed-certs-20210310205017-6496 kubelet[1501]: I0310 21:20:55.665047    1501 trace.go:205] Trace[25258403]: "iptables Monitor CANARY check" (10-Mar-2021 21:20:49.214) (total time: 6450ms):
	* Mar 10 21:20:55 embed-certs-20210310205017-6496 kubelet[1501]: Trace[25258403]: [6.4505051s] [6.4505051s] END
	* Mar 10 21:21:05 embed-certs-20210310205017-6496 kubelet[1501]: E0310 21:21:04.792975    1501 controller.go:187] failed to update lease, error: Put "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/embed-certs-20210310205017-6496?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
	* Mar 10 21:21:09 embed-certs-20210310205017-6496 kubelet[1501]: W0310 21:21:09.289095    1501 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	* Mar 10 21:21:12 embed-certs-20210310205017-6496 kubelet[1501]: W0310 21:21:12.239076    1501 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
	* Mar 10 21:21:14 embed-certs-20210310205017-6496 kubelet[1501]: E0310 21:21:14.281093    1501 controller.go:187] failed to update lease, error: Operation cannot be fulfilled on leases.coordination.k8s.io "embed-certs-20210310205017-6496": the object has been modified; please apply your changes to the latest version and try again
	* Mar 10 21:21:20 embed-certs-20210310205017-6496 kubelet[1501]: E0310 21:21:20.295373    1501 kubelet.go:1638] Failed creating a mirror pod for "etcd-embed-certs-20210310205017-6496_kube-system(b7385372feec82bd4bdc625fc929c483)": pods "etcd-embed-certs-20210310205017-6496" already exists
	* Mar 10 21:21:26 embed-certs-20210310205017-6496 kubelet[1501]: E0310 21:21:26.793072    1501 kubelet.go:1638] Failed creating a mirror pod for "kube-controller-manager-embed-certs-20210310205017-6496_kube-system(57b8c22dbe6410e4bd36cf14b0f8bdc7)": pods "kube-controller-manager-embed-certs-20210310205017-6496" already exists
	* Mar 10 21:22:05 embed-certs-20210310205017-6496 kubelet[1501]: E0310 21:22:05.771184    1501 cadvisor_stats_provider.go:401] Partial failure issuing cadvisor.ContainerInfoV2: partial failures: ["/kubepods/besteffort/pod2659761d-6d3f-43ea-b1d9-04ec50811e6f": RecentStats: unable to find data in memory cache]
	* Mar 10 21:22:53 embed-certs-20210310205017-6496 kubelet[1501]: E0310 21:22:53.923769    1501 kubelet_node_status.go:447] Error updating node status, will retry: error getting node "embed-certs-20210310205017-6496": Get "https://control-plane.minikube.internal:8443/api/v1/nodes/embed-certs-20210310205017-6496?resourceVersion=0&timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
	* Mar 10 21:23:00 embed-certs-20210310205017-6496 kubelet[1501]: I0310 21:23:00.043499    1501 trace.go:205] Trace[497352887]: "iptables Monitor CANARY check" (10-Mar-2021 21:22:47.398) (total time: 12644ms):
	* Mar 10 21:23:00 embed-certs-20210310205017-6496 kubelet[1501]: Trace[497352887]: [12.6448181s] [12.6448181s] END
	* Mar 10 21:23:44 embed-certs-20210310205017-6496 kubelet[1501]: W0310 21:23:44.388495    1501 pod_container_deletor.go:79] Container "21d925bf6a5be5da42ec2fc713bedec67d54b7ef5ca83054e6ca7767ab0d8943" not found in pod's containers
	* Mar 10 21:23:46 embed-certs-20210310205017-6496 kubelet[1501]: E0310 21:23:46.519625    1501 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "busybox": operation timeout: context deadline exceeded
	* Mar 10 21:23:46 embed-certs-20210310205017-6496 kubelet[1501]: E0310 21:23:46.709854    1501 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "busybox_default(6db164f1-ae24-4e17-af81-56bd01054888)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "busybox": operation timeout: context deadline exceeded
	* Mar 10 21:23:46 embed-certs-20210310205017-6496 kubelet[1501]: E0310 21:23:46.710187    1501 kuberuntime_manager.go:755] createPodSandbox for pod "busybox_default(6db164f1-ae24-4e17-af81-56bd01054888)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "busybox": operation timeout: context deadline exceeded
	* Mar 10 21:23:46 embed-certs-20210310205017-6496 kubelet[1501]: E0310 21:23:46.710601    1501 pod_workers.go:191] Error syncing pod 6db164f1-ae24-4e17-af81-56bd01054888 ("busybox_default(6db164f1-ae24-4e17-af81-56bd01054888)"), skipping: failed to "CreatePodSandbox" for "busybox_default(6db164f1-ae24-4e17-af81-56bd01054888)" with CreatePodSandboxError: "CreatePodSandbox for pod \"busybox_default(6db164f1-ae24-4e17-af81-56bd01054888)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"busybox\": operation timeout: context deadline exceeded"
	* Mar 10 21:23:47 embed-certs-20210310205017-6496 kubelet[1501]: E0310 21:23:47.021896    1501 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "coredns-74ff55c5b-4w6mn": operation timeout: context deadline exceeded
	* Mar 10 21:23:47 embed-certs-20210310205017-6496 kubelet[1501]: E0310 21:23:47.023323    1501 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "coredns-74ff55c5b-4w6mn_kube-system(0b339996-09da-4e8b-82cb-967e22a2b12a)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "coredns-74ff55c5b-4w6mn": operation timeout: context deadline exceeded
	* Mar 10 21:23:47 embed-certs-20210310205017-6496 kubelet[1501]: E0310 21:23:47.023868    1501 kuberuntime_manager.go:755] createPodSandbox for pod "coredns-74ff55c5b-4w6mn_kube-system(0b339996-09da-4e8b-82cb-967e22a2b12a)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "coredns-74ff55c5b-4w6mn": operation timeout: context deadline exceeded
	* Mar 10 21:23:47 embed-certs-20210310205017-6496 kubelet[1501]: E0310 21:23:47.025369    1501 pod_workers.go:191] Error syncing pod 0b339996-09da-4e8b-82cb-967e22a2b12a ("coredns-74ff55c5b-4w6mn_kube-system(0b339996-09da-4e8b-82cb-967e22a2b12a)"), skipping: failed to "CreatePodSandbox" for "coredns-74ff55c5b-4w6mn_kube-system(0b339996-09da-4e8b-82cb-967e22a2b12a)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-74ff55c5b-4w6mn_kube-system(0b339996-09da-4e8b-82cb-967e22a2b12a)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"coredns-74ff55c5b-4w6mn\": operation timeout: context deadline exceeded"
	* 
	* ==> storage-provisioner [765eeaf3ce81] <==
	* I0310 21:07:35.160768       1 storage_provisioner.go:115] Initializing the minikube storage provisioner...
	* I0310 21:07:37.346480       1 storage_provisioner.go:140] Storage provisioner initialized, now starting service!
	* I0310 21:07:37.591131       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
	* I0310 21:07:38.971054       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	* I0310 21:07:38.974891       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20210310205017-6496_a341de06-e553-4b94-8067-b98e4114ac4d!
	* I0310 21:07:38.975098       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"31bb2ba0-7622-4a2f-8771-6a04779c1650", APIVersion:"v1", ResourceVersion:"578", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20210310205017-6496_a341de06-e553-4b94-8067-b98e4114ac4d became leader
	* I0310 21:07:40.075875       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20210310205017-6496_a341de06-e553-4b94-8067-b98e4114ac4d!
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------|------------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| Command |                      Args                      |                    Profile                     |          User           | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------|------------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| -p      | docker-flags-20210310201637-6496               | docker-flags-20210310201637-6496               | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:47:18 GMT | Wed, 10 Mar 2021 20:49:03 GMT |
	|         | logs -n 25                                     |                                                |                         |         |                               |                               |
	| delete  | -p                                             | docker-flags-20210310201637-6496               | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:49:21 GMT | Wed, 10 Mar 2021 20:49:47 GMT |
	|         | docker-flags-20210310201637-6496               |                                                |                         |         |                               |                               |
	| delete  | -p                                             | force-systemd-env-20210310201637-6496          | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:49:41 GMT | Wed, 10 Mar 2021 20:50:17 GMT |
	|         | force-systemd-env-20210310201637-6496          |                                                |                         |         |                               |                               |
	| -p      | cert-options-20210310203249-6496               | cert-options-20210310203249-6496               | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:50:36 GMT | Wed, 10 Mar 2021 20:50:43 GMT |
	|         | ssh openssl x509 -text -noout -in              |                                                |                         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt          |                                                |                         |         |                               |                               |
	| delete  | -p                                             | cert-options-20210310203249-6496               | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:51:10 GMT | Wed, 10 Mar 2021 20:51:56 GMT |
	|         | cert-options-20210310203249-6496               |                                                |                         |         |                               |                               |
	| delete  | -p                                             | disable-driver-mounts-20210310205156-6496      | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:51:57 GMT | Wed, 10 Mar 2021 20:52:02 GMT |
	|         | disable-driver-mounts-20210310205156-6496      |                                                |                         |         |                               |                               |
	| -p      | force-systemd-flag-20210310203447-6496         | force-systemd-flag-20210310203447-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:53:03 GMT | Wed, 10 Mar 2021 20:53:44 GMT |
	|         | ssh docker info --format                       |                                                |                         |         |                               |                               |
	|         |                               |                                                |                         |         |                               |                               |
	| delete  | -p                                             | force-systemd-flag-20210310203447-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:54:07 GMT | Wed, 10 Mar 2021 20:54:36 GMT |
	|         | force-systemd-flag-20210310203447-6496         |                                                |                         |         |                               |                               |
	| stop    | -p                                             | old-k8s-version-20210310204459-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:02:19 GMT | Wed, 10 Mar 2021 21:02:40 GMT |
	|         | old-k8s-version-20210310204459-6496            |                                                |                         |         |                               |                               |
	|         | --alsologtostderr -v=3                         |                                                |                         |         |                               |                               |
	| addons  | enable dashboard -p                            | old-k8s-version-20210310204459-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:02:42 GMT | Wed, 10 Mar 2021 21:02:42 GMT |
	|         | old-k8s-version-20210310204459-6496            |                                                |                         |         |                               |                               |
	| -p      | embed-certs-20210310205017-6496                | embed-certs-20210310205017-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:07:05 GMT | Wed, 10 Mar 2021 21:08:33 GMT |
	|         | logs -n 25                                     |                                                |                         |         |                               |                               |
	| start   | -p                                             | stopped-upgrade-20210310201637-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:52:21 GMT | Wed, 10 Mar 2021 21:09:23 GMT |
	|         | stopped-upgrade-20210310201637-6496            |                                                |                         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                |                                                |                         |         |                               |                               |
	|         | -v=1 --driver=docker                           |                                                |                         |         |                               |                               |
	| logs    | -p                                             | stopped-upgrade-20210310201637-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:09:23 GMT | Wed, 10 Mar 2021 21:10:51 GMT |
	|         | stopped-upgrade-20210310201637-6496            |                                                |                         |         |                               |                               |
	| delete  | -p                                             | stopped-upgrade-20210310201637-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:10:52 GMT | Wed, 10 Mar 2021 21:11:13 GMT |
	|         | stopped-upgrade-20210310201637-6496            |                                                |                         |         |                               |                               |
	| delete  | -p                                             | running-upgrade-20210310201637-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:11:45 GMT | Wed, 10 Mar 2021 21:12:11 GMT |
	|         | running-upgrade-20210310201637-6496            |                                                |                         |         |                               |                               |
	| stop    | -p                                             | embed-certs-20210310205017-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:12:03 GMT | Wed, 10 Mar 2021 21:12:38 GMT |
	|         | embed-certs-20210310205017-6496                |                                                |                         |         |                               |                               |
	|         | --alsologtostderr -v=3                         |                                                |                         |         |                               |                               |
	| addons  | enable dashboard -p                            | embed-certs-20210310205017-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:12:40 GMT | Wed, 10 Mar 2021 21:12:41 GMT |
	|         | embed-certs-20210310205017-6496                |                                                |                         |         |                               |                               |
	| -p      | kubernetes-upgrade-20210310201637-6496         | kubernetes-upgrade-20210310201637-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:11:50 GMT | Wed, 10 Mar 2021 21:15:02 GMT |
	|         | logs -n 25                                     |                                                |                         |         |                               |                               |
	| delete  | -p                                             | kubernetes-upgrade-20210310201637-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:15:15 GMT | Wed, 10 Mar 2021 21:15:46 GMT |
	|         | kubernetes-upgrade-20210310201637-6496         |                                                |                         |         |                               |                               |
	| delete  | -p                                             | missing-upgrade-20210310201637-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:15:38 GMT | Wed, 10 Mar 2021 21:16:03 GMT |
	|         | missing-upgrade-20210310201637-6496            |                                                |                         |         |                               |                               |
	| -p      | default-k8s-different-port-20210310205202-6496 | default-k8s-different-port-20210310205202-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:12:03 GMT | Wed, 10 Mar 2021 21:16:15 GMT |
	|         | logs -n 25                                     |                                                |                         |         |                               |                               |
	| stop    | -p                                             | no-preload-20210310204947-6496                 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:15:57 GMT | Wed, 10 Mar 2021 21:16:31 GMT |
	|         | no-preload-20210310204947-6496                 |                                                |                         |         |                               |                               |
	|         | --alsologtostderr -v=3                         |                                                |                         |         |                               |                               |
	| addons  | enable dashboard -p                            | no-preload-20210310204947-6496                 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:16:33 GMT | Wed, 10 Mar 2021 21:16:34 GMT |
	|         | no-preload-20210310204947-6496                 |                                                |                         |         |                               |                               |
	| delete  | -p                                             | old-k8s-version-20210310204459-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:18:53 GMT | Wed, 10 Mar 2021 21:19:16 GMT |
	|         | old-k8s-version-20210310204459-6496            |                                                |                         |         |                               |                               |
	| delete  | -p                                             | no-preload-20210310204947-6496                 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:20:59 GMT | Wed, 10 Mar 2021 21:21:26 GMT |
	|         | no-preload-20210310204947-6496                 |                                                |                         |         |                               |                               |
	|---------|------------------------------------------------|------------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/03/10 21:21:26
	* Running on machine: windows-server-1
	* Binary: Built with gc go1.16 for windows/amd64
	* Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	* I0310 21:21:26.714460   21944 out.go:239] Setting OutFile to fd 2916 ...
	* I0310 21:21:26.716415   21944 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 21:21:26.716415   21944 out.go:252] Setting ErrFile to fd 1604...
	* I0310 21:21:26.716415   21944 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 21:21:26.737886   21944 out.go:246] Setting JSON to false
	* I0310 21:21:26.745279   21944 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":36752,"bootTime":1615374534,"procs":115,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	* W0310 21:21:26.745279   21944 start.go:116] gopshost.Virtualization returned error: not implemented yet
	* I0310 21:21:26.754934   21944 out.go:129] * [enable-default-cni-20210310212126-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	* I0310 21:21:26.758395   21944 out.go:129]   - MINIKUBE_LOCATION=10722
	* I0310 21:21:26.772793   21944 driver.go:323] Setting default libvirt URI to qemu:///system
	* I0310 21:21:27.384722   21944 docker.go:119] docker version: linux-20.10.2
	* I0310 21:21:27.389877   21944 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 21:21:28.483849   21944 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.093974s)
	* I0310 21:21:28.485371   21944 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:73 SystemTime:2021-03-10 21:21:27.9947811 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 21:21:28.489817   21944 out.go:129] * Using the docker driver based on user configuration
	* I0310 21:21:28.489817   21944 start.go:276] selected driver: docker
	* I0310 21:21:28.490029   21944 start.go:718] validating driver "docker" against <nil>
	* I0310 21:21:28.490119   21944 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	* I0310 21:21:29.636511   21944 out.go:129] 
	* W0310 21:21:29.647232   21944 out.go:191] X Docker Desktop only has 20001MiB available, you may encounter application deployment failures.
	* W0310 21:21:29.647947   21944 out.go:191] * Suggestion: 
	* 
	*     1. Open the "Docker Desktop" menu by clicking the Docker icon in the system tray
	*     2. Click "Settings"
	*     3. Click "Resources"
	*     4. Increase "Memory" slider bar to 2.25 GB or higher
	*     5. Click "Apply & Restart"
	* W0310 21:21:29.648223   21944 out.go:191] * Documentation: https://docs.docker.com/docker-for-windows/#resources
	* I0310 21:21:29.648223   21944 out.go:129] 
	* I0310 21:21:29.663940   21944 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 21:21:30.712365   21944 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0479583s)
	* I0310 21:21:30.712542   21944 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:73 SystemTime:2021-03-10 21:21:30.2444337 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 21:21:30.713121   21944 start_flags.go:253] no existing cluster config was found, will generate one from the flags 
	* E0310 21:21:30.713374   21944 start_flags.go:305] Found deprecated --enable-default-cni flag, setting --cni=bridge
	* I0310 21:21:30.713987   21944 start_flags.go:717] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	* I0310 21:21:30.714132   21944 cni.go:74] Creating CNI manager for "bridge"
	* I0310 21:21:30.714241   21944 start_flags.go:393] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	* I0310 21:21:30.714241   21944 start_flags.go:398] config:
	* {Name:enable-default-cni-20210310212126-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:enable-default-cni-20210310212126-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 21:21:30.718739   21944 out.go:129] * Starting control plane node enable-default-cni-20210310212126-6496 in cluster enable-default-cni-20210310212126-6496
	* I0310 21:21:31.344953   21944 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	* I0310 21:21:31.344953   21944 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	* I0310 21:21:31.345122   21944 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 21:21:31.345122   21944 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 21:21:31.345122   21944 cache.go:54] Caching tarball of preloaded images
	* I0310 21:21:31.345807   21944 preload.go:131] Found C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	* I0310 21:21:31.345807   21944 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on docker
	* I0310 21:21:31.346030   21944 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\enable-default-cni-20210310212126-6496\config.json ...
	* I0310 21:21:31.346760   21944 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\enable-default-cni-20210310212126-6496\config.json: {Name:mk2c1a6e70d43e87f7eeb5c00e9f41054018ea23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 21:21:31.368646   21944 cache.go:185] Successfully downloaded all kic artifacts
	* I0310 21:21:31.369501   21944 start.go:313] acquiring machines lock for enable-default-cni-20210310212126-6496: {Name:mkd52dedd9ad8be56cb2110413d2aa38ec0daf20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:21:31.369862   21944 start.go:317] acquired machines lock for "enable-default-cni-20210310212126-6496" in 360.9µs
	* I0310 21:21:31.370277   21944 start.go:89] Provisioning new machine with config: &{Name:enable-default-cni-20210310212126-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:enable-default-cni-20210310212126-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
	* I0310 21:21:31.370627   21944 start.go:126] createHost starting for "" (driver="docker")
	* I0310 21:21:31.223845   13364 docker.go:388] Took 71.570323 seconds to copy over tarball
	* I0310 21:21:31.244793   13364 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	* I0310 21:21:31.374111   21944 out.go:150] * Creating docker container (CPUs=2, Memory=1800MB) ...
	* I0310 21:21:31.375270   21944 start.go:160] libmachine.API.Create for "enable-default-cni-20210310212126-6496" (driver="docker")
	* I0310 21:21:31.375628   21944 client.go:168] LocalClient.Create starting
	* I0310 21:21:31.376674   21944 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\ca.pem
	* I0310 21:21:31.377222   21944 main.go:121] libmachine: Decoding PEM data...
	* I0310 21:21:31.377222   21944 main.go:121] libmachine: Parsing certificate...
	* I0310 21:21:31.377861   21944 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\cert.pem
	* I0310 21:21:31.378154   21944 main.go:121] libmachine: Decoding PEM data...
	* I0310 21:21:31.378154   21944 main.go:121] libmachine: Parsing certificate...
	* I0310 21:21:31.412702   21944 cli_runner.go:115] Run: docker network inspect enable-default-cni-20210310212126-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	* W0310 21:21:32.067660   21944 cli_runner.go:162] docker network inspect enable-default-cni-20210310212126-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	* I0310 21:21:32.077318   21944 network_create.go:240] running [docker network inspect enable-default-cni-20210310212126-6496] to gather additional debugging logs...
	* I0310 21:21:32.077318   21944 cli_runner.go:115] Run: docker network inspect enable-default-cni-20210310212126-6496
	* W0310 21:21:32.689406   21944 cli_runner.go:162] docker network inspect enable-default-cni-20210310212126-6496 returned with exit code 1
	* I0310 21:21:32.689406   21944 network_create.go:243] error running [docker network inspect enable-default-cni-20210310212126-6496]: docker network inspect enable-default-cni-20210310212126-6496: exit status 1
	* stdout:
	* []
	* 
	* stderr:
	* Error: No such network: enable-default-cni-20210310212126-6496
	* I0310 21:21:32.689406   21944 network_create.go:245] output of [docker network inspect enable-default-cni-20210310212126-6496]: -- stdout --
	* []
	* 
	* -- /stdout --
	* ** stderr ** 
	* Error: No such network: enable-default-cni-20210310212126-6496
	* 
	* ** /stderr **
	* I0310 21:21:32.699405   21944 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	* I0310 21:21:33.351584   21944 network.go:193] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	* I0310 21:21:33.353269   21944 network_create.go:91] attempt to create network 192.168.49.0/24 with subnet: enable-default-cni-20210310212126-6496 and gateway 192.168.49.1 and MTU of 1500 ...
	* I0310 21:21:33.358600   21944 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20210310212126-6496
	* W0310 21:21:33.958450   21944 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20210310212126-6496 returned with exit code 1
	* W0310 21:21:33.958632   21944 out.go:191] ! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts
	* I0310 21:21:33.985769   21944 cli_runner.go:115] Run: docker ps -a --format 
	* I0310 21:21:34.612013   21944 cli_runner.go:115] Run: docker volume create enable-default-cni-20210310212126-6496 --label name.minikube.sigs.k8s.io=enable-default-cni-20210310212126-6496 --label created_by.minikube.sigs.k8s.io=true
	* I0310 21:21:35.181520   21944 oci.go:102] Successfully created a docker volume enable-default-cni-20210310212126-6496
	* I0310 21:21:35.191056   21944 cli_runner.go:115] Run: docker run --rm --name enable-default-cni-20210310212126-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20210310212126-6496 --entrypoint /usr/bin/test -v enable-default-cni-20210310212126-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib
	* I0310 21:21:40.965378   21944 cli_runner.go:168] Completed: docker run --rm --name enable-default-cni-20210310212126-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20210310212126-6496 --entrypoint /usr/bin/test -v enable-default-cni-20210310212126-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib: (5.7738807s)
	* I0310 21:21:40.965378   21944 oci.go:106] Successfully prepared a docker volume enable-default-cni-20210310212126-6496
	* I0310 21:21:40.965378   21944 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 21:21:40.965796   21944 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 21:21:40.966165   21944 kic.go:175] Starting extracting preloaded images to volume ...
	* I0310 21:21:40.975090   21944 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-20210310212126-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir
	* I0310 21:21:40.983995   21944 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* W0310 21:21:41.680852   21944 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-20210310212126-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	* I0310 21:21:41.680852   21944 kic.go:182] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-20210310212126-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	* stdout:
	* 
	* stderr:
	* docker: Error response from daemon: status code not OK but 500: System.Exception: The notification platform is unavailable.
	* 
	* The notification platform is unavailable.
	* 
	*    at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)
	*    at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.WPF\PromptShareDirectory.cs:line 53
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__8.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 95
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 55
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.HttpApi\Controllers\FilesharingController.cs:line 21
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()
	* 
	* CreateToastNotifier
	* Windows.UI, Version=255.255.255.255, Culture=neutral, PublicKeyToken=null, ContentType=WindowsRuntime
	* Windows.UI.Notifications.ToastNotificationManager
	* Windows.UI.Notifications.ToastNotifier CreateToastNotifier(System.String)
	* RestrictedDescription: The notification platform is unavailable.
	* See 'docker run --help'.
	* I0310 21:21:42.016850   21944 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0328561s)
	* I0310 21:21:42.016850   21944 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:73 SystemTime:2021-03-10 21:21:41.5361852 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 21:21:42.025507   21944 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	* I0310 21:21:43.043062   21944 cli_runner.go:168] Completed: docker info --format "'{{json .SecurityOptions}}'": (1.017556s)
	* I0310 21:21:43.050668   21944 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-20210310212126-6496 --name enable-default-cni-20210310212126-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20210310212126-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-20210310212126-6496 --volume enable-default-cni-20210310212126-6496:/var --security-opt apparmor=unconfined --memory=1800mb --memory-swap=1800mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
	* I0310 21:21:43.260843   18752 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088: (23.1943659s)
	* I0310 21:21:43.261127   18752 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 from cache
	* I0310 21:21:43.261127   18752 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432
	* I0310 21:21:43.270290   18752 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432
	* I0310 21:21:47.675878   21944 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-20210310212126-6496 --name enable-default-cni-20210310212126-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20210310212126-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-20210310212126-6496 --volume enable-default-cni-20210310212126-6496:/var --security-opt apparmor=unconfined --memory=1800mb --memory-swap=1800mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e: (4.6252156s)
	* I0310 21:21:47.685330   21944 cli_runner.go:115] Run: docker container inspect enable-default-cni-20210310212126-6496 --format=
	* I0310 21:21:48.279345   21944 cli_runner.go:115] Run: docker container inspect enable-default-cni-20210310212126-6496 --format=
	* I0310 21:21:48.899366   21944 cli_runner.go:115] Run: docker exec enable-default-cni-20210310212126-6496 stat /var/lib/dpkg/alternatives/iptables
	* I0310 21:21:50.192553   21944 cli_runner.go:168] Completed: docker exec enable-default-cni-20210310212126-6496 stat /var/lib/dpkg/alternatives/iptables: (1.2931888s)
	* I0310 21:21:50.193083   21944 oci.go:278] the created container "enable-default-cni-20210310212126-6496" has a running status.
	* I0310 21:21:50.193083   21944 kic.go:206] Creating ssh key for kic: C:\Users\jenkins\.minikube\machines\enable-default-cni-20210310212126-6496\id_rsa...
	* I0310 21:21:50.312352   21944 kic_runner.go:188] docker (temp): C:\Users\jenkins\.minikube\machines\enable-default-cni-20210310212126-6496\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	* I0310 21:21:51.346711   21944 cli_runner.go:115] Run: docker container inspect enable-default-cni-20210310212126-6496 --format=
	* W0310 21:21:51.765464   22316 out.go:191] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	* stdout:
	* [init] Using Kubernetes version: v1.20.2
	* [preflight] Running pre-flight checks
	* [preflight] Pulling images required for setting up a Kubernetes cluster
	* [preflight] This might take a minute or two, depending on the speed of your internet connection
	* [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	* [certs] Using certificateDir folder "/var/lib/minikube/certs"
	* [certs] Using existing ca certificate authority
	* [certs] Using existing apiserver certificate and key on disk
	* [certs] Generating "apiserver-kubelet-client" certificate and key
	* [certs] Generating "front-proxy-ca" certificate and key
	* [certs] Generating "front-proxy-client" certificate and key
	* [certs] Generating "etcd/ca" certificate and key
	* [certs] Generating "etcd/server" certificate and key
	* [certs] etcd/server serving cert is signed for DNS names [false-20210310211211-6496 localhost] and IPs [172.17.0.8 127.0.0.1 ::1]
	* [certs] Generating "etcd/peer" certificate and key
	* [certs] etcd/peer serving cert is signed for DNS names [false-20210310211211-6496 localhost] and IPs [172.17.0.8 127.0.0.1 ::1]
	* [certs] Generating "etcd/healthcheck-client" certificate and key
	* [certs] Generating "apiserver-etcd-client" certificate and key
	* [certs] Generating "sa" key and public key
	* [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	* [kubeconfig] Writing "admin.conf" kubeconfig file
	* [kubeconfig] Writing "kubelet.conf" kubeconfig file
	* [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	* [kubeconfig] Writing "scheduler.conf" kubeconfig file
	* [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	* [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	* [kubelet-start] Starting the kubelet
	* [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	* [control-plane] Creating static Pod manifest for "kube-apiserver"
	* [control-plane] Creating static Pod manifest for "kube-controller-manager"
	* [control-plane] Creating static Pod manifest for "kube-scheduler"
	* [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	* [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	* [kubelet-check] Initial timeout of 40s passed.
	* 
	* 	Unfortunately, an error has occurred:
	* 		timed out waiting for the condition
	* 
	* 	This error is likely caused by:
	* 		- The kubelet is not running
	* 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	* 
	* 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	* 		- 'systemctl status kubelet'
	* 		- 'journalctl -xeu kubelet'
	* 
	* 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	* 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	* 
	* 	Here is one example how you may list all Kubernetes containers running in docker:
	* 		- 'docker ps -a | grep kube | grep -v pause'
	* 		Once you have found the failing container, you can inspect its logs with:
	* 		- 'docker logs CONTAINERID'
	* 
	* 
	* stderr:
	* 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	* 	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	* 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	* 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
	* 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	* error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	* To see the stack trace of this error execute with --v=5 or higher
	* 
	* I0310 21:21:51.766143   22316 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	* I0310 21:21:51.300378   16712 out.go:150]   - Booting up control plane ...
	* I0310 21:21:52.014139   21944 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	* I0310 21:21:52.014139   21944 kic_runner.go:115] Args: [docker exec --privileged enable-default-cni-20210310212126-6496 chown docker:docker /home/docker/.ssh/authorized_keys]
	* I0310 21:21:53.404169   21944 kic_runner.go:124] Done: [docker exec --privileged enable-default-cni-20210310212126-6496 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.3900319s)
	* I0310 21:21:53.408659   21944 kic.go:240] ensuring only current user has permissions to key file located at : C:\Users\jenkins\.minikube\machines\enable-default-cni-20210310212126-6496\id_rsa...
	* I0310 21:21:54.253675   21944 cli_runner.go:115] Run: docker container inspect enable-default-cni-20210310212126-6496 --format=
	* I0310 21:21:54.846469   21944 machine.go:88] provisioning docker machine ...
	* I0310 21:21:54.846887   21944 ubuntu.go:169] provisioning hostname "enable-default-cni-20210310212126-6496"
	* I0310 21:21:54.854742   21944 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210310212126-6496
	* I0310 21:21:55.483804   21944 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:21:55.484891   21944 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55208 <nil> <nil>}
	* I0310 21:21:55.484891   21944 main.go:121] libmachine: About to run SSH command:
	* sudo hostname enable-default-cni-20210310212126-6496 && echo "enable-default-cni-20210310212126-6496" | sudo tee /etc/hostname
	* I0310 21:21:55.495004   21944 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	* I0310 21:21:59.723989   21944 main.go:121] libmachine: SSH cmd err, output: <nil>: enable-default-cni-20210310212126-6496
	* 
	* I0310 21:21:59.732328   21944 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210310212126-6496
	* I0310 21:22:00.355262   21944 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:22:00.356349   21944 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55208 <nil> <nil>}
	* I0310 21:22:00.356581   21944 main.go:121] libmachine: About to run SSH command:
	* 
	* 		if ! grep -xq '.*\senable-default-cni-20210310212126-6496' /etc/hosts; then
	* 			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	* 				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-20210310212126-6496/g' /etc/hosts;
	* 			else 
	* 				echo '127.0.1.1 enable-default-cni-20210310212126-6496' | sudo tee -a /etc/hosts; 
	* 			fi
	* 		fi
	* I0310 21:22:01.168300   21944 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	* I0310 21:22:01.168621   21944 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	* I0310 21:22:01.168621   21944 ubuntu.go:177] setting up certificates
	* I0310 21:22:01.168621   21944 provision.go:83] configureAuth start
	* I0310 21:22:01.178498   21944 cli_runner.go:115] Run: docker container inspect -f "" enable-default-cni-20210310212126-6496
	* I0310 21:22:01.785368   21944 provision.go:137] copyHostCerts
	* I0310 21:22:01.786128   21944 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	* I0310 21:22:01.786128   21944 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	* I0310 21:22:01.786568   21944 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	* I0310 21:22:01.797422   21944 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	* I0310 21:22:01.797422   21944 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	* I0310 21:22:01.797875   21944 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	* I0310 21:22:01.800802   21944 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	* I0310 21:22:01.800802   21944 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	* I0310 21:22:01.801403   21944 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	* I0310 21:22:01.803802   21944 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.enable-default-cni-20210310212126-6496 san=[172.17.0.7 127.0.0.1 localhost 127.0.0.1 minikube enable-default-cni-20210310212126-6496]
	* I0310 21:22:02.615586   21944 provision.go:165] copyRemoteCerts
	* I0310 21:22:02.627566   21944 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	* I0310 21:22:02.634678   21944 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210310212126-6496
	* I0310 21:22:03.263326   21944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55208 SSHKeyPath:C:\Users\jenkins\.minikube\machines\enable-default-cni-20210310212126-6496\id_rsa Username:docker}
	* I0310 21:22:03.876767   21944 ssh_runner.go:189] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.2485968s)
	* I0310 21:22:03.877819   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	* I0310 21:22:04.180918   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1285 bytes)
	* I0310 21:22:04.468713   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	* I0310 21:22:04.732739   21944 provision.go:86] duration metric: configureAuth took 3.5638515s
	* I0310 21:22:04.732936   21944 ubuntu.go:193] setting minikube options for container-runtime
	* I0310 21:22:04.749524   21944 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210310212126-6496
	* I0310 21:22:05.343310   21944 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:22:05.343894   21944 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55208 <nil> <nil>}
	* I0310 21:22:05.343894   21944 main.go:121] libmachine: About to run SSH command:
	* df --output=fstype / | tail -n 1
	* I0310 21:22:06.188072   21944 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	* 
	* I0310 21:22:06.188804   21944 ubuntu.go:71] root file system type: overlay
	* I0310 21:22:06.191032   21944 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	* I0310 21:22:06.198918   21944 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210310212126-6496
	* I0310 21:22:06.633173   18752 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432: (23.3629134s)
	* I0310 21:22:06.633404   18752 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 from cache
	* I0310 21:22:06.633404   18752 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232
	* I0310 21:22:06.641408   18752 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232
	* I0310 21:22:06.782659   21944 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:22:06.782659   21944 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55208 <nil> <nil>}
	* I0310 21:22:06.782659   21944 main.go:121] libmachine: About to run SSH command:
	* sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP \$MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* " | sudo tee /lib/systemd/system/docker.service.new
	* I0310 21:22:07.603886   21944 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP $MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* 
	* I0310 21:22:07.603886   21944 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210310212126-6496
	* I0310 21:22:08.211395   21944 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:22:08.212166   21944 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55208 <nil> <nil>}
	* I0310 21:22:08.212573   21944 main.go:121] libmachine: About to run SSH command:
	* sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	* I0310 21:22:10.790555   13364 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (39.5455445s)
	* I0310 21:22:10.791081   13364 ssh_runner.go:100] rm: /preloaded.tar.lz4
	* I0310 21:22:12.320483   13364 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	* I0310 21:22:12.375854   13364 ssh_runner.go:316] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3125 bytes)
	* I0310 21:22:12.480425   13364 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	* I0310 21:22:14.010615   13364 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.5301924s)
	* I0310 21:22:14.029519   13364 ssh_runner.go:149] Run: sudo systemctl restart docker
	* I0310 21:22:18.457850   21944 main.go:121] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-01-29 14:31:32.000000000 +0000
	* +++ /lib/systemd/system/docker.service.new	2021-03-10 21:22:07.587157000 +0000
	* @@ -1,30 +1,32 @@
	*  [Unit]
	*  Description=Docker Application Container Engine
	*  Documentation=https://docs.docker.com
	* +BindsTo=containerd.service
	*  After=network-online.target firewalld.service containerd.service
	*  Wants=network-online.target
	* -Requires=docker.socket containerd.service
	* +Requires=docker.socket
	* +StartLimitBurst=3
	* +StartLimitIntervalSec=60
	*  
	*  [Service]
	*  Type=notify
	* -# the default is not to use systemd for cgroups because the delegate issues still
	* -# exists and systemd currently does not support the cgroup feature set required
	* -# for containers run by docker
	* -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	* -ExecReload=/bin/kill -s HUP $MAINPID
	* -TimeoutSec=0
	* -RestartSec=2
	* -Restart=always
	* -
	* -# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	* -# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	* -# to make them work for either version of systemd.
	* -StartLimitBurst=3
	* +Restart=on-failure
	*  
	* -# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	* -# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	* -# this option work for either version of systemd.
	* -StartLimitInterval=60s
	* +
	* +
	* +# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* +# The base configuration already specifies an 'ExecStart=...' command. The first directive
	* +# here is to clear out that command inherited from the base configuration. Without this,
	* +# the command from the base configuration and the command specified here are treated as
	* +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* +# will catch this invalid input and refuse to start the service with an error like:
	* +#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* +
	* +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* +# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* +ExecStart=
	* +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* +ExecReload=/bin/kill -s HUP $MAINPID
	*  
	*  # Having non-zero Limit*s causes performance problems due to accounting overhead
	*  # in the kernel. We recommend using cgroups to do container-local accounting.
	* @@ -32,16 +34,16 @@
	*  LimitNPROC=infinity
	*  LimitCORE=infinity
	*  
	* -# Comment TasksMax if your systemd version does not support it.
	* -# Only systemd 226 and above support this option.
	* +# Uncomment TasksMax if your systemd version supports it.
	* +# Only systemd 226 and above support this version.
	*  TasksMax=infinity
	* +TimeoutStartSec=0
	*  
	*  # set delegate yes so that systemd does not reset the cgroups of docker containers
	*  Delegate=yes
	*  
	*  # kill only the docker process, not all processes in the cgroup
	*  KillMode=process
	* -OOMScoreAdjust=-500
	*  
	*  [Install]
	*  WantedBy=multi-user.target
	* Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	* Executing: /lib/systemd/systemd-sysv-install enable docker
	* 
	* I0310 21:22:18.458230   21944 machine.go:91] provisioned docker machine in 23.6109936s
	* I0310 21:22:18.458230   21944 client.go:171] LocalClient.Create took 47.0824419s
	* I0310 21:22:18.458230   21944 start.go:168] duration metric: libmachine.API.Create for "enable-default-cni-20210310212126-6496" took 47.0830204s
	* I0310 21:22:18.458230   21944 start.go:267] post-start starting for "enable-default-cni-20210310212126-6496" (driver="docker")
	* I0310 21:22:18.458230   21944 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	* I0310 21:22:18.470243   21944 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	* I0310 21:22:18.478050   21944 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210310212126-6496
	* I0310 21:22:19.062131   21944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55208 SSHKeyPath:C:\Users\jenkins\.minikube\machines\enable-default-cni-20210310212126-6496\id_rsa Username:docker}
	* I0310 21:22:19.380948   21944 ssh_runner.go:149] Run: cat /etc/os-release
	* I0310 21:22:19.431050   21944 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	* I0310 21:22:19.431050   21944 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	* I0310 21:22:19.431050   21944 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	* I0310 21:22:19.431050   21944 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	* I0310 21:22:19.431310   21944 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	* I0310 21:22:19.431642   21944 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	* I0310 21:22:19.435409   21944 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	* I0310 21:22:19.437278   21944 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	* I0310 21:22:19.450312   21944 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	* I0310 21:22:19.556667   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	* I0310 21:22:19.867795   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	* I0310 21:22:20.030839   21944 start.go:270] post-start completed in 1.5726113s
	* I0310 21:22:20.070124   21944 cli_runner.go:115] Run: docker container inspect -f "" enable-default-cni-20210310212126-6496
	* I0310 21:22:20.634771   21944 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\enable-default-cni-20210310212126-6496\config.json ...
	* I0310 21:22:20.672752   21944 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	* I0310 21:22:20.693865   21944 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210310212126-6496
	* I0310 21:22:21.317057   21944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55208 SSHKeyPath:C:\Users\jenkins\.minikube\machines\enable-default-cni-20210310212126-6496\id_rsa Username:docker}
	* I0310 21:22:18.780250   13364 ssh_runner.go:189] Completed: sudo systemctl restart docker: (4.7507375s)
	* I0310 21:22:18.788940   13364 ssh_runner.go:149] Run: docker images --format :
	* I0310 21:22:19.490965   13364 docker.go:423] Got preloaded images: -- stdout --
	* k8s.gcr.io/kube-proxy:v1.20.2
	* k8s.gcr.io/kube-apiserver:v1.20.2
	* k8s.gcr.io/kube-controller-manager:v1.20.2
	* k8s.gcr.io/kube-scheduler:v1.20.2
	* kubernetesui/dashboard:v2.1.0
	* gcr.io/k8s-minikube/storage-provisioner:v4
	* k8s.gcr.io/etcd:3.4.13-0
	* k8s.gcr.io/coredns:1.7.0
	* kubernetesui/metrics-scraper:v1.0.4
	* k8s.gcr.io/pause:3.2
	* 
	* -- /stdout --
	* I0310 21:22:19.490965   13364 cache_images.go:73] Images are preloaded, skipping loading
	* I0310 21:22:19.500276   13364 ssh_runner.go:149] Run: docker info --format 
	* I0310 21:22:21.033611   13364 ssh_runner.go:189] Completed: docker info --format : (1.5333366s)
	* I0310 21:22:21.033611   13364 cni.go:74] Creating CNI manager for "testdata\\weavenet.yaml"
	* I0310 21:22:21.033611   13364 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	* I0310 21:22:21.033611   13364 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.3 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-weave-20210310211916-6496 NodeName:custom-weave-20210310211916-6496 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	* I0310 21:22:21.033611   13364 kubeadm.go:154] kubeadm config:
	* apiVersion: kubeadm.k8s.io/v1beta2
	* kind: InitConfiguration
	* localAPIEndpoint:
	*   advertiseAddress: 172.17.0.3
	*   bindPort: 8443
	* bootstrapTokens:
	*   - groups:
	*       - system:bootstrappers:kubeadm:default-node-token
	*     ttl: 24h0m0s
	*     usages:
	*       - signing
	*       - authentication
	* nodeRegistration:
	*   criSocket: /var/run/dockershim.sock
	*   name: "custom-weave-20210310211916-6496"
	*   kubeletExtraArgs:
	*     node-ip: 172.17.0.3
	*   taints: []
	* ---
	* apiVersion: kubeadm.k8s.io/v1beta2
	* kind: ClusterConfiguration
	* apiServer:
	*   certSANs: ["127.0.0.1", "localhost", "172.17.0.3"]
	*   extraArgs:
	*     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	* controllerManager:
	*   extraArgs:
	*     allocate-node-cidrs: "true"
	*     leader-elect: "false"
	* scheduler:
	*   extraArgs:
	*     leader-elect: "false"
	* certificatesDir: /var/lib/minikube/certs
	* clusterName: mk
	* controlPlaneEndpoint: control-plane.minikube.internal:8443
	* dns:
	*   type: CoreDNS
	* etcd:
	*   local:
	*     dataDir: /var/lib/minikube/etcd
	*     extraArgs:
	*       proxy-refresh-interval: "70000"
	* kubernetesVersion: v1.20.2
	* networking:
	*   dnsDomain: cluster.local
	*   podSubnet: "10.244.0.0/16"
	*   serviceSubnet: 10.96.0.0/12
	* ---
	* apiVersion: kubelet.config.k8s.io/v1beta1
	* kind: KubeletConfiguration
	* authentication:
	*   x509:
	*     clientCAFile: /var/lib/minikube/certs/ca.crt
	* cgroupDriver: cgroupfs
	* clusterDomain: "cluster.local"
	* # disable disk resource management by default
	* imageGCHighThresholdPercent: 100
	* evictionHard:
	*   nodefs.available: "0%"
	*   nodefs.inodesFree: "0%"
	*   imagefs.available: "0%"
	* failSwapOn: false
	* staticPodPath: /etc/kubernetes/manifests
	* ---
	* apiVersion: kubeproxy.config.k8s.io/v1alpha1
	* kind: KubeProxyConfiguration
	* clusterCIDR: "10.244.0.0/16"
	* metricsBindAddress: 0.0.0.0:10249
	* 
	* I0310 21:22:21.033611   13364 kubeadm.go:919] kubelet [Unit]
	* Wants=docker.socket
	* 
	* [Service]
	* ExecStart=
	* ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=custom-weave-20210310211916-6496 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=172.17.0.3
	* 
	* [Install]
	*  config:
	* {KubernetesVersion:v1.20.2 ClusterName:custom-weave-20210310211916-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\weavenet.yaml NodeIP: NodePort:8443 NodeName:}
	* I0310 21:22:21.046453   13364 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
	* I0310 21:22:21.124212   13364 binaries.go:44] Found k8s binaries, skipping transfer
	* I0310 21:22:21.137213   13364 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	* I0310 21:22:21.190296   13364 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	* I0310 21:22:21.461113   13364 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	* I0310 21:22:21.681233   13364 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1858 bytes)
	* I0310 21:22:20.793892   18752 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232: (14.1522317s)
	* I0310 21:22:20.794091   18752 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 from cache
	* I0310 21:22:20.794091   18752 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056
	* I0310 21:22:20.810528   18752 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056
	* I0310 21:22:21.888221   21944 ssh_runner.go:189] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.2154709s)
	* I0310 21:22:21.888221   21944 start.go:129] duration metric: createHost completed in 50.51766s
	* I0310 21:22:21.888221   21944 start.go:80] releasing machines lock for "enable-default-cni-20210310212126-6496", held for 50.5181874s
	* I0310 21:22:21.888899   21944 cli_runner.go:115] Run: docker container inspect -f "" enable-default-cni-20210310212126-6496
	* I0310 21:22:22.498050   21944 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	* I0310 21:22:22.506519   21944 ssh_runner.go:149] Run: systemctl --version
	* I0310 21:22:22.510256   21944 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210310212126-6496
	* I0310 21:22:22.518317   21944 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210310212126-6496
	* I0310 21:22:23.159776   21944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55208 SSHKeyPath:C:\Users\jenkins\.minikube\machines\enable-default-cni-20210310212126-6496\id_rsa Username:docker}
	* I0310 21:22:23.168320   21944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55208 SSHKeyPath:C:\Users\jenkins\.minikube\machines\enable-default-cni-20210310212126-6496\id_rsa Username:docker}
	* I0310 21:22:23.574994   21944 ssh_runner.go:189] Completed: systemctl --version: (1.0679463s)
	* I0310 21:22:23.586592   21944 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	* I0310 21:22:23.897738   21944 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	* I0310 21:22:23.898382   21944 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.3993341s)
	* I0310 21:22:23.990229   21944 cruntime.go:206] skipping containerd shutdown because we are bound to it
	* I0310 21:22:23.999885   21944 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	* I0310 21:22:24.158558   21944 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	* image-endpoint: unix:///var/run/dockershim.sock
	* " | sudo tee /etc/crictl.yaml"
	* I0310 21:22:24.315607   21944 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	* I0310 21:22:24.471772   21944 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	* I0310 21:22:25.563670   21944 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.0916132s)
	* I0310 21:22:25.575063   21944 ssh_runner.go:149] Run: sudo systemctl start docker
	* I0310 21:22:25.745647   21944 ssh_runner.go:149] Run: docker version --format 
	* I0310 21:22:21.968648   13364 ssh_runner.go:149] Run: grep 172.17.0.3	control-plane.minikube.internal$ /etc/hosts
	* I0310 21:22:22.007975   13364 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "172.17.0.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	* I0310 21:22:22.153827   13364 certs.go:52] Setting up C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496 for IP: 172.17.0.3
	* I0310 21:22:22.154726   13364 certs.go:171] skipping minikubeCA CA generation: C:\Users\jenkins\.minikube\ca.key
	* I0310 21:22:22.154944   13364 certs.go:171] skipping proxyClientCA CA generation: C:\Users\jenkins\.minikube\proxy-client-ca.key
	* I0310 21:22:22.155575   13364 certs.go:275] skipping minikube-user signed cert generation: C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\client.key
	* I0310 21:22:22.155575   13364 certs.go:279] generating minikube signed cert: C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\apiserver.key.0f3e66d0
	* I0310 21:22:22.155575   13364 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\apiserver.crt.0f3e66d0 with IP's: [172.17.0.3 10.96.0.1 127.0.0.1 10.0.0.1]
	* I0310 21:22:22.330533   13364 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\apiserver.crt.0f3e66d0 ...
	* I0310 21:22:22.330533   13364 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\apiserver.crt.0f3e66d0: {Name:mk5acae756c7ccf08a5abecb3d42de42a2545e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 21:22:22.343482   13364 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\apiserver.key.0f3e66d0 ...
	* I0310 21:22:22.343482   13364 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\apiserver.key.0f3e66d0: {Name:mke141e6877e8cb0d6dc54cf3f0c258a20436c9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 21:22:22.357477   13364 certs.go:290] copying C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\apiserver.crt.0f3e66d0 -> C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\apiserver.crt
	* I0310 21:22:22.366446   13364 certs.go:294] copying C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\apiserver.key.0f3e66d0 -> C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\apiserver.key
	* I0310 21:22:22.368498   13364 certs.go:279] generating aggregator signed cert: C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\proxy-client.key
	* I0310 21:22:22.368498   13364 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\proxy-client.crt with IP's: []
	* I0310 21:22:22.767034   13364 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\proxy-client.crt ...
	* I0310 21:22:22.768057   13364 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\proxy-client.crt: {Name:mk50e1c1e41ae2b5998e732bb36f4a98e1150878 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 21:22:22.782763   13364 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\proxy-client.key ...
	* I0310 21:22:22.782763   13364 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\proxy-client.key: {Name:mkfae7f926198d4da9748a25691dc664331e7799 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 21:22:22.796137   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992.pem (1338 bytes)
	* W0310 21:22:22.797050   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.797050   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140.pem (1338 bytes)
	* W0310 21:22:22.797050   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.797050   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156.pem (1338 bytes)
	* W0310 21:22:22.797050   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.797050   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056.pem (1338 bytes)
	* W0310 21:22:22.798139   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.798139   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476.pem (1338 bytes)
	* W0310 21:22:22.798139   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.798139   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728.pem (1338 bytes)
	* W0310 21:22:22.799063   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.799063   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984.pem (1338 bytes)
	* W0310 21:22:22.799063   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.799063   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232.pem (1338 bytes)
	* W0310 21:22:22.799063   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.799063   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512.pem (1338 bytes)
	* W0310 21:22:22.800075   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.800075   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056.pem (1338 bytes)
	* W0310 21:22:22.800075   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.800075   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516.pem (1338 bytes)
	* W0310 21:22:22.801035   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.801035   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352.pem (1338 bytes)
	* W0310 21:22:22.801035   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.801035   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920.pem (1338 bytes)
	* W0310 21:22:22.801035   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.802033   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052.pem (1338 bytes)
	* W0310 21:22:22.802033   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.802033   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452.pem (1338 bytes)
	* W0310 21:22:22.803086   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.803086   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588.pem (1338 bytes)
	* W0310 21:22:22.803086   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.803086   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944.pem (1338 bytes)
	* W0310 21:22:22.804029   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.804029   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040.pem (1338 bytes)
	* W0310 21:22:22.804029   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.804029   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172.pem (1338 bytes)
	* W0310 21:22:22.804029   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.804029   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372.pem (1338 bytes)
	* W0310 21:22:22.804029   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.804029   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396.pem (1338 bytes)
	* W0310 21:22:22.804029   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.804029   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700.pem (1338 bytes)
	* W0310 21:22:22.807071   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.807268   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736.pem (1338 bytes)
	* W0310 21:22:22.807882   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.808120   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368.pem (1338 bytes)
	* W0310 21:22:22.808966   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.809365   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492.pem (1338 bytes)
	* W0310 21:22:22.809652   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.809875   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496.pem (1338 bytes)
	* W0310 21:22:22.810107   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.810107   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552.pem (1338 bytes)
	* W0310 21:22:22.810107   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.810721   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692.pem (1338 bytes)
	* W0310 21:22:22.810721   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.810721   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856.pem (1338 bytes)
	* W0310 21:22:22.810721   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.810721   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024.pem (1338 bytes)
	* W0310 21:22:22.810721   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.811887   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160.pem (1338 bytes)
	* W0310 21:22:22.811887   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.811887   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432.pem (1338 bytes)
	* W0310 21:22:22.811887   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.811887   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440.pem (1338 bytes)
	* W0310 21:22:22.812885   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.813023   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452.pem (1338 bytes)
	* W0310 21:22:22.813307   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.813307   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800.pem (1338 bytes)
	* W0310 21:22:22.813650   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.813650   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464.pem (1338 bytes)
	* W0310 21:22:22.813879   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.814119   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748.pem (1338 bytes)
	* W0310 21:22:22.814358   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.814592   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088.pem (1338 bytes)
	* W0310 21:22:22.814895   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.815073   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520.pem (1338 bytes)
	* W0310 21:22:22.815305   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520_empty.pem, impossibly tiny 0 bytes
	* I0310 21:22:22.815305   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca-key.pem (1679 bytes)
	* I0310 21:22:22.815801   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca.pem (1078 bytes)
	* I0310 21:22:22.816162   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\cert.pem (1123 bytes)
	* I0310 21:22:22.816492   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\key.pem (1679 bytes)
	* I0310 21:22:22.837387   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	* I0310 21:22:23.263988   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	* I0310 21:22:23.635324   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	* I0310 21:22:23.818552   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	* I0310 21:22:24.062537   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	* I0310 21:22:24.424135   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	* I0310 21:22:24.640634   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	* I0310 21:22:24.882951   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	* I0310 21:22:25.204449   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6552.pem --> /usr/share/ca-certificates/6552.pem (1338 bytes)
	* I0310 21:22:25.455567   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7432.pem --> /usr/share/ca-certificates/7432.pem (1338 bytes)
	* I0310 21:22:25.706160   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5396.pem --> /usr/share/ca-certificates/5396.pem (1338 bytes)
	* I0310 21:22:25.925787   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\232.pem --> /usr/share/ca-certificates/232.pem (1338 bytes)
	* I0310 21:22:26.305402   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3920.pem --> /usr/share/ca-certificates/3920.pem (1338 bytes)
	* I0310 21:22:26.634666   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	* I0310 21:22:26.835996   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3516.pem --> /usr/share/ca-certificates/3516.pem (1338 bytes)
	* I0310 21:22:24.222144   12868 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force": (1m9.1217604s)
	* I0310 21:22:24.242086   12868 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	* I0310 21:22:24.404126   12868 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format=
	* I0310 21:22:24.956986   12868 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	* I0310 21:22:24.979600   12868 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	* I0310 21:22:25.061114   12868 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	* stdout:
	* 
	* stderr:
	* ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	* I0310 21:22:25.061487   12868 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	* I0310 21:22:26.647856   21944 out.go:150] * Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	* I0310 21:22:26.655168   21944 cli_runner.go:115] Run: docker exec -t enable-default-cni-20210310212126-6496 dig +short host.docker.internal
	* I0310 21:22:29.007024   21944 cli_runner.go:168] Completed: docker exec -t enable-default-cni-20210310212126-6496 dig +short host.docker.internal: (2.3518591s)
	* I0310 21:22:29.007024   21944 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	* I0310 21:22:29.028465   21944 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	* I0310 21:22:29.057105   21944 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	* I0310 21:22:29.146628   21944 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" enable-default-cni-20210310212126-6496
	* I0310 21:22:29.732157   21944 localpath.go:92] copying C:\Users\jenkins\.minikube\client.crt -> C:\Users\jenkins\.minikube\profiles\enable-default-cni-20210310212126-6496\client.crt
	* I0310 21:22:29.737564   21944 localpath.go:117] copying C:\Users\jenkins\.minikube\client.key -> C:\Users\jenkins\.minikube\profiles\enable-default-cni-20210310212126-6496\client.key
	* I0310 21:22:29.741568   21944 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 21:22:29.741568   21944 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 21:22:29.753571   21944 ssh_runner.go:149] Run: docker images --format :
	* I0310 21:22:30.217983   21944 docker.go:423] Got preloaded images: 
	* I0310 21:22:30.219011   21944 docker.go:429] k8s.gcr.io/kube-proxy:v1.20.2 wasn't preloaded
	* I0310 21:22:30.234631   21944 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	* I0310 21:22:30.376094   21944 ssh_runner.go:149] Run: which lz4
	* I0310 21:22:30.428823   21944 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	* I0310 21:22:30.451353   21944 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	* I0310 21:22:30.451549   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (515083977 bytes)
	* I0310 21:22:27.050998   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8464.pem --> /usr/share/ca-certificates/8464.pem (1338 bytes)
	* I0310 21:22:27.223415   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8748.pem --> /usr/share/ca-certificates/8748.pem (1338 bytes)
	* I0310 21:22:27.426543   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5040.pem --> /usr/share/ca-certificates/5040.pem (1338 bytes)
	* I0310 21:22:27.654988   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\10992.pem --> /usr/share/ca-certificates/10992.pem (1338 bytes)
	* I0310 21:22:27.899286   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\2512.pem --> /usr/share/ca-certificates/2512.pem (1338 bytes)
	* I0310 21:22:28.100239   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7440.pem --> /usr/share/ca-certificates/7440.pem (1338 bytes)
	* I0310 21:22:28.282342   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5736.pem --> /usr/share/ca-certificates/5736.pem (1338 bytes)
	* I0310 21:22:28.471855   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7024.pem --> /usr/share/ca-certificates/7024.pem (1338 bytes)
	* I0310 21:22:28.747259   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5700.pem --> /usr/share/ca-certificates/5700.pem (1338 bytes)
	* I0310 21:22:29.089713   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1140.pem --> /usr/share/ca-certificates/1140.pem (1338 bytes)
	* I0310 21:22:29.327793   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5372.pem --> /usr/share/ca-certificates/5372.pem (1338 bytes)
	* I0310 21:22:29.618239   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6496.pem --> /usr/share/ca-certificates/6496.pem (1338 bytes)
	* I0310 21:22:30.004114   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9088.pem --> /usr/share/ca-certificates/9088.pem (1338 bytes)
	* I0310 21:22:30.346341   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7160.pem --> /usr/share/ca-certificates/7160.pem (1338 bytes)
	* I0310 21:22:30.821502   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4452.pem --> /usr/share/ca-certificates/4452.pem (1338 bytes)
	* I0310 21:22:31.192475   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9520.pem --> /usr/share/ca-certificates/9520.pem (1338 bytes)
	* I0310 21:22:31.656161   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1984.pem --> /usr/share/ca-certificates/1984.pem (1338 bytes)
	* I0310 21:22:32.325854   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4052.pem --> /usr/share/ca-certificates/4052.pem (1338 bytes)
	* I0310 21:22:32.911695   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6368.pem --> /usr/share/ca-certificates/6368.pem (1338 bytes)
	* I0310 21:22:33.337187   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6856.pem --> /usr/share/ca-certificates/6856.pem (1338 bytes)
	* I0310 21:22:33.963719   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\800.pem --> /usr/share/ca-certificates/800.pem (1338 bytes)
	* I0310 21:22:34.499332   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5172.pem --> /usr/share/ca-certificates/5172.pem (1338 bytes)
	* I0310 21:22:34.874789   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1476.pem --> /usr/share/ca-certificates/1476.pem (1338 bytes)
	* I0310 21:22:35.422459   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3056.pem --> /usr/share/ca-certificates/3056.pem (1338 bytes)
	* I0310 21:22:36.389758   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\12056.pem --> /usr/share/ca-certificates/12056.pem (1338 bytes)
	* I0310 21:22:36.933562   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6492.pem --> /usr/share/ca-certificates/6492.pem (1338 bytes)
	* I0310 21:22:37.412611   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	* I0310 21:22:38.178541   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1728.pem --> /usr/share/ca-certificates/1728.pem (1338 bytes)
	* I0310 21:22:38.615336   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4944.pem --> /usr/share/ca-certificates/4944.pem (1338 bytes)
	* I0310 21:22:39.168104   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4588.pem --> /usr/share/ca-certificates/4588.pem (1338 bytes)
	* I0310 21:22:39.689450   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6692.pem --> /usr/share/ca-certificates/6692.pem (1338 bytes)
	* I0310 21:22:40.165689   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1156.pem --> /usr/share/ca-certificates/1156.pem (1338 bytes)
	* I0310 21:22:40.695582   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\352.pem --> /usr/share/ca-certificates/352.pem (1338 bytes)
	* I0310 21:22:41.165600   13364 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	* I0310 21:22:41.552859   13364 ssh_runner.go:149] Run: openssl version
	* I0310 21:22:41.777105   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4588.pem && ln -fs /usr/share/ca-certificates/4588.pem /etc/ssl/certs/4588.pem"
	* I0310 21:22:41.926753   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4588.pem
	* I0310 21:22:41.986600   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  3 21:41 /usr/share/ca-certificates/4588.pem
	* I0310 21:22:41.997922   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4588.pem
	* I0310 21:22:42.063601   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4588.pem /etc/ssl/certs/51391683.0"
	* I0310 21:22:42.163300   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6692.pem && ln -fs /usr/share/ca-certificates/6692.pem /etc/ssl/certs/6692.pem"
	* I0310 21:22:42.384078   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6692.pem
	* I0310 21:22:42.446697   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 20:42 /usr/share/ca-certificates/6692.pem
	* I0310 21:22:42.466491   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6692.pem
	* I0310 21:22:42.535119   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6692.pem /etc/ssl/certs/51391683.0"
	* I0310 21:22:42.610161   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1156.pem && ln -fs /usr/share/ca-certificates/1156.pem /etc/ssl/certs/1156.pem"
	* I0310 21:22:42.722635   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1156.pem
	* I0310 21:22:42.760428   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 00:26 /usr/share/ca-certificates/1156.pem
	* I0310 21:22:42.774716   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1156.pem
	* I0310 21:22:42.926802   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1156.pem /etc/ssl/certs/51391683.0"
	* I0310 21:22:43.085994   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/352.pem && ln -fs /usr/share/ca-certificates/352.pem /etc/ssl/certs/352.pem"
	* I0310 21:22:43.269629   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/352.pem
	* I0310 21:22:43.374650   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 12 14:51 /usr/share/ca-certificates/352.pem
	* I0310 21:22:43.384874   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/352.pem
	* I0310 21:22:43.484981   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/352.pem /etc/ssl/certs/51391683.0"
	* I0310 21:22:43.781843   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7452.pem && ln -fs /usr/share/ca-certificates/7452.pem /etc/ssl/certs/7452.pem"
	* I0310 21:22:43.988511   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7452.pem
	* I0310 21:22:44.129041   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 20 00:41 /usr/share/ca-certificates/7452.pem
	* I0310 21:22:44.144334   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7452.pem
	* I0310 21:22:44.218116   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7452.pem /etc/ssl/certs/51391683.0"
	* I0310 21:22:44.343619   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6552.pem && ln -fs /usr/share/ca-certificates/6552.pem /etc/ssl/certs/6552.pem"
	* I0310 21:22:44.536578   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6552.pem
	* I0310 21:22:44.588918   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 19 22:08 /usr/share/ca-certificates/6552.pem
	* I0310 21:22:44.603238   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6552.pem
	* I0310 21:22:44.696050   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6552.pem /etc/ssl/certs/51391683.0"
	* I0310 21:22:44.819524   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7432.pem && ln -fs /usr/share/ca-certificates/7432.pem /etc/ssl/certs/7432.pem"
	* I0310 21:22:45.093945   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7432.pem
	* I0310 21:22:45.148696   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 17:58 /usr/share/ca-certificates/7432.pem
	* I0310 21:22:45.157744   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7432.pem
	* I0310 21:22:45.276300   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7432.pem /etc/ssl/certs/51391683.0"
	* I0310 21:22:45.392769   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5396.pem && ln -fs /usr/share/ca-certificates/5396.pem /etc/ssl/certs/5396.pem"
	* I0310 21:22:45.605992   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5396.pem
	* I0310 21:22:45.661228   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  8 23:38 /usr/share/ca-certificates/5396.pem
	* I0310 21:22:45.680881   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5396.pem
	* I0310 21:22:45.810710   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5396.pem /etc/ssl/certs/51391683.0"
	* I0310 21:22:45.951546   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/232.pem && ln -fs /usr/share/ca-certificates/232.pem /etc/ssl/certs/232.pem"
	* I0310 21:22:46.250690   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/232.pem
	* I0310 21:22:46.301037   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 28 02:13 /usr/share/ca-certificates/232.pem
	* I0310 21:22:46.308977   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/232.pem
	* I0310 21:22:46.392784   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/232.pem /etc/ssl/certs/51391683.0"
	* I0310 21:22:46.656074   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3920.pem && ln -fs /usr/share/ca-certificates/3920.pem /etc/ssl/certs/3920.pem"
	* I0310 21:22:46.760290   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3920.pem
	* I0310 21:22:46.804172   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 22:06 /usr/share/ca-certificates/3920.pem
	* I0310 21:22:46.806402   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3920.pem
	* I0310 21:22:46.945468   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3920.pem /etc/ssl/certs/51391683.0"
	* I0310 21:22:47.146473   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2512.pem && ln -fs /usr/share/ca-certificates/2512.pem /etc/ssl/certs/2512.pem"
	* I0310 21:22:47.361609   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2512.pem
	* I0310 21:22:47.412987   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:32 /usr/share/ca-certificates/2512.pem
	* I0310 21:22:47.425300   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2512.pem
	* I0310 21:22:47.507357   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2512.pem /etc/ssl/certs/51391683.0"
	* I0310 21:22:47.757400   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3516.pem && ln -fs /usr/share/ca-certificates/3516.pem /etc/ssl/certs/3516.pem"
	* I0310 21:22:47.892814   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3516.pem
	* I0310 21:22:47.996452   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 19:10 /usr/share/ca-certificates/3516.pem
	* I0310 21:22:48.009865   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3516.pem
	* I0310 21:22:48.118211   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3516.pem /etc/ssl/certs/51391683.0"
	* I0310 21:22:48.264529   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8464.pem && ln -fs /usr/share/ca-certificates/8464.pem /etc/ssl/certs/8464.pem"
	* I0310 21:22:48.436557   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8464.pem
	* I0310 21:22:48.543497   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:32 /usr/share/ca-certificates/8464.pem
	* I0310 21:22:48.553910   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8464.pem
	* I0310 21:22:48.626779   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8464.pem /etc/ssl/certs/51391683.0"
	* I0310 21:22:48.830596   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8748.pem && ln -fs /usr/share/ca-certificates/8748.pem /etc/ssl/certs/8748.pem"
	* I0310 21:22:49.030465   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8748.pem
	* I0310 21:22:49.078647   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 19:09 /usr/share/ca-certificates/8748.pem
	* I0310 21:22:49.086240   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8748.pem
	* I0310 21:22:49.166763   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8748.pem /etc/ssl/certs/51391683.0"
	* I0310 21:22:49.318888   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5040.pem && ln -fs /usr/share/ca-certificates/5040.pem /etc/ssl/certs/5040.pem"
	* I0310 21:22:49.476594   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5040.pem
	* I0310 21:22:49.515252   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 08:36 /usr/share/ca-certificates/5040.pem
	* I0310 21:22:49.527649   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5040.pem
	* I0310 21:22:49.611517   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5040.pem /etc/ssl/certs/51391683.0"
	* I0310 21:22:49.826922   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10992.pem && ln -fs /usr/share/ca-certificates/10992.pem /etc/ssl/certs/10992.pem"
	* I0310 21:22:50.010711   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/10992.pem
	* I0310 21:22:50.051992   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 21:44 /usr/share/ca-certificates/10992.pem
	* I0310 21:22:50.063484   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10992.pem
	* I0310 21:22:50.179127   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10992.pem /etc/ssl/certs/51391683.0"
	* I0310 21:22:50.400879   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7440.pem && ln -fs /usr/share/ca-certificates/7440.pem /etc/ssl/certs/7440.pem"
	* I0310 21:22:50.688255   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7440.pem
	* I0310 21:22:50.784984   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 13 14:39 /usr/share/ca-certificates/7440.pem
	* I0310 21:22:50.797007   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7440.pem
	* I0310 21:22:50.913124   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7440.pem /etc/ssl/certs/51391683.0"
	* I0310 21:22:51.138628   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5736.pem && ln -fs /usr/share/ca-certificates/5736.pem /etc/ssl/certs/5736.pem"
	* I0310 21:22:51.528521   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5736.pem
	* I0310 21:22:51.579284   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 25 23:18 /usr/share/ca-certificates/5736.pem
	* I0310 21:22:51.584040   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5736.pem
	* I0310 21:22:51.676314   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5736.pem /etc/ssl/certs/51391683.0"
	* I0310 21:22:51.820975   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7024.pem && ln -fs /usr/share/ca-certificates/7024.pem /etc/ssl/certs/7024.pem"
	* I0310 21:22:51.958989   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7024.pem
	* I0310 21:22:52.039677   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 23:11 /usr/share/ca-certificates/7024.pem
	* I0310 21:22:52.050090   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7024.pem
	* I0310 21:22:52.119895   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7024.pem /etc/ssl/certs/51391683.0"
	* I0310 21:22:52.266176   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5700.pem && ln -fs /usr/share/ca-certificates/5700.pem /etc/ssl/certs/5700.pem"
	* I0310 21:22:52.418489   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5700.pem
	* I0310 21:22:52.501201   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  1 19:58 /usr/share/ca-certificates/5700.pem
	* I0310 21:22:52.504573   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5700.pem
	* I0310 21:22:52.574585   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5700.pem /etc/ssl/certs/51391683.0"
	* I0310 21:22:52.740920   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1140.pem && ln -fs /usr/share/ca-certificates/1140.pem /etc/ssl/certs/1140.pem"
	* I0310 21:22:52.954776   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1140.pem
	* I0310 21:22:53.001390   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 02:25 /usr/share/ca-certificates/1140.pem
	* I0310 21:22:53.012934   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1140.pem
	* I0310 21:22:53.111366   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1140.pem /etc/ssl/certs/51391683.0"
	* I0310 21:22:53.236972   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5372.pem && ln -fs /usr/share/ca-certificates/5372.pem /etc/ssl/certs/5372.pem"
	* I0310 21:22:53.378106   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5372.pem
	* I0310 21:22:53.464106   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 23 00:40 /usr/share/ca-certificates/5372.pem
	* I0310 21:22:53.473192   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5372.pem
	* I0310 21:22:53.577350   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5372.pem /etc/ssl/certs/51391683.0"
	* I0310 21:22:53.914941   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6496.pem && ln -fs /usr/share/ca-certificates/6496.pem /etc/ssl/certs/6496.pem"
	* I0310 21:22:54.197456   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6496.pem
	* I0310 21:22:54.248004   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 19:16 /usr/share/ca-certificates/6496.pem
	* I0310 21:22:54.250832   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6496.pem
	* I0310 21:22:54.313756   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6496.pem /etc/ssl/certs/51391683.0"
	* I0310 21:22:54.604905   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9088.pem && ln -fs /usr/share/ca-certificates/9088.pem /etc/ssl/certs/9088.pem"
	* I0310 21:22:54.868486   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9088.pem
	* I0310 21:22:54.948334   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 00:22 /usr/share/ca-certificates/9088.pem
	* I0310 21:22:54.959678   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9088.pem
	* I0310 21:22:55.057177   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9088.pem /etc/ssl/certs/51391683.0"
	* I0310 21:22:55.193122   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7160.pem && ln -fs /usr/share/ca-certificates/7160.pem /etc/ssl/certs/7160.pem"
	* I0310 21:22:55.436221   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7160.pem
	* I0310 21:22:55.492049   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 12 04:51 /usr/share/ca-certificates/7160.pem
	* I0310 21:22:55.511433   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7160.pem
	* I0310 21:22:55.645642   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7160.pem /etc/ssl/certs/51391683.0"
	* I0310 21:22:55.791420   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4452.pem && ln -fs /usr/share/ca-certificates/4452.pem /etc/ssl/certs/4452.pem"
	* I0310 21:22:55.937298   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4452.pem
	* I0310 21:22:56.052840   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 22 23:48 /usr/share/ca-certificates/4452.pem
	* I0310 21:22:56.072405   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4452.pem
	* I0310 21:22:56.137091   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4452.pem /etc/ssl/certs/51391683.0"
	* I0310 21:22:56.294837   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/800.pem && ln -fs /usr/share/ca-certificates/800.pem /etc/ssl/certs/800.pem"
	* I0310 21:22:56.489389   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/800.pem
	* I0310 21:22:56.522837   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 24 01:48 /usr/share/ca-certificates/800.pem
	* I0310 21:22:56.528162   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/800.pem
	* I0310 21:22:56.608312   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/800.pem /etc/ssl/certs/51391683.0"
	* I0310 21:22:56.789980   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9520.pem && ln -fs /usr/share/ca-certificates/9520.pem /etc/ssl/certs/9520.pem"
	* I0310 21:22:56.905970   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9520.pem
	* I0310 21:22:56.987596   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 14:54 /usr/share/ca-certificates/9520.pem
	* I0310 21:22:57.004702   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9520.pem
	* I0310 21:22:57.169006   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9520.pem /etc/ssl/certs/51391683.0"
	* I0310 21:22:57.304936   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1984.pem && ln -fs /usr/share/ca-certificates/1984.pem /etc/ssl/certs/1984.pem"
	* I0310 21:22:57.422484   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1984.pem
	* I0310 21:22:57.487023   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 21:55 /usr/share/ca-certificates/1984.pem
	* I0310 21:22:57.499727   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1984.pem
	* I0310 21:22:57.626170   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1984.pem /etc/ssl/certs/51391683.0"
	* I0310 21:22:57.969142   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4052.pem && ln -fs /usr/share/ca-certificates/4052.pem /etc/ssl/certs/4052.pem"
	* I0310 21:22:58.179902   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4052.pem
	* I0310 21:22:58.226961   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 18:40 /usr/share/ca-certificates/4052.pem
	* I0310 21:22:58.230416   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4052.pem
	* I0310 21:22:58.346611   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4052.pem /etc/ssl/certs/51391683.0"
	* I0310 21:22:58.537128   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6368.pem && ln -fs /usr/share/ca-certificates/6368.pem /etc/ssl/certs/6368.pem"
	* I0310 21:22:58.712574   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6368.pem
	* I0310 21:22:58.750098   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 19:11 /usr/share/ca-certificates/6368.pem
	* I0310 21:22:58.758607   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6368.pem
	* I0310 21:22:58.881306   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6368.pem /etc/ssl/certs/51391683.0"
	* I0310 21:22:58.975615   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6856.pem && ln -fs /usr/share/ca-certificates/6856.pem /etc/ssl/certs/6856.pem"
	* I0310 21:22:59.235982   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6856.pem
	* I0310 21:22:59.370660   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 00:21 /usr/share/ca-certificates/6856.pem
	* I0310 21:22:59.387562   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6856.pem
	* I0310 21:22:59.488020   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6856.pem /etc/ssl/certs/51391683.0"
	* I0310 21:22:59.758887   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1728.pem && ln -fs /usr/share/ca-certificates/1728.pem /etc/ssl/certs/1728.pem"
	* I0310 21:22:59.992730   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1728.pem
	* I0310 21:23:00.115413   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 20:57 /usr/share/ca-certificates/1728.pem
	* I0310 21:23:00.126577   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1728.pem
	* I0310 21:23:00.236295   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0"
	* I0310 21:23:00.311321   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4944.pem && ln -fs /usr/share/ca-certificates/4944.pem /etc/ssl/certs/4944.pem"
	* I0310 21:23:00.425330   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4944.pem
	* I0310 21:23:00.491633   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  9 23:40 /usr/share/ca-certificates/4944.pem
	* I0310 21:23:00.503334   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4944.pem
	* I0310 21:23:00.636170   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4944.pem /etc/ssl/certs/51391683.0"
	* I0310 21:23:00.811090   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5172.pem && ln -fs /usr/share/ca-certificates/5172.pem /etc/ssl/certs/5172.pem"
	* I0310 21:23:00.988716   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5172.pem
	* I0310 21:23:01.078451   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 26 21:25 /usr/share/ca-certificates/5172.pem
	* I0310 21:23:01.089514   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5172.pem
	* I0310 21:23:01.295502   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5172.pem /etc/ssl/certs/51391683.0"
	* I0310 21:23:01.475126   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1476.pem && ln -fs /usr/share/ca-certificates/1476.pem /etc/ssl/certs/1476.pem"
	* I0310 21:23:01.723637   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1476.pem
	* I0310 21:23:01.784616   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:50 /usr/share/ca-certificates/1476.pem
	* I0310 21:23:01.814956   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1476.pem
	* I0310 21:22:59.326311   18752 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056: (38.5158326s)
	* I0310 21:22:59.326311   18752 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 from cache
	* I0310 21:22:59.326311   18752 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452
	* I0310 21:22:59.333220   18752 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452
	* I0310 21:23:01.940157   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1476.pem /etc/ssl/certs/51391683.0"
	* I0310 21:23:02.129724   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3056.pem && ln -fs /usr/share/ca-certificates/3056.pem /etc/ssl/certs/3056.pem"
	* I0310 21:23:02.282340   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3056.pem
	* I0310 21:23:02.331168   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:11 /usr/share/ca-certificates/3056.pem
	* I0310 21:23:02.344425   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3056.pem
	* I0310 21:23:02.482957   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3056.pem /etc/ssl/certs/51391683.0"
	* I0310 21:23:02.713632   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12056.pem && ln -fs /usr/share/ca-certificates/12056.pem /etc/ssl/certs/12056.pem"
	* I0310 21:23:02.849401   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/12056.pem
	* I0310 21:23:02.919448   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  6 07:21 /usr/share/ca-certificates/12056.pem
	* I0310 21:23:02.921042   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12056.pem
	* I0310 21:23:03.026805   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12056.pem /etc/ssl/certs/51391683.0"
	* I0310 21:23:03.176645   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6492.pem && ln -fs /usr/share/ca-certificates/6492.pem /etc/ssl/certs/6492.pem"
	* I0310 21:23:03.554205   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6492.pem
	* I0310 21:23:03.596336   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 01:11 /usr/share/ca-certificates/6492.pem
	* I0310 21:23:03.614560   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6492.pem
	* I0310 21:23:03.827987   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6492.pem /etc/ssl/certs/51391683.0"
	* I0310 21:23:04.022880   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	* I0310 21:23:04.158868   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	* I0310 21:23:04.288603   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Jan  5 23:33 /usr/share/ca-certificates/minikubeCA.pem
	* I0310 21:23:04.306699   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	* I0310 21:23:04.434596   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
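	(The repeated test/ls/openssl/ln sequence above is the standard OpenSSL `c_rehash`-style trust setup: each PEM under /usr/share/ca-certificates gets a symlink named after its subject hash in /etc/ssl/certs, which is the layout `openssl verify -CApath` expects. A minimal sketch of that loop — the function name and parametrized directories are assumptions for illustration, not minikube's code:)

```shell
#!/bin/sh
set -eu
# Sketch of the cert-install loop seen in the log: for each PEM in a
# source directory, compute its OpenSSL subject hash and create the
# <hash>.0 symlink in the trust directory, skipping empty files and
# hashes that are already linked.
rehash_certs() {
  src=$1 dst=$2
  for pem in "$src"/*.pem; do
    [ -s "$pem" ] || continue                     # skip empty files
    hash=$(openssl x509 -hash -noout -in "$pem")  # e.g. 51391683
    test -L "$dst/$hash.0" || ln -fs "$pem" "$dst/$hash.0"
  done
}
# The directories minikube targets (run with sudo on the node):
# rehash_certs /usr/share/ca-certificates /etc/ssl/certs
```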
	* I0310 21:23:04.602229   13364 kubeadm.go:385] StartCluster: {Name:custom-weave-20210310211916-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:custom-weave-20210310211916-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.3 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 21:23:04.612028   13364 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format=
	* I0310 21:23:05.430000   13364 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	* I0310 21:23:05.622643   13364 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	* I0310 21:23:05.818781   13364 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	* I0310 21:23:05.831812   13364 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	* I0310 21:23:06.198354   13364 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	* stdout:
	* 
	* stderr:
	* ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	* I0310 21:23:06.198674   13364 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	* I0310 21:23:22.968815   18752 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452: (23.6351909s)
	* I0310 21:23:22.968815   18752 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 from cache
	* I0310 21:23:22.968815   18752 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156
	* I0310 21:23:22.976991   18752 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156
	* I0310 21:23:32.778429   22316 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force": (1m41.0124161s)
	* I0310 21:23:32.791925   22316 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	* I0310 21:23:33.067911   22316 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format=
	* I0310 21:23:33.988729   22316 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	* I0310 21:23:34.003738   22316 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	* I0310 21:23:35.434316   22316 ssh_runner.go:189] Completed: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: (1.4305796s)
	* I0310 21:23:35.434613   22316 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	* stdout:
	* 
	* stderr:
	* ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	* I0310 21:23:35.434613   22316 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	* I0310 21:23:36.890658   21944 docker.go:388] Took 66.484813 seconds to copy over tarball
	* I0310 21:23:36.902128   21944 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	* I0310 21:23:47.823942   18752 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156: (24.8469828s)
	* I0310 21:23:47.823942   18752 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 from cache
	* I0310 21:23:47.823942   18752 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172
	* I0310 21:23:47.834557   18752 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172
	* I0310 21:24:02.097968   18752 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172: (14.2634299s)
	* I0310 21:24:02.098723   18752 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 from cache
	* I0310 21:24:02.098723   18752 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700
	* I0310 21:24:02.113147   18752 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700
	* I0310 21:24:12.268562   21944 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (35.366479s)
	* I0310 21:24:12.268562   21944 ssh_runner.go:100] rm: /preloaded.tar.lz4
	* I0310 21:24:13.856471   21944 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	* I0310 21:24:13.903528   21944 ssh_runner.go:316] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3125 bytes)
	* I0310 21:24:14.077325   21944 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	* I0310 21:24:14.871928   21944 ssh_runner.go:149] Run: sudo systemctl restart docker
	* I0310 21:24:18.608238   18752 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700: (16.4945819s)
	* I0310 21:24:18.608521   18752 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 from cache
	* I0310 21:24:18.608677   18752 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992
	* I0310 21:24:18.619219   18752 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992
	* I0310 21:24:22.154857   21944 ssh_runner.go:189] Completed: sudo systemctl restart docker: (7.2817724s)
	* I0310 21:24:22.161900   21944 ssh_runner.go:149] Run: docker images --format :
	* I0310 21:24:23.011319   21944 docker.go:423] Got preloaded images: -- stdout --
	* k8s.gcr.io/kube-proxy:v1.20.2
	* k8s.gcr.io/kube-controller-manager:v1.20.2
	* k8s.gcr.io/kube-apiserver:v1.20.2
	* k8s.gcr.io/kube-scheduler:v1.20.2
	* kubernetesui/dashboard:v2.1.0
	* gcr.io/k8s-minikube/storage-provisioner:v4
	* k8s.gcr.io/etcd:3.4.13-0
	* k8s.gcr.io/coredns:1.7.0
	* kubernetesui/metrics-scraper:v1.0.4
	* k8s.gcr.io/pause:3.2
	* 
	* -- /stdout --
	* I0310 21:24:23.012486   21944 cache_images.go:73] Images are preloaded, skipping loading
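	(The "Images are preloaded, skipping loading" decision above compares the runtime's image list against the set a Kubernetes version requires. A sketch of that check under assumed logic — the function and comparison are illustrative, not minikube's exact code; the image set matches the v1.20.2 list in the log:)

```shell
#!/bin/sh
# Sketch: succeed only if every required image appears, exactly, in
# the newline-separated list of present images passed as $1.
images_preloaded() {
  required="k8s.gcr.io/kube-apiserver:v1.20.2
k8s.gcr.io/kube-controller-manager:v1.20.2
k8s.gcr.io/kube-scheduler:v1.20.2
k8s.gcr.io/kube-proxy:v1.20.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
k8s.gcr.io/pause:3.2"
  for img in $required; do
    printf '%s\n' "$1" | grep -Fxq "$img" || return 1
  done
}
# Against a live daemon:
# images_preloaded "$(docker images --format '{{.Repository}}:{{.Tag}}')" \
#   && echo "Images are preloaded, skipping loading"
```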
	* I0310 21:24:23.020916   21944 ssh_runner.go:149] Run: docker info --format 
	* I0310 21:24:24.603313   21944 ssh_runner.go:189] Completed: docker info --format : (1.5824087s)
	* I0310 21:24:24.603737   21944 cni.go:74] Creating CNI manager for "bridge"
	* I0310 21:24:24.603737   21944 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	* I0310 21:24:24.603737   21944 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.7 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-20210310212126-6496 NodeName:enable-default-cni-20210310212126-6496 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	* I0310 21:24:24.604504   21944 kubeadm.go:154] kubeadm config:
	* apiVersion: kubeadm.k8s.io/v1beta2
	* kind: InitConfiguration
	* localAPIEndpoint:
	*   advertiseAddress: 172.17.0.7
	*   bindPort: 8443
	* bootstrapTokens:
	*   - groups:
	*       - system:bootstrappers:kubeadm:default-node-token
	*     ttl: 24h0m0s
	*     usages:
	*       - signing
	*       - authentication
	* nodeRegistration:
	*   criSocket: /var/run/dockershim.sock
	*   name: "enable-default-cni-20210310212126-6496"
	*   kubeletExtraArgs:
	*     node-ip: 172.17.0.7
	*   taints: []
	* ---
	* apiVersion: kubeadm.k8s.io/v1beta2
	* kind: ClusterConfiguration
	* apiServer:
	*   certSANs: ["127.0.0.1", "localhost", "172.17.0.7"]
	*   extraArgs:
	*     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	* controllerManager:
	*   extraArgs:
	*     allocate-node-cidrs: "true"
	*     leader-elect: "false"
	* scheduler:
	*   extraArgs:
	*     leader-elect: "false"
	* certificatesDir: /var/lib/minikube/certs
	* clusterName: mk
	* controlPlaneEndpoint: control-plane.minikube.internal:8443
	* dns:
	*   type: CoreDNS
	* etcd:
	*   local:
	*     dataDir: /var/lib/minikube/etcd
	*     extraArgs:
	*       proxy-refresh-interval: "70000"
	* kubernetesVersion: v1.20.2
	* networking:
	*   dnsDomain: cluster.local
	*   podSubnet: "10.244.0.0/16"
	*   serviceSubnet: 10.96.0.0/12
	* ---
	* apiVersion: kubelet.config.k8s.io/v1beta1
	* kind: KubeletConfiguration
	* authentication:
	*   x509:
	*     clientCAFile: /var/lib/minikube/certs/ca.crt
	* cgroupDriver: cgroupfs
	* clusterDomain: "cluster.local"
	* # disable disk resource management by default
	* imageGCHighThresholdPercent: 100
	* evictionHard:
	*   nodefs.available: "0%"
	*   nodefs.inodesFree: "0%"
	*   imagefs.available: "0%"
	* failSwapOn: false
	* staticPodPath: /etc/kubernetes/manifests
	* ---
	* apiVersion: kubeproxy.config.k8s.io/v1alpha1
	* kind: KubeProxyConfiguration
	* clusterCIDR: "10.244.0.0/16"
	* metricsBindAddress: 0.0.0.0:10249
	* 
	* I0310 21:24:24.604823   21944 kubeadm.go:919] kubelet [Unit]
	* Wants=docker.socket
	* 
	* [Service]
	* ExecStart=
	* ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=enable-default-cni-20210310212126-6496 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=172.17.0.7
	* 
	* [Install]
	*  config:
	* {KubernetesVersion:v1.20.2 ClusterName:enable-default-cni-20210310212126-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:}
	* I0310 21:24:24.617125   21944 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
	* I0310 21:24:24.690124   21944 binaries.go:44] Found k8s binaries, skipping transfer
	* I0310 21:24:24.716668   21944 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	* I0310 21:24:24.779329   21944 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (383 bytes)
	* I0310 21:24:25.008357   21944 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	* I0310 21:24:25.158877   21944 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1864 bytes)
	* I0310 21:24:25.281230   21944 ssh_runner.go:149] Run: grep 172.17.0.7	control-plane.minikube.internal$ /etc/hosts
	* I0310 21:24:25.326793   21944 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "172.17.0.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
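	(The two commands above implement an idempotent hosts-file update: first probe for the exact "ip<TAB>host" entry, and only if it is absent rewrite the file by stripping any stale line for that hostname and appending the current one. A parametrized sketch — the function name and file argument are assumptions for illustration; minikube runs the same shape via sudo against /etc/hosts:)

```shell
#!/bin/sh
# Sketch of the idempotent /etc/hosts update: leave the file alone if
# the entry is already correct, otherwise drop any stale line for the
# hostname and append "ip<TAB>host".
set_host_entry() {  # $1: ip  $2: hostname  $3: hosts file (e.g. /etc/hosts)
  ip=$1 host=$2 file=$3 tab=$(printf '\t')
  # Entry already present and correct? Done.
  grep -q "^$ip$tab$host\$" "$file" 2>/dev/null && return 0
  tmp=$(mktemp)
  { grep -v "$tab$host\$" "$file" 2>/dev/null || true
    printf '%s\t%s\n' "$ip" "$host"; } > "$tmp"
  cat "$tmp" > "$file"
  rm -f "$tmp"
}
# e.g.: set_host_entry 172.17.0.7 control-plane.minikube.internal /etc/hosts
```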
	* I0310 21:24:25.430501   21944 certs.go:52] Setting up C:\Users\jenkins\.minikube\profiles\enable-default-cni-20210310212126-6496 for IP: 172.17.0.7
	* I0310 21:24:25.431773   21944 certs.go:171] skipping minikubeCA CA generation: C:\Users\jenkins\.minikube\ca.key
	* I0310 21:24:25.432231   21944 certs.go:171] skipping proxyClientCA CA generation: C:\Users\jenkins\.minikube\proxy-client-ca.key
	* I0310 21:24:25.433670   21944 certs.go:275] skipping minikube-user signed cert generation: C:\Users\jenkins\.minikube\profiles\enable-default-cni-20210310212126-6496\client.key
	* I0310 21:24:25.433670   21944 certs.go:279] generating minikube signed cert: C:\Users\jenkins\.minikube\profiles\enable-default-cni-20210310212126-6496\apiserver.key.d9a465bc
	* I0310 21:24:25.433670   21944 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\enable-default-cni-20210310212126-6496\apiserver.crt.d9a465bc with IP's: [172.17.0.7 10.96.0.1 127.0.0.1 10.0.0.1]
	* I0310 21:24:25.814793   21944 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\enable-default-cni-20210310212126-6496\apiserver.crt.d9a465bc ...
	* I0310 21:24:25.814793   21944 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\enable-default-cni-20210310212126-6496\apiserver.crt.d9a465bc: {Name:mkd700ccfda9c0abd78ce2d6755691c6bef065ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 21:24:25.834668   21944 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\enable-default-cni-20210310212126-6496\apiserver.key.d9a465bc ...
	* I0310 21:24:25.834668   21944 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\enable-default-cni-20210310212126-6496\apiserver.key.d9a465bc: {Name:mke48f183b8b6984c28b79cf059716fc552bc78c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 21:24:25.855130   21944 certs.go:290] copying C:\Users\jenkins\.minikube\profiles\enable-default-cni-20210310212126-6496\apiserver.crt.d9a465bc -> C:\Users\jenkins\.minikube\profiles\enable-default-cni-20210310212126-6496\apiserver.crt
	* I0310 21:24:25.859476   21944 certs.go:294] copying C:\Users\jenkins\.minikube\profiles\enable-default-cni-20210310212126-6496\apiserver.key.d9a465bc -> C:\Users\jenkins\.minikube\profiles\enable-default-cni-20210310212126-6496\apiserver.key
	* I0310 21:24:25.864997   21944 certs.go:279] generating aggregator signed cert: C:\Users\jenkins\.minikube\profiles\enable-default-cni-20210310212126-6496\proxy-client.key
	* I0310 21:24:25.865291   21944 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\enable-default-cni-20210310212126-6496\proxy-client.crt with IP's: []
	* I0310 21:24:26.464345   21944 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\enable-default-cni-20210310212126-6496\proxy-client.crt ...
	* I0310 21:24:26.464345   21944 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\enable-default-cni-20210310212126-6496\proxy-client.crt: {Name:mkd76e11ef8f7cfcde708699b9d9d71aed4ffb52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 21:24:26.488190   21944 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\enable-default-cni-20210310212126-6496\proxy-client.key ...
	* I0310 21:24:26.488190   21944 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\enable-default-cni-20210310212126-6496\proxy-client.key: {Name:mk3d2d74f44a611e112afc975e3bd39f0baf4a39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 21:24:26.504193   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992.pem (1338 bytes)
	* W0310 21:24:26.504193   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.504193   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140.pem (1338 bytes)
	* W0310 21:24:26.504193   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.504193   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156.pem (1338 bytes)
	* W0310 21:24:26.505194   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.505194   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056.pem (1338 bytes)
	* W0310 21:24:26.505194   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.505194   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476.pem (1338 bytes)
	* W0310 21:24:26.506191   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.506191   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728.pem (1338 bytes)
	* W0310 21:24:26.506191   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.506191   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984.pem (1338 bytes)
	* W0310 21:24:26.506191   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.507199   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232.pem (1338 bytes)
	* W0310 21:24:26.507199   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.507199   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512.pem (1338 bytes)
	* W0310 21:24:26.507199   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.507199   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056.pem (1338 bytes)
	* W0310 21:24:26.508202   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.508202   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516.pem (1338 bytes)
	* W0310 21:24:26.508202   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.508202   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352.pem (1338 bytes)
	* W0310 21:24:26.509194   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.509194   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920.pem (1338 bytes)
	* W0310 21:24:26.509194   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.509194   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052.pem (1338 bytes)
	* W0310 21:24:26.509194   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.510193   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452.pem (1338 bytes)
	* W0310 21:24:26.510193   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.510193   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588.pem (1338 bytes)
	* W0310 21:24:26.510193   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.510193   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944.pem (1338 bytes)
	* W0310 21:24:26.511194   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.511194   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040.pem (1338 bytes)
	* W0310 21:24:26.511194   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.511194   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172.pem (1338 bytes)
	* W0310 21:24:26.512196   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.512196   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372.pem (1338 bytes)
	* W0310 21:24:26.512196   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.512196   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396.pem (1338 bytes)
	* W0310 21:24:26.512196   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.512196   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700.pem (1338 bytes)
	* W0310 21:24:26.513191   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.513191   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736.pem (1338 bytes)
	* W0310 21:24:26.513191   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.513191   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368.pem (1338 bytes)
	* W0310 21:24:26.514204   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.514204   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492.pem (1338 bytes)
	* W0310 21:24:26.514204   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.514204   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496.pem (1338 bytes)
	* W0310 21:24:26.514204   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.514204   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552.pem (1338 bytes)
	* W0310 21:24:26.515191   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.515191   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692.pem (1338 bytes)
	* W0310 21:24:26.515191   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.515191   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856.pem (1338 bytes)
	* W0310 21:24:26.516194   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.516194   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024.pem (1338 bytes)
	* W0310 21:24:26.516194   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.516194   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160.pem (1338 bytes)
	* W0310 21:24:26.516194   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.517191   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432.pem (1338 bytes)
	* W0310 21:24:26.517191   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.517191   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440.pem (1338 bytes)
	* W0310 21:24:26.517191   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.517191   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452.pem (1338 bytes)
	* W0310 21:24:26.518189   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.518189   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800.pem (1338 bytes)
	* W0310 21:24:26.518189   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.518189   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464.pem (1338 bytes)
	* W0310 21:24:26.518189   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.518189   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748.pem (1338 bytes)
	* W0310 21:24:26.519189   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.519189   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088.pem (1338 bytes)
	* W0310 21:24:26.519189   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.519189   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520.pem (1338 bytes)
	* W0310 21:24:26.519189   21944 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520_empty.pem, impossibly tiny 0 bytes
	* I0310 21:24:26.520203   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca-key.pem (1679 bytes)
	* I0310 21:24:26.520203   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca.pem (1078 bytes)
	* I0310 21:24:26.520203   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\cert.pem (1123 bytes)
	* I0310 21:24:26.520203   21944 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\key.pem (1679 bytes)
	* I0310 21:24:26.527196   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\enable-default-cni-20210310212126-6496\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	* I0310 21:24:26.893845   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\enable-default-cni-20210310212126-6496\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	* I0310 21:24:27.145264   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\enable-default-cni-20210310212126-6496\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	* I0310 21:24:27.340039   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\enable-default-cni-20210310212126-6496\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	* I0310 21:24:27.602022   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	* I0310 21:24:27.831812   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	* I0310 21:24:28.030064   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	* I0310 21:24:28.217976   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	* I0310 21:24:28.413863   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1140.pem --> /usr/share/ca-certificates/1140.pem (1338 bytes)
	* I0310 21:24:28.583540   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\12056.pem --> /usr/share/ca-certificates/12056.pem (1338 bytes)
	* I0310 21:24:28.782389   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\10992.pem --> /usr/share/ca-certificates/10992.pem (1338 bytes)
	* I0310 21:24:29.004064   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\232.pem --> /usr/share/ca-certificates/232.pem (1338 bytes)
	* I0310 21:24:29.181054   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7432.pem --> /usr/share/ca-certificates/7432.pem (1338 bytes)
	* I0310 21:24:29.384386   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3516.pem --> /usr/share/ca-certificates/3516.pem (1338 bytes)
	* I0310 21:24:29.670707   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5172.pem --> /usr/share/ca-certificates/5172.pem (1338 bytes)
	* I0310 21:24:29.852492   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8748.pem --> /usr/share/ca-certificates/8748.pem (1338 bytes)
	* I0310 21:24:30.067711   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5736.pem --> /usr/share/ca-certificates/5736.pem (1338 bytes)
	* I0310 21:24:30.337747   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7024.pem --> /usr/share/ca-certificates/7024.pem (1338 bytes)
	* I0310 21:24:30.517659   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	* I0310 21:24:30.932204   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1984.pem --> /usr/share/ca-certificates/1984.pem (1338 bytes)
	* I0310 21:24:31.463480   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4052.pem --> /usr/share/ca-certificates/4052.pem (1338 bytes)
	* I0310 21:24:31.722691   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6552.pem --> /usr/share/ca-certificates/6552.pem (1338 bytes)
	* I0310 21:24:32.148345   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4944.pem --> /usr/share/ca-certificates/4944.pem (1338 bytes)
	* I0310 21:24:32.419728   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6368.pem --> /usr/share/ca-certificates/6368.pem (1338 bytes)
	* I0310 21:24:32.888310   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\352.pem --> /usr/share/ca-certificates/352.pem (1338 bytes)
	* I0310 21:24:33.310514   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8464.pem --> /usr/share/ca-certificates/8464.pem (1338 bytes)
	* I0310 21:24:33.722890   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	* I0310 21:24:34.160769   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1476.pem --> /usr/share/ca-certificates/1476.pem (1338 bytes)
	* I0310 21:24:34.566573   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9088.pem --> /usr/share/ca-certificates/9088.pem (1338 bytes)
	* I0310 21:24:34.768184   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9520.pem --> /usr/share/ca-certificates/9520.pem (1338 bytes)
	* I0310 21:24:35.131567   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\2512.pem --> /usr/share/ca-certificates/2512.pem (1338 bytes)
	* I0310 21:24:35.430415   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6692.pem --> /usr/share/ca-certificates/6692.pem (1338 bytes)
	* I0310 21:24:35.809734   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5040.pem --> /usr/share/ca-certificates/5040.pem (1338 bytes)
	* I0310 21:24:36.130364   21944 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1156.pem --> /usr/share/ca-certificates/1156.pem (1338 bytes)
-- /stdout --
** stderr ** 
	E0310 21:23:49.216697   18936 out.go:340] unable to execute * 2021-03-10 21:22:24.675624 W | etcdserver: request "header:<ID:10490704452246983705 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:1196781dffca5418>" with result "size:41" took too long (146.2571ms) to execute
	: html/template:* 2021-03-10 21:22:24.675624 W | etcdserver: request "header:<ID:10490704452246983705 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:1196781dffca5418>" with result "size:41" took too long (146.2571ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 21:23:49.275736   18936 out.go:340] unable to execute * 2021-03-10 21:22:59.599933 W | etcdserver: request "header:<ID:10490704452246983831 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-embed-certs-20210310205017-6496.166b1816c78d6540\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-embed-certs-20210310205017-6496.166b1816c78d6540\" value_size:826 lease:1267332415392208021 >> failure:<>>" with result "size:16" took too long (103.3495ms) to execute
	: html/template:* 2021-03-10 21:22:59.599933 W | etcdserver: request "header:<ID:10490704452246983831 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-embed-certs-20210310205017-6496.166b1816c78d6540\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-embed-certs-20210310205017-6496.166b1816c78d6540\" value_size:826 lease:1267332415392208021 >> failure:<>>" with result "size:16" took too long (103.3495ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 21:23:56.021880   18936 out.go:340] unable to execute * 2021-03-10 21:11:27.857788 W | etcdserver: request "header:<ID:10490704451955658173 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.49.97\" mod_revision:712 > success:<request_put:<key:\"/registry/masterleases/192.168.49.97\" value_size:68 lease:1267332415100882363 >> failure:<request_range:<key:\"/registry/masterleases/192.168.49.97\" > >>" with result "size:16" took too long (170.5214ms) to execute
	: html/template:* 2021-03-10 21:11:27.857788 W | etcdserver: request "header:<ID:10490704451955658173 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.49.97\" mod_revision:712 > success:<request_put:<key:\"/registry/masterleases/192.168.49.97\" value_size:68 lease:1267332415100882363 >> failure:<request_range:<key:\"/registry/masterleases/192.168.49.97\" > >>" with result "size:16" took too long (170.5214ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 21:23:56.037955   18936 out.go:340] unable to execute * 2021-03-10 21:11:35.537503 W | etcdserver: request "header:<ID:10490704451955658213 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/default/busybox\" mod_revision:669 > success:<request_put:<key:\"/registry/pods/default/busybox\" value_size:2115 >> failure:<request_range:<key:\"/registry/pods/default/busybox\" > >>" with result "size:16" took too long (175.0758ms) to execute
	: html/template:* 2021-03-10 21:11:35.537503 W | etcdserver: request "header:<ID:10490704451955658213 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/default/busybox\" mod_revision:669 > success:<request_put:<key:\"/registry/pods/default/busybox\" value_size:2115 >> failure:<request_range:<key:\"/registry/pods/default/busybox\" > >>" with result "size:16" took too long (175.0758ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 21:23:56.047461   18936 out.go:340] unable to execute * 2021-03-10 21:11:37.419715 W | etcdserver: request "header:<ID:10490704451955658224 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:1196781dee6d0def>" with result "size:41" took too long (128.4123ms) to execute
	: html/template:* 2021-03-10 21:11:37.419715 W | etcdserver: request "header:<ID:10490704451955658224 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:1196781dee6d0def>" with result "size:41" took too long (128.4123ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 21:24:30.872008   18936 out.go:335] unable to parse "* I0310 21:21:27.389877   21944 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 21:21:27.389877   21944 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 21:24:30.878370   18936 out.go:335] unable to parse "* I0310 21:21:28.483849   21944 cli_runner.go:168] Completed: docker system info --format \"{{json .}}\": (1.093974s)\n": template: * I0310 21:21:28.483849   21944 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.093974s)
	:1: function "json" not defined - returning raw string.
	E0310 21:24:30.943709   18936 out.go:335] unable to parse "* I0310 21:21:29.663940   21944 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 21:21:29.663940   21944 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 21:24:30.956035   18936 out.go:335] unable to parse "* I0310 21:21:30.712365   21944 cli_runner.go:168] Completed: docker system info --format \"{{json .}}\": (1.0479583s)\n": template: * I0310 21:21:30.712365   21944 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0479583s)
	:1: function "json" not defined - returning raw string.
	E0310 21:24:31.106947   18936 out.go:340] unable to execute * I0310 21:21:31.412702   21944 cli_runner.go:115] Run: docker network inspect enable-default-cni-20210310212126-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	: template: * I0310 21:21:31.412702   21944 cli_runner.go:115] Run: docker network inspect enable-default-cni-20210310212126-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	:1:294: executing "* I0310 21:21:31.412702   21944 cli_runner.go:115] Run: docker network inspect enable-default-cni-20210310212126-6496 --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\"\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 21:24:31.117089   18936 out.go:340] unable to execute * W0310 21:21:32.067660   21944 cli_runner.go:162] docker network inspect enable-default-cni-20210310212126-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	: template: * W0310 21:21:32.067660   21944 cli_runner.go:162] docker network inspect enable-default-cni-20210310212126-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	:1:289: executing "* W0310 21:21:32.067660   21944 cli_runner.go:162] docker network inspect enable-default-cni-20210310212126-6496 --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\" returned with exit code 1\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 21:24:31.188008   18936 out.go:340] unable to execute * I0310 21:21:32.699405   21944 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	: template: * I0310 21:21:32.699405   21944 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	:1:262: executing "* I0310 21:21:32.699405   21944 cli_runner.go:115] Run: docker network inspect bridge --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\"\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 21:24:31.257280   18936 out.go:335] unable to parse "* I0310 21:21:40.983995   21944 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 21:21:40.983995   21944 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 21:24:31.458478   18936 out.go:335] unable to parse "* I0310 21:21:42.016850   21944 cli_runner.go:168] Completed: docker system info --format \"{{json .}}\": (1.0328561s)\n": template: * I0310 21:21:42.016850   21944 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0328561s)
	:1: function "json" not defined - returning raw string.
	E0310 21:24:31.469917   18936 out.go:335] unable to parse "* I0310 21:21:42.025507   21944 cli_runner.go:115] Run: docker info --format \"'{{json .SecurityOptions}}'\"\n": template: * I0310 21:21:42.025507   21944 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	:1: function "json" not defined - returning raw string.
	E0310 21:24:31.485349   18936 out.go:335] unable to parse "* I0310 21:21:43.043062   21944 cli_runner.go:168] Completed: docker info --format \"'{{json .SecurityOptions}}'\": (1.017556s)\n": template: * I0310 21:21:43.043062   21944 cli_runner.go:168] Completed: docker info --format "'{{json .SecurityOptions}}'": (1.017556s)
	:1: function "json" not defined - returning raw string.
	E0310 21:24:31.875075   18936 out.go:340] unable to execute * I0310 21:21:54.854742   21944 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210310212126-6496
	: template: * I0310 21:21:54.854742   21944 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210310212126-6496
	:1:96: executing "* I0310 21:21:54.854742   21944 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" enable-default-cni-20210310212126-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:24:31.891654   18936 out.go:335] unable to parse "* I0310 21:21:55.484891   21944 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55208 <nil> <nil>}\n": template: * I0310 21:21:55.484891   21944 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55208 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:24:31.917958   18936 out.go:340] unable to execute * I0310 21:21:59.732328   21944 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210310212126-6496
	: template: * I0310 21:21:59.732328   21944 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210310212126-6496
	:1:96: executing "* I0310 21:21:59.732328   21944 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" enable-default-cni-20210310212126-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:24:31.931813   18936 out.go:335] unable to parse "* I0310 21:22:00.356349   21944 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55208 <nil> <nil>}\n": template: * I0310 21:22:00.356349   21944 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55208 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:24:32.049291   18936 out.go:340] unable to execute * I0310 21:22:02.634678   21944 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210310212126-6496
	: template: * I0310 21:22:02.634678   21944 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210310212126-6496
	:1:96: executing "* I0310 21:22:02.634678   21944 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" enable-default-cni-20210310212126-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:24:32.082594   18936 out.go:340] unable to execute * I0310 21:22:04.749524   21944 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210310212126-6496
	: template: * I0310 21:22:04.749524   21944 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210310212126-6496
	:1:96: executing "* I0310 21:22:04.749524   21944 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" enable-default-cni-20210310212126-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:24:32.100140   18936 out.go:335] unable to parse "* I0310 21:22:05.343894   21944 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55208 <nil> <nil>}\n": template: * I0310 21:22:05.343894   21944 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55208 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:24:32.143365   18936 out.go:340] unable to execute * I0310 21:22:06.198918   21944 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210310212126-6496
	: template: * I0310 21:22:06.198918   21944 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210310212126-6496
	:1:96: executing "* I0310 21:22:06.198918   21944 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" enable-default-cni-20210310212126-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:24:32.176123   18936 out.go:335] unable to parse "* I0310 21:22:06.782659   21944 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55208 <nil> <nil>}\n": template: * I0310 21:22:06.782659   21944 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55208 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:24:32.666271   18936 out.go:340] unable to execute * I0310 21:22:07.603886   21944 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210310212126-6496
	: template: * I0310 21:22:07.603886   21944 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210310212126-6496
	:1:96: executing "* I0310 21:22:07.603886   21944 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" enable-default-cni-20210310212126-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:24:32.678261   18936 out.go:335] unable to parse "* I0310 21:22:08.212166   21944 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55208 <nil> <nil>}\n": template: * I0310 21:22:08.212166   21944 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55208 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:24:33.096106   18936 out.go:340] unable to execute * I0310 21:22:18.478050   21944 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210310212126-6496
	: template: * I0310 21:22:18.478050   21944 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210310212126-6496
	:1:96: executing "* I0310 21:22:18.478050   21944 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" enable-default-cni-20210310212126-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:24:33.168661   18936 out.go:340] unable to execute * I0310 21:22:20.693865   21944 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210310212126-6496
	: template: * I0310 21:22:20.693865   21944 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210310212126-6496
	:1:96: executing "* I0310 21:22:20.693865   21944 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" enable-default-cni-20210310212126-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:24:33.595063   18936 out.go:340] unable to execute * I0310 21:22:22.510256   21944 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210310212126-6496
	: template: * I0310 21:22:22.510256   21944 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210310212126-6496
	:1:96: executing "* I0310 21:22:22.510256   21944 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" enable-default-cni-20210310212126-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:24:33.610693   18936 out.go:340] unable to execute * I0310 21:22:22.518317   21944 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210310212126-6496
	: template: * I0310 21:22:22.518317   21944 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20210310212126-6496
	:1:96: executing "* I0310 21:22:22.518317   21944 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" enable-default-cni-20210310212126-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:24:34.253296   18936 out.go:340] unable to execute * I0310 21:22:29.146628   21944 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" enable-default-cni-20210310212126-6496
	: template: * I0310 21:22:29.146628   21944 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" enable-default-cni-20210310212126-6496
	:1:96: executing "* I0310 21:22:29.146628   21944 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"8443/tcp\") 0).HostPort}}'\" enable-default-cni-20210310212126-6496\n" at <index .NetworkSettings.Ports "8443/tcp">: error calling index: index of untyped nil - returning raw string.

                                                
                                                
** /stderr **
helpers_test.go:250: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20210310205017-6496 -n embed-certs-20210310205017-6496

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
helpers_test.go:250: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20210310205017-6496 -n embed-certs-20210310205017-6496: (16.6577745s)
helpers_test.go:257: (dbg) Run:  kubectl --context embed-certs-20210310205017-6496 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:257: (dbg) Done: kubectl --context embed-certs-20210310205017-6496 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: (2.0468172s)
helpers_test.go:263: non-running pods: 
helpers_test.go:265: ======> post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: describe non-running pods <======
helpers_test.go:268: (dbg) Run:  kubectl --context embed-certs-20210310205017-6496 describe pod 
helpers_test.go:268: (dbg) Non-zero exit: kubectl --context embed-certs-20210310205017-6496 describe pod : exit status 1 (200.0379ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:270: kubectl --context embed-certs-20210310205017-6496 describe pod : exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (735.29s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (23.58s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:164: (dbg) Run:  kubectl --context no-preload-20210310204947-6496 create -f testdata\busybox.yaml
start_stop_delete_test.go:164: (dbg) Non-zero exit: kubectl --context no-preload-20210310204947-6496 create -f testdata\busybox.yaml: exit status 1 (227.625ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-20210310204947-6496" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:164: kubectl --context no-preload-20210310204947-6496 create -f testdata\busybox.yaml failed: exit status 1
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:226: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:227: (dbg) Run:  docker inspect no-preload-20210310204947-6496
helpers_test.go:231: (dbg) docker inspect no-preload-20210310204947-6496:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ba4c8f2ea9992daf32a90064d8be793a04b4c15ede3861cbe7cc5b1d10bd3966",
	        "Created": "2021-03-10T20:50:09.5134495Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 226707,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-03-10T20:50:18.2832035Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a776c544501ab7f8d55c0f9d8df39bc284df5e744ef1ab4fa59bbd753c98d5f6",
	        "ResolvConfPath": "/var/lib/docker/containers/ba4c8f2ea9992daf32a90064d8be793a04b4c15ede3861cbe7cc5b1d10bd3966/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ba4c8f2ea9992daf32a90064d8be793a04b4c15ede3861cbe7cc5b1d10bd3966/hostname",
	        "HostsPath": "/var/lib/docker/containers/ba4c8f2ea9992daf32a90064d8be793a04b4c15ede3861cbe7cc5b1d10bd3966/hosts",
	        "LogPath": "/var/lib/docker/containers/ba4c8f2ea9992daf32a90064d8be793a04b4c15ede3861cbe7cc5b1d10bd3966/ba4c8f2ea9992daf32a90064d8be793a04b4c15ede3861cbe7cc5b1d10bd3966-json.log",
	        "Name": "/no-preload-20210310204947-6496",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20210310204947-6496:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/db9786265e9f068e04d70e18087b62c096e075ee52427c1e4a3908dea5608887-init/diff:/var/lib/docker/overlay2/e5312d15a8a915e93a04215cc746ab0f3ee777399eec34d39c62e1159ded6475/diff:/var/lib/docker/overlay2/7a0a882052451678339ecb9d7440ec6d5af2189761df37cbde268f01c77df460/diff:/var/lib/docker/overlay2/746bde091f748e661ba6c95c88447941058b3ed556ae8093cbbb4e946b8cc8fc/diff:/var/lib/docker/overlay2/fc816087ea38e2594891ac87d2dada91401ed5005ffde006568049c9be7a9cb3/diff:/var/lib/docker/overlay2/c938fe5d14accfe7fb1cfd038f5f0c36e7c4e36df3f270be6928a759dfc0318c/diff:/var/lib/docker/overlay2/79f1d61f3af3e525b3f9a25c7bdaddec8275b7d0abab23c045e084119ae755dc/diff:/var/lib/docker/overlay2/90cf1530750ebebdcb04a136dc1eff0ae9c5f7d0f826d2942e3bc67cc0d42206/diff:/var/lib/docker/overlay2/c4edb0a62bb52174a4a17530aa58139aaca1b2f67bef36b9ac59af93c93e2c94/diff:/var/lib/docker/overlay2/684c18bb05910b09352e8708238b2fa6512de91e323e59bd78eacb12954baeb1/diff:/var/lib/docker/overlay2/2e3b94
4d36fea0e106f5f8885b981d040914c224a656b6480e4ccf00c624527c/diff:/var/lib/docker/overlay2/87489103e2d5dffa2fc32cae802418fe57adcbe87d86e2828e51be8620ef72d6/diff:/var/lib/docker/overlay2/1ceb557df677600f55a42e739f85c2aaa6ce2a1311fb93d5cc655d3e5c25ad62/diff:/var/lib/docker/overlay2/a49e38adf04466e822fa1b2fef7a5de41bb43b706f64d6d440f7e9f95f86634d/diff:/var/lib/docker/overlay2/9dc51530b3628075c33d53a96d8db831d206f65190ca62a4730d525c9d2433bc/diff:/var/lib/docker/overlay2/53f43d443cc953bcbb3b9fe305be45d40de2233dd6a1e94a6e0120c143df01b9/diff:/var/lib/docker/overlay2/5449e801ebacbe613538db4de658f2419f742a0327f7c0f5079ee525f64c2f45/diff:/var/lib/docker/overlay2/ea06a62429770eade2e9df8b04495201586fe6a867b2d3f827db06031f237171/diff:/var/lib/docker/overlay2/d6c5e6e0bb04112b08a45e5a42f364eaa6a1d4d0384b8833c6f268a32b51f9fd/diff:/var/lib/docker/overlay2/6ccb699e5aa7954f17536e35273361a64a06f22e3951b5adad1acd3645302d27/diff:/var/lib/docker/overlay2/004fc7885cd3ddf4f532a6047a393b87947996f738f36ee3e048130b82676264/diff:/var/lib/d
ocker/overlay2/28ff0102771d8fb3c2957c482cc5dde1a6d041144f1be78d385fd4a305b89048/diff:/var/lib/docker/overlay2/7cc8b00a8717390d3cf217bc23ebf0a54386c4dda751f8efe67a113c5f724000/diff:/var/lib/docker/overlay2/4f20f5888dc26fc4b177fad4ffbb2c9e5094bbe36dffc15530b13ee98b5ee0de/diff:/var/lib/docker/overlay2/5070ba0e8a3df0c0cb4c34bb528c4e1aa4cf13d689f2bbbb0a4033d021c3be55/diff:/var/lib/docker/overlay2/85fb29f9d3603bf7f2eb144a096a9fa432e57ecf575b0c26cc8e06a79e791202/diff:/var/lib/docker/overlay2/27786674974d44dc21a18719c8c334da44a78f825923f5e86233f5acb0fb9a6b/diff:/var/lib/docker/overlay2/6161fd89f27732c46f547c0e38ff9f16ab0dded81cef66938df3935f21afad40/diff:/var/lib/docker/overlay2/222f85b8439a41d62b9939a4060303543930a89e81bc2fbb3f5211b3bcc73501/diff:/var/lib/docker/overlay2/0ab769dc143c949b9367c48721351c8571d87199847b23acedce0134a8cf4d0d/diff:/var/lib/docker/overlay2/a3d1b5f3c1ccf55415e08e306d69739f2d6a4346a4b20ff9273ac1417c146a72/diff:/var/lib/docker/overlay2/a6111408a4deb25c6259bdabf49b0b77a4ae38a1c9c76c9dd238ab00e43
77edd/diff:/var/lib/docker/overlay2/beb22abaee6a1cc2ce6935aa31a8ea6aa14ba428d35b5c2423d1d9e4b5b1e88e/diff:/var/lib/docker/overlay2/0c228b1d0cd4cc1d3df10a7397af2a91846bfbad745d623822dc200ff7ddd4f7/diff:/var/lib/docker/overlay2/3c7f265116e8b14ac4c171583da8e68ad42c63e9faa735c10d5b01c2889c03d8/diff:/var/lib/docker/overlay2/85c9b77072d5575bcf4bc334d0e85cccec247e2ff21bc8a181ee7c1411481a92/diff:/var/lib/docker/overlay2/1b4797f63f1bf0acad8cd18715c737239cbce844810baf1378b65224c01957ff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/db9786265e9f068e04d70e18087b62c096e075ee52427c1e4a3908dea5608887/merged",
	                "UpperDir": "/var/lib/docker/overlay2/db9786265e9f068e04d70e18087b62c096e075ee52427c1e4a3908dea5608887/diff",
	                "WorkDir": "/var/lib/docker/overlay2/db9786265e9f068e04d70e18087b62c096e075ee52427c1e4a3908dea5608887/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20210310204947-6496",
	                "Source": "/var/lib/docker/volumes/no-preload-20210310204947-6496/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20210310204947-6496",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20210310204947-6496",
	                "name.minikube.sigs.k8s.io": "no-preload-20210310204947-6496",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c6d01674efa3eecf9681de23d3865d233efc3221239cb41b2b4e0f3ba80281f5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55143"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55142"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55139"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55141"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55140"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c6d01674efa3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "8ddbefa5a1b53f48449ea00eb7709ab032429b796d5246894e3cd34e9259cc89",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.7",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:07",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "08a9fabe5ecafdd8ee96dd0a14d1906037e82623ac7365c62552213da5f59cf7",
	                    "EndpointID": "8ddbefa5a1b53f48449ea00eb7709ab032429b796d5246894e3cd34e9259cc89",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.7",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:07",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20210310204947-6496 -n no-preload-20210310204947-6496

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
helpers_test.go:235: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20210310204947-6496 -n no-preload-20210310204947-6496: exit status 4 (12.6445819s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0310 21:15:43.587122   14076 status.go:396] kubeconfig endpoint: extract IP: "no-preload-20210310204947-6496" does not appear in C:\Users\jenkins/.kube/config

                                                
                                                
** /stderr **
helpers_test.go:235: status error: exit status 4 (may be ok)
helpers_test.go:237: "no-preload-20210310204947-6496" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:226: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:227: (dbg) Run:  docker inspect no-preload-20210310204947-6496
helpers_test.go:231: (dbg) docker inspect no-preload-20210310204947-6496:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ba4c8f2ea9992daf32a90064d8be793a04b4c15ede3861cbe7cc5b1d10bd3966",
	        "Created": "2021-03-10T20:50:09.5134495Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 226707,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-03-10T20:50:18.2832035Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a776c544501ab7f8d55c0f9d8df39bc284df5e744ef1ab4fa59bbd753c98d5f6",
	        "ResolvConfPath": "/var/lib/docker/containers/ba4c8f2ea9992daf32a90064d8be793a04b4c15ede3861cbe7cc5b1d10bd3966/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ba4c8f2ea9992daf32a90064d8be793a04b4c15ede3861cbe7cc5b1d10bd3966/hostname",
	        "HostsPath": "/var/lib/docker/containers/ba4c8f2ea9992daf32a90064d8be793a04b4c15ede3861cbe7cc5b1d10bd3966/hosts",
	        "LogPath": "/var/lib/docker/containers/ba4c8f2ea9992daf32a90064d8be793a04b4c15ede3861cbe7cc5b1d10bd3966/ba4c8f2ea9992daf32a90064d8be793a04b4c15ede3861cbe7cc5b1d10bd3966-json.log",
	        "Name": "/no-preload-20210310204947-6496",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20210310204947-6496:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/db9786265e9f068e04d70e18087b62c096e075ee52427c1e4a3908dea5608887-init/diff:/var/lib/docker/overlay2/e5312d15a8a915e93a04215cc746ab0f3ee777399eec34d39c62e1159ded6475/diff:/var/lib/docker/overlay2/7a0a882052451678339ecb9d7440ec6d5af2189761df37cbde268f01c77df460/diff:/var/lib/docker/overlay2/746bde091f748e661ba6c95c88447941058b3ed556ae8093cbbb4e946b8cc8fc/diff:/var/lib/docker/overlay2/fc816087ea38e2594891ac87d2dada91401ed5005ffde006568049c9be7a9cb3/diff:/var/lib/docker/overlay2/c938fe5d14accfe7fb1cfd038f5f0c36e7c4e36df3f270be6928a759dfc0318c/diff:/var/lib/docker/overlay2/79f1d61f3af3e525b3f9a25c7bdaddec8275b7d0abab23c045e084119ae755dc/diff:/var/lib/docker/overlay2/90cf1530750ebebdcb04a136dc1eff0ae9c5f7d0f826d2942e3bc67cc0d42206/diff:/var/lib/docker/overlay2/c4edb0a62bb52174a4a17530aa58139aaca1b2f67bef36b9ac59af93c93e2c94/diff:/var/lib/docker/overlay2/684c18bb05910b09352e8708238b2fa6512de91e323e59bd78eacb12954baeb1/diff:/var/lib/docker/overlay2/2e3b94
4d36fea0e106f5f8885b981d040914c224a656b6480e4ccf00c624527c/diff:/var/lib/docker/overlay2/87489103e2d5dffa2fc32cae802418fe57adcbe87d86e2828e51be8620ef72d6/diff:/var/lib/docker/overlay2/1ceb557df677600f55a42e739f85c2aaa6ce2a1311fb93d5cc655d3e5c25ad62/diff:/var/lib/docker/overlay2/a49e38adf04466e822fa1b2fef7a5de41bb43b706f64d6d440f7e9f95f86634d/diff:/var/lib/docker/overlay2/9dc51530b3628075c33d53a96d8db831d206f65190ca62a4730d525c9d2433bc/diff:/var/lib/docker/overlay2/53f43d443cc953bcbb3b9fe305be45d40de2233dd6a1e94a6e0120c143df01b9/diff:/var/lib/docker/overlay2/5449e801ebacbe613538db4de658f2419f742a0327f7c0f5079ee525f64c2f45/diff:/var/lib/docker/overlay2/ea06a62429770eade2e9df8b04495201586fe6a867b2d3f827db06031f237171/diff:/var/lib/docker/overlay2/d6c5e6e0bb04112b08a45e5a42f364eaa6a1d4d0384b8833c6f268a32b51f9fd/diff:/var/lib/docker/overlay2/6ccb699e5aa7954f17536e35273361a64a06f22e3951b5adad1acd3645302d27/diff:/var/lib/docker/overlay2/004fc7885cd3ddf4f532a6047a393b87947996f738f36ee3e048130b82676264/diff:/var/lib/d
ocker/overlay2/28ff0102771d8fb3c2957c482cc5dde1a6d041144f1be78d385fd4a305b89048/diff:/var/lib/docker/overlay2/7cc8b00a8717390d3cf217bc23ebf0a54386c4dda751f8efe67a113c5f724000/diff:/var/lib/docker/overlay2/4f20f5888dc26fc4b177fad4ffbb2c9e5094bbe36dffc15530b13ee98b5ee0de/diff:/var/lib/docker/overlay2/5070ba0e8a3df0c0cb4c34bb528c4e1aa4cf13d689f2bbbb0a4033d021c3be55/diff:/var/lib/docker/overlay2/85fb29f9d3603bf7f2eb144a096a9fa432e57ecf575b0c26cc8e06a79e791202/diff:/var/lib/docker/overlay2/27786674974d44dc21a18719c8c334da44a78f825923f5e86233f5acb0fb9a6b/diff:/var/lib/docker/overlay2/6161fd89f27732c46f547c0e38ff9f16ab0dded81cef66938df3935f21afad40/diff:/var/lib/docker/overlay2/222f85b8439a41d62b9939a4060303543930a89e81bc2fbb3f5211b3bcc73501/diff:/var/lib/docker/overlay2/0ab769dc143c949b9367c48721351c8571d87199847b23acedce0134a8cf4d0d/diff:/var/lib/docker/overlay2/a3d1b5f3c1ccf55415e08e306d69739f2d6a4346a4b20ff9273ac1417c146a72/diff:/var/lib/docker/overlay2/a6111408a4deb25c6259bdabf49b0b77a4ae38a1c9c76c9dd238ab00e43
77edd/diff:/var/lib/docker/overlay2/beb22abaee6a1cc2ce6935aa31a8ea6aa14ba428d35b5c2423d1d9e4b5b1e88e/diff:/var/lib/docker/overlay2/0c228b1d0cd4cc1d3df10a7397af2a91846bfbad745d623822dc200ff7ddd4f7/diff:/var/lib/docker/overlay2/3c7f265116e8b14ac4c171583da8e68ad42c63e9faa735c10d5b01c2889c03d8/diff:/var/lib/docker/overlay2/85c9b77072d5575bcf4bc334d0e85cccec247e2ff21bc8a181ee7c1411481a92/diff:/var/lib/docker/overlay2/1b4797f63f1bf0acad8cd18715c737239cbce844810baf1378b65224c01957ff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/db9786265e9f068e04d70e18087b62c096e075ee52427c1e4a3908dea5608887/merged",
	                "UpperDir": "/var/lib/docker/overlay2/db9786265e9f068e04d70e18087b62c096e075ee52427c1e4a3908dea5608887/diff",
	                "WorkDir": "/var/lib/docker/overlay2/db9786265e9f068e04d70e18087b62c096e075ee52427c1e4a3908dea5608887/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20210310204947-6496",
	                "Source": "/var/lib/docker/volumes/no-preload-20210310204947-6496/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20210310204947-6496",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20210310204947-6496",
	                "name.minikube.sigs.k8s.io": "no-preload-20210310204947-6496",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c6d01674efa3eecf9681de23d3865d233efc3221239cb41b2b4e0f3ba80281f5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55143"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55142"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55139"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55141"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55140"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c6d01674efa3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "8ddbefa5a1b53f48449ea00eb7709ab032429b796d5246894e3cd34e9259cc89",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.7",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:07",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "08a9fabe5ecafdd8ee96dd0a14d1906037e82623ac7365c62552213da5f59cf7",
	                    "EndpointID": "8ddbefa5a1b53f48449ea00eb7709ab032429b796d5246894e3cd34e9259cc89",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.7",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:07",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
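The `docker inspect` dump above ends with the container's published ports: minikube binds the guest's service ports (22, 2376, 5000, 8443, 32443) to dynamically assigned host ports on 127.0.0.1. A minimal sketch of pulling those mappings out of inspect JSON; the embedded sample is a trimmed, hypothetical stand-in for the full output above:

```python
import json

# Trimmed stand-in for the NetworkSettings.Ports section of the
# `docker inspect` output above; the real dump has more ports and metadata.
inspect_json = """
[
  {
    "NetworkSettings": {
      "Ports": {
        "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "55143"}],
        "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "55140"}]
      }
    }
  }
]
"""

def port_map(raw: str) -> dict:
    """Map container port (e.g. '8443/tcp') -> 'host_ip:host_port'."""
    container = json.loads(raw)[0]
    ports = container["NetworkSettings"]["Ports"]
    return {
        cport: f"{b[0]['HostIp']}:{b[0]['HostPort']}"
        for cport, b in ports.items()
        if b  # unbound ports appear as null/empty lists
    }

print(port_map(inspect_json))
# → {'22/tcp': '127.0.0.1:55143', '8443/tcp': '127.0.0.1:55140'}
```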
helpers_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20210310204947-6496 -n no-preload-20210310204947-6496
helpers_test.go:235: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20210310204947-6496 -n no-preload-20210310204947-6496: exit status 4 (9.2461852s)
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	E0310 21:15:55.716770   10372 status.go:396] kubeconfig endpoint: extract IP: "no-preload-20210310204947-6496" does not appear in C:\Users\jenkins/.kube/config
** /stderr **
helpers_test.go:235: status error: exit status 4 (may be ok)
helpers_test.go:237: "no-preload-20210310204947-6496" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (23.58s)
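The DeployApp failure above reduces to a kubeconfig lookup: the status check tries to extract the API endpoint for the profile from `C:\Users\jenkins/.kube/config` and errors because no cluster entry with that name exists (kubectl is still pointing at a stale `minikube-vm` context). A rough illustrative sketch of that failure mode; this is not minikube's actual status.go code, and the sample kubeconfig contents are hypothetical:

```python
# Hedged sketch of the lookup that fails above: search kubeconfig's
# cluster list for an entry named after the profile, error if absent.
# Illustrative only; entry names/servers below are made up.
kubeconfig = {
    "clusters": [
        # Stale entry for another profile; the profile under test
        # ("no-preload-20210310204947-6496") has no entry at all.
        {"name": "minikube-vm", "cluster": {"server": "https://127.0.0.1:55500"}},
    ]
}

def extract_endpoint(cfg: dict, profile: str) -> str:
    """Return the API server URL for `profile`, or raise if it is missing."""
    for entry in cfg["clusters"]:
        if entry["name"] == profile:
            return entry["cluster"]["server"]
    raise KeyError(f'"{profile}" does not appear in kubeconfig')

try:
    extract_endpoint(kubeconfig, "no-preload-20210310204947-6496")
except KeyError as e:
    print(e)  # prints the lookup error, mirroring the log line above
```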
TestNetworkPlugins/group/cilium/Start (1162.47s)
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p cilium-20210310211546-6496 --memory=1800 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:80: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cilium-20210310211546-6496 --memory=1800 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker: exit status 80 (19m21.1300472s)
-- stdout --
	* [cilium-20210310211546-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - MINIKUBE_LOCATION=10722
	* Using the docker driver based on user configuration
	
	
	* Starting control plane node cilium-20210310211546-6496 in cluster cilium-20210310211546-6496
	* Creating docker container (CPUs=2, Memory=1800MB) ...
	* Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Cilium (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v4
	* Enabled addons: storage-provisioner
	
	
-- /stdout --
** stderr ** 
	I0310 21:15:46.625222    7648 out.go:239] Setting OutFile to fd 2740 ...
	I0310 21:15:46.625222    7648 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 21:15:46.625222    7648 out.go:252] Setting ErrFile to fd 2732...
	I0310 21:15:46.625222    7648 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 21:15:46.641160    7648 out.go:246] Setting JSON to false
	I0310 21:15:46.650417    7648 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":36412,"bootTime":1615374534,"procs":117,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0310 21:15:46.650830    7648 start.go:116] gopshost.Virtualization returned error: not implemented yet
	I0310 21:15:46.663440    7648 out.go:129] * [cilium-20210310211546-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0310 21:15:46.666453    7648 out.go:129]   - MINIKUBE_LOCATION=10722
	I0310 21:15:46.675691    7648 driver.go:323] Setting default libvirt URI to qemu:///system
	I0310 21:15:47.333910    7648 docker.go:119] docker version: linux-20.10.2
	I0310 21:15:47.341103    7648 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 21:15:48.508885    7648 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.1677833s)
	I0310 21:15:48.511431    7648 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:76 SystemTime:2021-03-10 21:15:47.8972391 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 21:15:48.514881    7648 out.go:129] * Using the docker driver based on user configuration
	I0310 21:15:48.515887    7648 start.go:276] selected driver: docker
	I0310 21:15:48.515887    7648 start.go:718] validating driver "docker" against <nil>
	I0310 21:15:48.515887    7648 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0310 21:15:50.557661    7648 out.go:129] 
	W0310 21:15:50.558286    7648 out.go:191] X Docker Desktop only has 20001MiB available, you may encounter application deployment failures.
	X Docker Desktop only has 20001MiB available, you may encounter application deployment failures.
	W0310 21:15:50.559140    7648 out.go:191] * Suggestion: 
	
	    1. Open the "Docker Desktop" menu by clicking the Docker icon in the system tray
	    2. Click "Settings"
	    3. Click "Resources"
	    4. Increase "Memory" slider bar to 2.25 GB or higher
	    5. Click "Apply & Restart"
	* Suggestion: 
	
	    1. Open the "Docker Desktop" menu by clicking the Docker icon in the system tray
	    2. Click "Settings"
	    3. Click "Resources"
	    4. Increase "Memory" slider bar to 2.25 GB or higher
	    5. Click "Apply & Restart"
	W0310 21:15:50.559324    7648 out.go:191] * Documentation: https://docs.docker.com/docker-for-windows/#resources
	* Documentation: https://docs.docker.com/docker-for-windows/#resources
	I0310 21:15:50.570611    7648 out.go:129] 
	I0310 21:15:50.588602    7648 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 21:15:51.571530    7648 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:7 ContainersRunning:7 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:89 OomKillDisable:true NGoroutines:70 SystemTime:2021-03-10 21:15:51.1257103 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 21:15:51.572771    7648 start_flags.go:253] no existing cluster config was found, will generate one from the flags 
	I0310 21:15:51.574196    7648 start_flags.go:717] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0310 21:15:51.574684    7648 cni.go:74] Creating CNI manager for "cilium"
	I0310 21:15:51.574841    7648 start_flags.go:393] Found "Cilium" CNI - setting NetworkPlugin=cni
	I0310 21:15:51.574841    7648 start_flags.go:398] config:
	{Name:cilium-20210310211546-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:cilium-20210310211546-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 21:15:51.583701    7648 out.go:129] * Starting control plane node cilium-20210310211546-6496 in cluster cilium-20210310211546-6496
	I0310 21:15:52.299533    7648 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	I0310 21:15:52.299875    7648 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	I0310 21:15:52.299875    7648 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	I0310 21:15:52.300070    7648 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	I0310 21:15:52.300529    7648 cache.go:54] Caching tarball of preloaded images
	I0310 21:15:52.300529    7648 preload.go:131] Found C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0310 21:15:52.300951    7648 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on docker
	I0310 21:15:52.302036    7648 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\config.json ...
	I0310 21:15:52.302694    7648 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\config.json: {Name:mk7788245825cb488c6ba4c6ca09034556642927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:15:52.319103    7648 cache.go:185] Successfully downloaded all kic artifacts
	I0310 21:15:52.320179    7648 start.go:313] acquiring machines lock for cilium-20210310211546-6496: {Name:mkb6ffd16b01f6e80495af680df1eea89bbdc0a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:15:52.326260    7648 start.go:317] acquired machines lock for "cilium-20210310211546-6496" in 0s
	I0310 21:15:52.326260    7648 start.go:89] Provisioning new machine with config: &{Name:cilium-20210310211546-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:cilium-20210310211546-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
	I0310 21:15:52.326999    7648 start.go:126] createHost starting for "" (driver="docker")
	I0310 21:15:52.330845    7648 out.go:150] * Creating docker container (CPUs=2, Memory=1800MB) ...
	I0310 21:15:52.331842    7648 start.go:160] libmachine.API.Create for "cilium-20210310211546-6496" (driver="docker")
	I0310 21:15:52.331842    7648 client.go:168] LocalClient.Create starting
	I0310 21:15:52.332589    7648 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\ca.pem
	I0310 21:15:52.332589    7648 main.go:121] libmachine: Decoding PEM data...
	I0310 21:15:52.332589    7648 main.go:121] libmachine: Parsing certificate...
	I0310 21:15:52.332589    7648 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\cert.pem
	I0310 21:15:52.333580    7648 main.go:121] libmachine: Decoding PEM data...
	I0310 21:15:52.333580    7648 main.go:121] libmachine: Parsing certificate...
	I0310 21:15:52.350584    7648 cli_runner.go:115] Run: docker network inspect cilium-20210310211546-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0310 21:15:52.929701    7648 cli_runner.go:162] docker network inspect cilium-20210310211546-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0310 21:15:52.939861    7648 network_create.go:240] running [docker network inspect cilium-20210310211546-6496] to gather additional debugging logs...
	I0310 21:15:52.940474    7648 cli_runner.go:115] Run: docker network inspect cilium-20210310211546-6496
	W0310 21:15:53.557645    7648 cli_runner.go:162] docker network inspect cilium-20210310211546-6496 returned with exit code 1
	I0310 21:15:53.565285    7648 network_create.go:243] error running [docker network inspect cilium-20210310211546-6496]: docker network inspect cilium-20210310211546-6496: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20210310211546-6496
	I0310 21:15:53.565285    7648 network_create.go:245] output of [docker network inspect cilium-20210310211546-6496]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20210310211546-6496
	
	** /stderr **
	I0310 21:15:53.573050    7648 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0310 21:15:54.245070    7648 network.go:193] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0310 21:15:54.245214    7648 network_create.go:91] attempt to create network 192.168.49.0/24 with subnet: cilium-20210310211546-6496 and gateway 192.168.49.1 and MTU of 1500 ...
	I0310 21:15:54.247105    7648 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20210310211546-6496
	W0310 21:15:54.844584    7648 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20210310211546-6496 returned with exit code 1
	W0310 21:15:54.845254    7648 out.go:191] ! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts
	! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts
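The failed `docker network create` above is assembled from the subnet minikube picked (192.168.49.0/24, gateway 192.168.49.1, MTU 1500). The flag set can be reproduced with a small dry-run helper; the function name and dry-run behavior are illustrative only, not part of minikube.

```shell
# Build (but do not run) the `docker network create` invocation seen in the
# log above. Helper name `mk_network_create_cmd` is made up for illustration.
mk_network_create_cmd() {
  name=$1 subnet=$2 gateway=$3 mtu=$4
  printf 'docker network create --driver=bridge --subnet=%s --gateway=%s -o --ip-masq -o --icc -o com.docker.network.driver.mtu=%s --label=created_by.minikube.sigs.k8s.io=true %s\n' \
    "$subnet" "$gateway" "$mtu" "$name"
}

mk_network_create_cmd cilium-demo 192.168.49.0/24 192.168.49.1 1500
```

Printing the command first makes it easy to compare against the `cli_runner` line in the log before actually creating anything.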
	I0310 21:15:54.876036    7648 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0310 21:15:55.520568    7648 cli_runner.go:115] Run: docker volume create cilium-20210310211546-6496 --label name.minikube.sigs.k8s.io=cilium-20210310211546-6496 --label created_by.minikube.sigs.k8s.io=true
	I0310 21:16:00.099950    7648 cli_runner.go:168] Completed: docker volume create cilium-20210310211546-6496 --label name.minikube.sigs.k8s.io=cilium-20210310211546-6496 --label created_by.minikube.sigs.k8s.io=true: (4.5793891s)
	I0310 21:16:00.099950    7648 oci.go:102] Successfully created a docker volume cilium-20210310211546-6496
	I0310 21:16:00.112636    7648 cli_runner.go:115] Run: docker run --rm --name cilium-20210310211546-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20210310211546-6496 --entrypoint /usr/bin/test -v cilium-20210310211546-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib
	I0310 21:16:06.277847    7648 cli_runner.go:168] Completed: docker run --rm --name cilium-20210310211546-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20210310211546-6496 --entrypoint /usr/bin/test -v cilium-20210310211546-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib: (6.1650083s)
	I0310 21:16:06.277847    7648 oci.go:106] Successfully prepared a docker volume cilium-20210310211546-6496
	I0310 21:16:06.278103    7648 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	I0310 21:16:06.278713    7648 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	I0310 21:16:06.278713    7648 kic.go:175] Starting extracting preloaded images to volume ...
	I0310 21:16:06.292986    7648 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20210310211546-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir
	I0310 21:16:06.300744    7648 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	W0310 21:16:07.044719    7648 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20210310211546-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I0310 21:16:07.045656    7648 kic.go:182] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20210310211546-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: System.Exception: The notification platform is unavailable.
	(the 500 response body is a binary-serialized .NET exception; its readable stack trace follows)
	���?   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)
	   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.WPF\PromptShareDirectory.cs:line 53
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__8.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 95
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 55
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.HttpApi\Controllers\FilesharingController.cs:line 21
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()
	(serialized exception detail: throwing method Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(System.String), assembly Windows.UI, ContentType=WindowsRuntime; RestrictedDescription: "The notification platform is unavailable."; the remainder of the binary payload was not decodable)
	See 'docker run --help'.
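The exit-125 failure above comes from Docker Desktop's file-sharing layer: mounting the host path `C:\Users\jenkins\.minikube\...` triggers a host-directory sharing prompt, and on a headless CI session the Windows toast notification backing that prompt cannot be created (`CreateToastNotifier` throws). A hedged sketch of the underlying containment check, where paths under an already-shared root never prompt; the shared-root list and helper name here are hypothetical, the real list lives in Docker Desktop's settings, not anywhere this sketch assumes.

```shell
# Illustrative check: is a host path under one of Docker Desktop's shared
# roots? Roots are passed in explicitly; this is not Docker Desktop's API.
is_shared() {
  path=$1; shift
  for root in "$@"; do
    case $path in
      "$root"/*|"$root") return 0 ;;
    esac
  done
  return 1
}

if is_shared "C:/Users/jenkins/.minikube/cache" "C:/Users"; then
  echo "path already shared"
else
  echo "path not shared; docker run -v would trigger the sharing prompt"
fi
```

Pre-sharing the drive (or the `.minikube` tree) in Docker Desktop's settings before the run is the usual way to keep a headless job out of this code path.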
	I0310 21:16:07.420839    7648 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.1182531s)
	I0310 21:16:07.422136    7648 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:7 ContainersRunning:7 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:86 OomKillDisable:true NGoroutines:70 SystemTime:2021-03-10 21:16:06.8952943 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 21:16:07.432724    7648 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0310 21:16:08.605487    7648 cli_runner.go:168] Completed: docker info --format "'{{json .SecurityOptions}}'": (1.1718812s)
	I0310 21:16:08.618106    7648 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20210310211546-6496 --name cilium-20210310211546-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20210310211546-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20210310211546-6496 --volume cilium-20210310211546-6496:/var --security-opt apparmor=unconfined --memory=1800mb --memory-swap=1800mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
	I0310 21:16:13.602906    7648 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20210310211546-6496 --name cilium-20210310211546-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20210310211546-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20210310211546-6496 --volume cilium-20210310211546-6496:/var --security-opt apparmor=unconfined --memory=1800mb --memory-swap=1800mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e: (4.9843776s)
	I0310 21:16:13.613008    7648 cli_runner.go:115] Run: docker container inspect cilium-20210310211546-6496 --format={{.State.Running}}
	I0310 21:16:14.352882    7648 cli_runner.go:115] Run: docker container inspect cilium-20210310211546-6496 --format={{.State.Status}}
	I0310 21:16:15.108202    7648 cli_runner.go:115] Run: docker exec cilium-20210310211546-6496 stat /var/lib/dpkg/alternatives/iptables
	I0310 21:16:16.705588    7648 cli_runner.go:168] Completed: docker exec cilium-20210310211546-6496 stat /var/lib/dpkg/alternatives/iptables: (1.5973873s)
	I0310 21:16:16.706035    7648 oci.go:278] the created container "cilium-20210310211546-6496" has a running status.
	I0310 21:16:16.706035    7648 kic.go:206] Creating ssh key for kic: C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa...
	I0310 21:16:16.883057    7648 kic_runner.go:188] docker (temp): C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0310 21:16:18.356681    7648 cli_runner.go:115] Run: docker container inspect cilium-20210310211546-6496 --format={{.State.Status}}
	I0310 21:16:18.977554    7648 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0310 21:16:18.977554    7648 kic_runner.go:115] Args: [docker exec --privileged cilium-20210310211546-6496 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0310 21:16:20.354531    7648 kic_runner.go:124] Done: [docker exec --privileged cilium-20210310211546-6496 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.3769792s)
	I0310 21:16:20.358643    7648 kic.go:240] ensuring only current user has permissions to key file located at : C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa...
	I0310 21:16:21.243176    7648 cli_runner.go:115] Run: docker container inspect cilium-20210310211546-6496 --format={{.State.Status}}
	I0310 21:16:21.819129    7648 machine.go:88] provisioning docker machine ...
	I0310 21:16:21.819563    7648 ubuntu.go:169] provisioning hostname "cilium-20210310211546-6496"
	I0310 21:16:21.829023    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:16:22.497689    7648 main.go:121] libmachine: Using SSH client type: native
	I0310 21:16:22.507837    7648 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55188 <nil> <nil>}
	I0310 21:16:22.507837    7648 main.go:121] libmachine: About to run SSH command:
	sudo hostname cilium-20210310211546-6496 && echo "cilium-20210310211546-6496" | sudo tee /etc/hostname
	I0310 21:16:22.520170    7648 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0310 21:16:27.392048    7648 main.go:121] libmachine: SSH cmd err, output: <nil>: cilium-20210310211546-6496
	
	I0310 21:16:27.402568    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:16:28.136738    7648 main.go:121] libmachine: Using SSH client type: native
	I0310 21:16:28.137701    7648 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55188 <nil> <nil>}
	I0310 21:16:28.137701    7648 main.go:121] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scilium-20210310211546-6496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cilium-20210310211546-6496/g' /etc/hosts;
				else 
					echo '127.0.1.1 cilium-20210310211546-6496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0310 21:16:29.102714    7648 main.go:121] libmachine: SSH cmd err, output: <nil>: 
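The `/etc/hosts` edit that was just run over SSH is idempotent: it only rewrites the `127.0.1.1` entry if the hostname is missing. The same logic can be exercised against a scratch file; the sample contents are made up for illustration, `sudo` is dropped, and `sed -i` assumes GNU sed.

```shell
# Idempotent hostname entry, as in the SSH command above, on a temp file.
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"
NAME=cilium-demo

if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    # An existing 127.0.1.1 line is rewritten in place...
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    # ...otherwise a fresh entry is appended.
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
cat "$HOSTS"
```

Running it a second time changes nothing, which is why minikube can safely re-issue it on every provision.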
	I0310 21:16:29.103156    7648 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	I0310 21:16:29.103156    7648 ubuntu.go:177] setting up certificates
	I0310 21:16:29.103156    7648 provision.go:83] configureAuth start
	I0310 21:16:29.112935    7648 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20210310211546-6496
	I0310 21:16:29.770121    7648 provision.go:137] copyHostCerts
	I0310 21:16:29.770742    7648 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	I0310 21:16:29.770742    7648 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	I0310 21:16:29.771388    7648 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	I0310 21:16:29.782108    7648 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	I0310 21:16:29.782108    7648 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	I0310 21:16:29.782108    7648 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	I0310 21:16:29.785119    7648 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	I0310 21:16:29.785119    7648 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	I0310 21:16:29.786116    7648 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	I0310 21:16:29.788106    7648 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.cilium-20210310211546-6496 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube cilium-20210310211546-6496]
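The SAN list logged above (container IP, loopback, `localhost`, `minikube`, and the profile hostname) can be reproduced with a plain self-signed openssl certificate. This is a sketch of equivalent output, not minikube's actual Go certificate code, and `-addext` requires OpenSSL 1.1.1 or newer.

```shell
# Self-signed server cert carrying the SANs from the log line above.
DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$DIR/server-key.pem" -out "$DIR/server.pem" \
  -subj "/O=jenkins.cilium-20210310211546-6496/CN=minikube" \
  -addext "subjectAltName=IP:172.17.0.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:cilium-20210310211546-6496" \
  2>/dev/null

# Inspect the SAN extension that ends up in the cert.
openssl x509 -in "$DIR/server.pem" -noout -text | grep -A1 'Subject Alternative Name'
```

The apiserver cert later in a minikube start follows the same shape, which is why the IP in the SAN list must match the container's network address.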
	I0310 21:16:30.023917    7648 provision.go:165] copyRemoteCerts
	I0310 21:16:30.038684    7648 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0310 21:16:30.045275    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:16:30.691985    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:16:31.431866    7648 ssh_runner.go:189] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.3931844s)
	I0310 21:16:31.432531    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0310 21:16:31.902562    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1253 bytes)
	I0310 21:16:32.483903    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0310 21:16:32.991193    7648 provision.go:86] duration metric: configureAuth took 3.8876411s
	I0310 21:16:33.000139    7648 ubuntu.go:193] setting minikube options for container-runtime
	I0310 21:16:33.014798    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:16:33.697895    7648 main.go:121] libmachine: Using SSH client type: native
	I0310 21:16:33.698382    7648 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55188 <nil> <nil>}
	I0310 21:16:33.698956    7648 main.go:121] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0310 21:16:34.769570    7648 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0310 21:16:34.769736    7648 ubuntu.go:71] root file system type: overlay
	I0310 21:16:34.770328    7648 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	I0310 21:16:34.790353    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:16:35.527457    7648 main.go:121] libmachine: Using SSH client type: native
	I0310 21:16:35.529070    7648 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55188 <nil> <nil>}
	I0310 21:16:35.529271    7648 main.go:121] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0310 21:16:36.707506    7648 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
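The doubled `ExecStart=` in the unit above is systemd's reset idiom: an overriding unit or drop-in may replace an inherited command only by first emitting an empty `ExecStart=`, otherwise systemd refuses to start a non-oneshot service with two start commands. A minimal stand-alone file showing the shape (written to a temp path; contents are a trimmed example, not the full unit minikube writes):

```shell
# Write a minimal drop-in using the reset-then-set ExecStart idiom.
DROPIN=$(mktemp)
cat > "$DROPIN" <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF

# Both lines are present: the first clears the inherited command,
# the second installs the replacement.
grep -c '^ExecStart=' "$DROPIN"   # prints 2
```

Without the empty line, systemd logs the "more than one ExecStart= setting" error quoted in the unit's own comments.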
	
	I0310 21:16:36.721035    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:16:37.369940    7648 main.go:121] libmachine: Using SSH client type: native
	I0310 21:16:37.370362    7648 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55188 <nil> <nil>}
	I0310 21:16:37.370581    7648 main.go:121] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0310 21:16:51.503309    7648 main.go:121] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-01-29 14:31:32.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-03-10 21:16:36.682285000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
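The `sudo diff -u ... || { mv ...; systemctl restart docker; }` command above only swaps in `docker.service.new` (and restarts the daemon) when the unit's content actually changed, so an unchanged config costs nothing. The same compare-then-swap idiom on scratch files, with the systemctl steps replaced by an echo (file names and contents are illustrative):

```shell
# Compare-then-swap: replace the live file only when the new one differs.
OLD=$(mktemp)
NEW=$(mktemp)
echo "version=1" > "$OLD"
echo "version=2" > "$NEW"

# diff exits non-zero on a difference, so the || branch performs the swap.
diff -u "$OLD" "$NEW" >/dev/null || { mv "$NEW" "$OLD"; echo "swapped; restart would run here"; }

# A later pass with identical content takes neither action.
NEW=$(mktemp)
echo "version=2" > "$NEW"
diff -u "$OLD" "$NEW" >/dev/null || echo "not reached: contents already match"
```

Keeping the restart inside the differs-branch is what makes repeated provisioning of an already-configured node fast and side-effect free.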
	
	I0310 21:16:51.503472    7648 machine.go:91] provisioned docker machine in 29.6843846s
	I0310 21:16:51.503472    7648 client.go:171] LocalClient.Create took 59.1717123s
	I0310 21:16:51.504124    7648 start.go:168] duration metric: libmachine.API.Create for "cilium-20210310211546-6496" took 59.1722108s
	I0310 21:16:51.504124    7648 start.go:267] post-start starting for "cilium-20210310211546-6496" (driver="docker")
	I0310 21:16:51.504265    7648 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0310 21:16:51.514957    7648 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0310 21:16:51.522127    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:16:52.109677    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:16:52.451452    7648 ssh_runner.go:149] Run: cat /etc/os-release
	I0310 21:16:52.484699    7648 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0310 21:16:52.485654    7648 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0310 21:16:52.485654    7648 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0310 21:16:52.485654    7648 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0310 21:16:52.485654    7648 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	I0310 21:16:52.486314    7648 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	I0310 21:16:52.489038    7648 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	I0310 21:16:52.491146    7648 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	I0310 21:16:52.512974    7648 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	I0310 21:16:52.564122    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	I0310 21:16:52.803426    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	I0310 21:16:53.190385    7648 start.go:270] post-start completed in 1.6862636s
	I0310 21:16:53.231217    7648 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20210310211546-6496
	I0310 21:16:53.818281    7648 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\config.json ...
	I0310 21:16:53.873454    7648 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0310 21:16:53.881015    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:16:54.497217    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:16:54.811004    7648 start.go:129] duration metric: createHost completed in 1m2.484093s
	I0310 21:16:54.811004    7648 start.go:80] releasing machines lock for "cilium-20210310211546-6496", held for 1m2.4848319s
	I0310 21:16:54.818877    7648 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20210310211546-6496
	I0310 21:16:55.424282    7648 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0310 21:16:55.436603    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:16:55.436603    7648 ssh_runner.go:149] Run: systemctl --version
	I0310 21:16:55.442682    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:16:56.084335    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:16:56.135045    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:16:56.724050    7648 ssh_runner.go:189] Completed: systemctl --version: (1.2874485s)
	I0310 21:16:56.729337    7648 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.3050567s)
	I0310 21:16:56.748709    7648 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0310 21:16:56.876780    7648 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 21:16:56.978673    7648 cruntime.go:206] skipping containerd shutdown because we are bound to it
	I0310 21:16:56.991056    7648 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0310 21:16:57.052842    7648 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0310 21:16:57.253661    7648 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 21:16:57.380378    7648 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0310 21:16:58.478430    7648 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.0980537s)
	I0310 21:16:58.492202    7648 ssh_runner.go:149] Run: sudo systemctl start docker
	I0310 21:16:58.608270    7648 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0310 21:16:59.466303    7648 out.go:150] * Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	I0310 21:16:59.476087    7648 cli_runner.go:115] Run: docker exec -t cilium-20210310211546-6496 dig +short host.docker.internal
	I0310 21:17:00.570611    7648 cli_runner.go:168] Completed: docker exec -t cilium-20210310211546-6496 dig +short host.docker.internal: (1.0945251s)
	I0310 21:17:00.570755    7648 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	I0310 21:17:00.579680    7648 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0310 21:17:00.614093    7648 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
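The bash one-liner above updates `/etc/hosts` idempotently: `grep -v` drops any stale `host.minikube.internal` entry, then `echo` appends the current one. The same transformation, sketched in-memory in Go (assuming tab-separated hosts lines, as in the log):

```go
package main

import (
	"fmt"
	"strings"
)

// addHostsEntry drops any existing "<tab>name" line and appends "ip<tab>name",
// replicating the grep -v / echo pipeline from the log (in-memory sketch;
// the real command rewrites /etc/hosts over SSH).
func addHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	fmt.Print(addHostsEntry("127.0.0.1\tlocalhost\n", "192.168.65.2", "host.minikube.internal"))
}
```

Because the old entry is filtered out first, re-running the step after the host IP changes leaves exactly one entry per name.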
	I0310 21:17:00.771471    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:17:01.438931    7648 localpath.go:92] copying C:\Users\jenkins\.minikube\client.crt -> C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\client.crt
	I0310 21:17:01.449160    7648 localpath.go:117] copying C:\Users\jenkins\.minikube\client.key -> C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\client.key
	I0310 21:17:01.452217    7648 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	I0310 21:17:01.452217    7648 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	I0310 21:17:01.460425    7648 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 21:17:02.054299    7648 docker.go:423] Got preloaded images: 
	I0310 21:17:02.054511    7648 docker.go:429] k8s.gcr.io/kube-proxy:v1.20.2 wasn't preloaded
	I0310 21:17:02.057485    7648 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0310 21:17:02.146370    7648 ssh_runner.go:149] Run: which lz4
	I0310 21:17:02.252478    7648 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0310 21:17:02.339889    7648 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0310 21:17:02.340316    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (515083977 bytes)
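The `stat -c "%s %y"` probe above is an existence/size check: exit status 1 ("No such file or directory") means the preload tarball must be copied over. A simplified local sketch of that decision (the real check runs over SSH via `ssh_runner`):

```go
package main

import (
	"fmt"
	"os"
)

// needsTransfer reports whether src must be copied because dst is missing or
// differs in size — mirroring the stat-based existence check in the log.
// (Local stand-in for the remote stat; illustrative only.)
func needsTransfer(dstPath string, srcSize int64) bool {
	info, err := os.Stat(dstPath)
	if err != nil { // e.g. "No such file or directory" → copy the tarball
		return true
	}
	return info.Size() != srcSize
}

func main() {
	fmt.Println(needsTransfer("/preloaded.tar.lz4", 515083977))
}
```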
	I0310 21:18:44.676163    7648 docker.go:388] Took 102.428911 seconds to copy over tarball
	I0310 21:18:44.686842    7648 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0310 21:19:35.104840    7648 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (50.4180654s)
	I0310 21:19:35.104840    7648 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0310 21:19:36.735853    7648 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0310 21:19:36.794093    7648 ssh_runner.go:316] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3125 bytes)
	I0310 21:19:37.058988    7648 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0310 21:19:38.142432    7648 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.0834458s)
	I0310 21:19:38.153657    7648 ssh_runner.go:149] Run: sudo systemctl restart docker
	I0310 21:19:44.903376    7648 ssh_runner.go:189] Completed: sudo systemctl restart docker: (6.7497275s)
	I0310 21:19:44.913905    7648 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 21:19:45.911762    7648 docker.go:423] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.20.2
	k8s.gcr.io/kube-controller-manager:v1.20.2
	k8s.gcr.io/kube-apiserver:v1.20.2
	k8s.gcr.io/kube-scheduler:v1.20.2
	kubernetesui/dashboard:v2.1.0
	gcr.io/k8s-minikube/storage-provisioner:v4
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0310 21:19:45.911965    7648 cache_images.go:73] Images are preloaded, skipping loading
	I0310 21:19:45.919309    7648 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0310 21:19:47.858497    7648 ssh_runner.go:189] Completed: docker info --format {{.CgroupDriver}}: (1.9391903s)
	I0310 21:19:47.858497    7648 cni.go:74] Creating CNI manager for "cilium"
	I0310 21:19:47.858497    7648 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0310 21:19:47.858497    7648 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.2 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cilium-20210310211546-6496 NodeName:cilium-20210310211546-6496 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0310 21:19:47.858497    7648 kubeadm.go:154] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.0.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "cilium-20210310211546-6496"
	  kubeletExtraArgs:
	    node-ip: 172.17.0.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.0.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	
	I0310 21:19:47.858497    7648 kubeadm.go:919] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=cilium-20210310211546-6496 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=172.17.0.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.2 ClusterName:cilium-20210310211546-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:}
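The `kubeadm config:` block above is rendered from the options logged at `kubeadm.go:150`. A cut-down sketch of that templating step using `text/template`; the struct and template here cover only a few of the fields and are illustrative, not minikube's actual types:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// kubeadmOpts mirrors a handful of the fields logged at kubeadm.go:150;
// the real options struct has many more. Illustrative only.
type kubeadmOpts struct {
	KubernetesVersion   string
	ControlPlaneAddress string
	APIServerPort       int
	PodSubnet           string
	ServiceCIDR         string
}

// clusterTmpl is a cut-down stand-in for the template that produces the
// ClusterConfiguration section of the kubeadm config above.
const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func renderKubeadm(o kubeadmOpts) string {
	var buf bytes.Buffer
	template.Must(template.New("cfg").Parse(clusterTmpl)).Execute(&buf, o)
	return buf.String()
}

func main() {
	fmt.Print(renderKubeadm(kubeadmOpts{"v1.20.2", "control-plane.minikube.internal", 8443, "10.244.0.0/16", "10.96.0.0/12"}))
}
```

The rendered YAML is then shipped to the node as `/var/tmp/minikube/kubeadm.yaml.new`, as the `scp memory -->` lines below show.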
	I0310 21:19:47.877206    7648 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
	I0310 21:19:48.018730    7648 binaries.go:44] Found k8s binaries, skipping transfer
	I0310 21:19:48.031463    7648 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0310 21:19:48.143059    7648 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I0310 21:19:48.404411    7648 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0310 21:19:48.698866    7648 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1852 bytes)
	I0310 21:19:49.196109    7648 ssh_runner.go:149] Run: grep 172.17.0.2	control-plane.minikube.internal$ /etc/hosts
	I0310 21:19:49.244902    7648 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "172.17.0.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0310 21:19:49.470157    7648 certs.go:52] Setting up C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496 for IP: 172.17.0.2
	I0310 21:19:49.471061    7648 certs.go:171] skipping minikubeCA CA generation: C:\Users\jenkins\.minikube\ca.key
	I0310 21:19:49.471453    7648 certs.go:171] skipping proxyClientCA CA generation: C:\Users\jenkins\.minikube\proxy-client-ca.key
	I0310 21:19:49.472272    7648 certs.go:275] skipping minikube-user signed cert generation: C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\client.key
	I0310 21:19:49.472489    7648 certs.go:279] generating minikube signed cert: C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\apiserver.key.7b749c5f
	I0310 21:19:49.472489    7648 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\apiserver.crt.7b749c5f with IP's: [172.17.0.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0310 21:19:49.919364    7648 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\apiserver.crt.7b749c5f ...
	I0310 21:19:49.919690    7648 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\apiserver.crt.7b749c5f: {Name:mk97460ca42861b0d6a09ba14b19b51fe8ab3377 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:19:49.939818    7648 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\apiserver.key.7b749c5f ...
	I0310 21:19:49.940908    7648 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\apiserver.key.7b749c5f: {Name:mk94cc14d9783b08a03c5b374844574766387a4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:19:49.956255    7648 certs.go:290] copying C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\apiserver.crt.7b749c5f -> C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\apiserver.crt
	I0310 21:19:49.961134    7648 certs.go:294] copying C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\apiserver.key.7b749c5f -> C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\apiserver.key
	I0310 21:19:49.964364    7648 certs.go:279] generating aggregator signed cert: C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\proxy-client.key
	I0310 21:19:49.964364    7648 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\proxy-client.crt with IP's: []
	I0310 21:19:50.187942    7648 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\proxy-client.crt ...
	I0310 21:19:50.187942    7648 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\proxy-client.crt: {Name:mkb3448c74da2f7fe3a703692cc6730ccf6431f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:19:50.203759    7648 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\proxy-client.key ...
	I0310 21:19:50.203759    7648 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\proxy-client.key: {Name:mkf4cd66eceb549aaf0c768a7cee083da4c92e21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:19:50.226890    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992.pem (1338 bytes)
	W0310 21:19:50.227541    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.227747    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140.pem (1338 bytes)
	W0310 21:19:50.227747    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.227747    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156.pem (1338 bytes)
	W0310 21:19:50.227747    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.227747    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056.pem (1338 bytes)
	W0310 21:19:50.229146    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.229621    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476.pem (1338 bytes)
	W0310 21:19:50.230246    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.230545    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728.pem (1338 bytes)
	W0310 21:19:50.231328    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.231911    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984.pem (1338 bytes)
	W0310 21:19:50.232799    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.233388    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232.pem (1338 bytes)
	W0310 21:19:50.233966    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.235163    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512.pem (1338 bytes)
	W0310 21:19:50.235163    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.235900    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056.pem (1338 bytes)
	W0310 21:19:50.235900    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.235900    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516.pem (1338 bytes)
	W0310 21:19:50.236997    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.236997    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352.pem (1338 bytes)
	W0310 21:19:50.236997    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.237758    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920.pem (1338 bytes)
	W0310 21:19:50.237758    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.237758    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052.pem (1338 bytes)
	W0310 21:19:50.238350    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.238626    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452.pem (1338 bytes)
	W0310 21:19:50.238626    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.238626    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588.pem (1338 bytes)
	W0310 21:19:50.239487    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.239487    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944.pem (1338 bytes)
	W0310 21:19:50.239487    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.239487    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040.pem (1338 bytes)
	W0310 21:19:50.240474    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.240474    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172.pem (1338 bytes)
	W0310 21:19:50.240474    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.240474    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372.pem (1338 bytes)
	W0310 21:19:50.241390    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.241390    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396.pem (1338 bytes)
	W0310 21:19:50.241390    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.241390    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700.pem (1338 bytes)
	W0310 21:19:50.242247    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.242247    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736.pem (1338 bytes)
	W0310 21:19:50.242247    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.242247    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368.pem (1338 bytes)
	W0310 21:19:50.242247    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.243245    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492.pem (1338 bytes)
	W0310 21:19:50.243245    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.243245    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496.pem (1338 bytes)
	W0310 21:19:50.248272    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.248272    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552.pem (1338 bytes)
	W0310 21:19:50.249247    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.249247    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692.pem (1338 bytes)
	W0310 21:19:50.249247    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.249247    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856.pem (1338 bytes)
	W0310 21:19:50.249247    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.250254    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024.pem (1338 bytes)
	W0310 21:19:50.250254    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.250254    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160.pem (1338 bytes)
	W0310 21:19:50.250254    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.250254    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432.pem (1338 bytes)
	W0310 21:19:50.251230    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.251230    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440.pem (1338 bytes)
	W0310 21:19:50.251230    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.251230    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452.pem (1338 bytes)
	W0310 21:19:50.251230    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.251230    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800.pem (1338 bytes)
	W0310 21:19:50.252246    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.252246    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464.pem (1338 bytes)
	W0310 21:19:50.252246    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.252246    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748.pem (1338 bytes)
	W0310 21:19:50.252246    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.253272    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088.pem (1338 bytes)
	W0310 21:19:50.253272    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.253272    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520.pem (1338 bytes)
	W0310 21:19:50.253272    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:50.253272    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca-key.pem (1679 bytes)
	I0310 21:19:50.254262    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca.pem (1078 bytes)
	I0310 21:19:50.254262    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\cert.pem (1123 bytes)
	I0310 21:19:50.254262    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\key.pem (1679 bytes)
	I0310 21:19:50.261236    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0310 21:19:50.455721    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0310 21:19:50.822265    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0310 21:19:51.263035    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0310 21:19:51.492590    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0310 21:19:51.866394    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0310 21:19:52.120932    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0310 21:19:52.398978    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0310 21:19:52.665642    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4944.pem --> /usr/share/ca-certificates/4944.pem (1338 bytes)
	I0310 21:19:52.883350    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5372.pem --> /usr/share/ca-certificates/5372.pem (1338 bytes)
	I0310 21:19:53.332245    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6856.pem --> /usr/share/ca-certificates/6856.pem (1338 bytes)
	I0310 21:19:53.565653    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4052.pem --> /usr/share/ca-certificates/4052.pem (1338 bytes)
	I0310 21:19:53.899134    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8748.pem --> /usr/share/ca-certificates/8748.pem (1338 bytes)
	I0310 21:19:54.193431    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6692.pem --> /usr/share/ca-certificates/6692.pem (1338 bytes)
	I0310 21:19:54.648821    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8464.pem --> /usr/share/ca-certificates/8464.pem (1338 bytes)
	I0310 21:19:55.076049    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6368.pem --> /usr/share/ca-certificates/6368.pem (1338 bytes)
	I0310 21:19:55.446863    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1156.pem --> /usr/share/ca-certificates/1156.pem (1338 bytes)
	I0310 21:19:55.847422    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\2512.pem --> /usr/share/ca-certificates/2512.pem (1338 bytes)
	I0310 21:19:56.224809    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\352.pem --> /usr/share/ca-certificates/352.pem (1338 bytes)
	I0310 21:19:56.725130    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\12056.pem --> /usr/share/ca-certificates/12056.pem (1338 bytes)
	I0310 21:19:56.988343    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5736.pem --> /usr/share/ca-certificates/5736.pem (1338 bytes)
	I0310 21:19:57.179157    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5172.pem --> /usr/share/ca-certificates/5172.pem (1338 bytes)
	I0310 21:19:57.821432    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\232.pem --> /usr/share/ca-certificates/232.pem (1338 bytes)
	I0310 21:19:58.012653    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6552.pem --> /usr/share/ca-certificates/6552.pem (1338 bytes)
	I0310 21:19:58.446641    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1140.pem --> /usr/share/ca-certificates/1140.pem (1338 bytes)
	I0310 21:19:58.619909    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5700.pem --> /usr/share/ca-certificates/5700.pem (1338 bytes)
	I0310 21:19:59.040186    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4452.pem --> /usr/share/ca-certificates/4452.pem (1338 bytes)
	I0310 21:19:59.381072    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9520.pem --> /usr/share/ca-certificates/9520.pem (1338 bytes)
	I0310 21:19:59.674841    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5396.pem --> /usr/share/ca-certificates/5396.pem (1338 bytes)
	I0310 21:19:59.993784    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3920.pem --> /usr/share/ca-certificates/3920.pem (1338 bytes)
	I0310 21:20:00.181038    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1728.pem --> /usr/share/ca-certificates/1728.pem (1338 bytes)
	I0310 21:20:00.423311    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1984.pem --> /usr/share/ca-certificates/1984.pem (1338 bytes)
	I0310 21:20:00.549863    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7432.pem --> /usr/share/ca-certificates/7432.pem (1338 bytes)
	I0310 21:20:00.753493    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9088.pem --> /usr/share/ca-certificates/9088.pem (1338 bytes)
	I0310 21:20:00.979257    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4588.pem --> /usr/share/ca-certificates/4588.pem (1338 bytes)
	I0310 21:20:01.198595    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5040.pem --> /usr/share/ca-certificates/5040.pem (1338 bytes)
	I0310 21:20:01.397227    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6496.pem --> /usr/share/ca-certificates/6496.pem (1338 bytes)
	I0310 21:20:01.561969    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0310 21:20:01.880499    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\10992.pem --> /usr/share/ca-certificates/10992.pem (1338 bytes)
	I0310 21:20:02.105521    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3516.pem --> /usr/share/ca-certificates/3516.pem (1338 bytes)
	I0310 21:20:02.368274    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7160.pem --> /usr/share/ca-certificates/7160.pem (1338 bytes)
	I0310 21:20:02.573574    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\800.pem --> /usr/share/ca-certificates/800.pem (1338 bytes)
	I0310 21:20:02.860938    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1476.pem --> /usr/share/ca-certificates/1476.pem (1338 bytes)
	I0310 21:20:03.187229    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7024.pem --> /usr/share/ca-certificates/7024.pem (1338 bytes)
	I0310 21:20:03.486889    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6492.pem --> /usr/share/ca-certificates/6492.pem (1338 bytes)
	I0310 21:20:03.753838    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3056.pem --> /usr/share/ca-certificates/3056.pem (1338 bytes)
	I0310 21:20:03.942909    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7440.pem --> /usr/share/ca-certificates/7440.pem (1338 bytes)
	I0310 21:20:04.282794    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	I0310 21:20:04.490657    7648 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0310 21:20:04.666798    7648 ssh_runner.go:149] Run: openssl version
	I0310 21:20:04.721768    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1140.pem && ln -fs /usr/share/ca-certificates/1140.pem /etc/ssl/certs/1140.pem"
	I0310 21:20:04.791917    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1140.pem
	I0310 21:20:04.864431    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 02:25 /usr/share/ca-certificates/1140.pem
	I0310 21:20:04.874642    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1140.pem
	I0310 21:20:04.941496    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1140.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:05.022355    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5700.pem && ln -fs /usr/share/ca-certificates/5700.pem /etc/ssl/certs/5700.pem"
	I0310 21:20:05.097657    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5700.pem
	I0310 21:20:05.134383    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  1 19:58 /usr/share/ca-certificates/5700.pem
	I0310 21:20:05.148888    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5700.pem
	I0310 21:20:05.202235    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5700.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:05.279386    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12056.pem && ln -fs /usr/share/ca-certificates/12056.pem /etc/ssl/certs/12056.pem"
	I0310 21:20:05.385810    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/12056.pem
	I0310 21:20:05.425481    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  6 07:21 /usr/share/ca-certificates/12056.pem
	I0310 21:20:05.438789    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12056.pem
	I0310 21:20:05.511414    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12056.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:05.571127    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5736.pem && ln -fs /usr/share/ca-certificates/5736.pem /etc/ssl/certs/5736.pem"
	I0310 21:20:05.683987    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5736.pem
	I0310 21:20:05.719269    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 25 23:18 /usr/share/ca-certificates/5736.pem
	I0310 21:20:05.732146    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5736.pem
	I0310 21:20:05.778833    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5736.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:05.853468    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5172.pem && ln -fs /usr/share/ca-certificates/5172.pem /etc/ssl/certs/5172.pem"
	I0310 21:20:05.955399    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5172.pem
	I0310 21:20:06.025691    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 26 21:25 /usr/share/ca-certificates/5172.pem
	I0310 21:20:06.048563    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5172.pem
	I0310 21:20:06.092612    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5172.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:06.178977    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/232.pem && ln -fs /usr/share/ca-certificates/232.pem /etc/ssl/certs/232.pem"
	I0310 21:20:06.348671    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/232.pem
	I0310 21:20:06.388722    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 28 02:13 /usr/share/ca-certificates/232.pem
	I0310 21:20:06.407608    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/232.pem
	I0310 21:20:06.455960    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/232.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:06.539330    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6552.pem && ln -fs /usr/share/ca-certificates/6552.pem /etc/ssl/certs/6552.pem"
	I0310 21:20:06.616668    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6552.pem
	I0310 21:20:06.679707    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 19 22:08 /usr/share/ca-certificates/6552.pem
	I0310 21:20:06.689708    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6552.pem
	I0310 21:20:06.732635    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6552.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:06.807564    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5396.pem && ln -fs /usr/share/ca-certificates/5396.pem /etc/ssl/certs/5396.pem"
	I0310 21:20:06.914267    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5396.pem
	I0310 21:20:06.944082    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  8 23:38 /usr/share/ca-certificates/5396.pem
	I0310 21:20:06.960447    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5396.pem
	I0310 21:20:07.040502    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5396.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:07.122274    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3920.pem && ln -fs /usr/share/ca-certificates/3920.pem /etc/ssl/certs/3920.pem"
	I0310 21:20:07.248099    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3920.pem
	I0310 21:20:07.290740    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 22:06 /usr/share/ca-certificates/3920.pem
	I0310 21:20:07.301914    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3920.pem
	I0310 21:20:07.389647    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3920.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:07.452924    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4452.pem && ln -fs /usr/share/ca-certificates/4452.pem /etc/ssl/certs/4452.pem"
	I0310 21:20:07.509124    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4452.pem
	I0310 21:20:07.559566    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 22 23:48 /usr/share/ca-certificates/4452.pem
	I0310 21:20:07.574957    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4452.pem
	I0310 21:20:07.650923    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4452.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:07.730693    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9520.pem && ln -fs /usr/share/ca-certificates/9520.pem /etc/ssl/certs/9520.pem"
	I0310 21:20:07.788137    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9520.pem
	I0310 21:20:07.827537    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 14:54 /usr/share/ca-certificates/9520.pem
	I0310 21:20:07.837863    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9520.pem
	I0310 21:20:07.954514    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9520.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:08.055834    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6496.pem && ln -fs /usr/share/ca-certificates/6496.pem /etc/ssl/certs/6496.pem"
	I0310 21:20:08.131202    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6496.pem
	I0310 21:20:08.165624    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 19:16 /usr/share/ca-certificates/6496.pem
	I0310 21:20:08.181243    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6496.pem
	I0310 21:20:08.257288    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6496.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:08.338349    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0310 21:20:08.428444    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0310 21:20:08.503079    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Jan  5 23:33 /usr/share/ca-certificates/minikubeCA.pem
	I0310 21:20:08.518351    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0310 21:20:08.604605    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0310 21:20:08.715698    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1728.pem && ln -fs /usr/share/ca-certificates/1728.pem /etc/ssl/certs/1728.pem"
	I0310 21:20:08.817071    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1728.pem
	I0310 21:20:08.868403    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 20:57 /usr/share/ca-certificates/1728.pem
	I0310 21:20:08.884564    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1728.pem
	I0310 21:20:08.932484    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:10.180220    7648 ssh_runner.go:189] Completed: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0": (1.2477367s)
	I0310 21:20:10.190473    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1984.pem && ln -fs /usr/share/ca-certificates/1984.pem /etc/ssl/certs/1984.pem"
	I0310 21:20:10.268771    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1984.pem
	I0310 21:20:10.338151    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 21:55 /usr/share/ca-certificates/1984.pem
	I0310 21:20:10.355278    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1984.pem
	I0310 21:20:10.425971    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1984.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:10.497019    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7432.pem && ln -fs /usr/share/ca-certificates/7432.pem /etc/ssl/certs/7432.pem"
	I0310 21:20:10.653124    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7432.pem
	I0310 21:20:10.686481    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 17:58 /usr/share/ca-certificates/7432.pem
	I0310 21:20:10.708671    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7432.pem
	I0310 21:20:10.880254    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7432.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:10.963456    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9088.pem && ln -fs /usr/share/ca-certificates/9088.pem /etc/ssl/certs/9088.pem"
	I0310 21:20:11.068423    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9088.pem
	I0310 21:20:11.145427    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 00:22 /usr/share/ca-certificates/9088.pem
	I0310 21:20:11.174316    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9088.pem
	I0310 21:20:11.270079    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9088.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:11.418355    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4588.pem && ln -fs /usr/share/ca-certificates/4588.pem /etc/ssl/certs/4588.pem"
	I0310 21:20:11.552920    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4588.pem
	I0310 21:20:11.604091    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  3 21:41 /usr/share/ca-certificates/4588.pem
	I0310 21:20:11.613004    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4588.pem
	I0310 21:20:11.711564    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4588.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:11.818975    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5040.pem && ln -fs /usr/share/ca-certificates/5040.pem /etc/ssl/certs/5040.pem"
	I0310 21:20:11.953596    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5040.pem
	I0310 21:20:11.991904    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 08:36 /usr/share/ca-certificates/5040.pem
	I0310 21:20:12.011000    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5040.pem
	I0310 21:20:12.079047    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5040.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:12.139988    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1476.pem && ln -fs /usr/share/ca-certificates/1476.pem /etc/ssl/certs/1476.pem"
	I0310 21:20:12.219518    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1476.pem
	I0310 21:20:12.257643    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:50 /usr/share/ca-certificates/1476.pem
	I0310 21:20:12.267694    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1476.pem
	I0310 21:20:12.360618    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1476.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:12.461836    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7024.pem && ln -fs /usr/share/ca-certificates/7024.pem /etc/ssl/certs/7024.pem"
	I0310 21:20:12.708194    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7024.pem
	I0310 21:20:12.747527    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 23:11 /usr/share/ca-certificates/7024.pem
	I0310 21:20:12.761975    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7024.pem
	I0310 21:20:12.823969    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7024.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:12.930268    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10992.pem && ln -fs /usr/share/ca-certificates/10992.pem /etc/ssl/certs/10992.pem"
	I0310 21:20:13.117178    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/10992.pem
	I0310 21:20:13.164716    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 21:44 /usr/share/ca-certificates/10992.pem
	I0310 21:20:13.174821    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10992.pem
	I0310 21:20:13.211768    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10992.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:13.359363    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3516.pem && ln -fs /usr/share/ca-certificates/3516.pem /etc/ssl/certs/3516.pem"
	I0310 21:20:13.510525    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3516.pem
	I0310 21:20:13.581956    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 19:10 /usr/share/ca-certificates/3516.pem
	I0310 21:20:13.591660    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3516.pem
	I0310 21:20:13.699954    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3516.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:13.785770    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7160.pem && ln -fs /usr/share/ca-certificates/7160.pem /etc/ssl/certs/7160.pem"
	I0310 21:20:13.888111    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7160.pem
	I0310 21:20:13.943345    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 12 04:51 /usr/share/ca-certificates/7160.pem
	I0310 21:20:13.962144    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7160.pem
	I0310 21:20:14.027804    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7160.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:14.122373    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/800.pem && ln -fs /usr/share/ca-certificates/800.pem /etc/ssl/certs/800.pem"
	I0310 21:20:14.216297    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/800.pem
	I0310 21:20:14.255824    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 24 01:48 /usr/share/ca-certificates/800.pem
	I0310 21:20:14.274349    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/800.pem
	I0310 21:20:14.318274    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/800.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:14.478501    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7440.pem && ln -fs /usr/share/ca-certificates/7440.pem /etc/ssl/certs/7440.pem"
	I0310 21:20:14.560514    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7440.pem
	I0310 21:20:14.586514    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 13 14:39 /usr/share/ca-certificates/7440.pem
	I0310 21:20:14.600017    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7440.pem
	I0310 21:20:14.743531    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7440.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:14.839899    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7452.pem && ln -fs /usr/share/ca-certificates/7452.pem /etc/ssl/certs/7452.pem"
	I0310 21:20:14.993797    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7452.pem
	I0310 21:20:15.040030    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 20 00:41 /usr/share/ca-certificates/7452.pem
	I0310 21:20:15.058609    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7452.pem
	I0310 21:20:15.117345    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7452.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:15.243423    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6492.pem && ln -fs /usr/share/ca-certificates/6492.pem /etc/ssl/certs/6492.pem"
	I0310 21:20:15.390045    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6492.pem
	I0310 21:20:15.417232    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 01:11 /usr/share/ca-certificates/6492.pem
	I0310 21:20:15.426720    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6492.pem
	I0310 21:20:15.554709    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6492.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:15.644683    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3056.pem && ln -fs /usr/share/ca-certificates/3056.pem /etc/ssl/certs/3056.pem"
	I0310 21:20:15.699004    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3056.pem
	I0310 21:20:15.740189    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:11 /usr/share/ca-certificates/3056.pem
	I0310 21:20:15.745989    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3056.pem
	I0310 21:20:15.795869    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3056.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:15.907408    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6856.pem && ln -fs /usr/share/ca-certificates/6856.pem /etc/ssl/certs/6856.pem"
	I0310 21:20:15.983438    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6856.pem
	I0310 21:20:16.056122    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 00:21 /usr/share/ca-certificates/6856.pem
	I0310 21:20:16.072079    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6856.pem
	I0310 21:20:16.155123    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6856.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:16.246747    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4052.pem && ln -fs /usr/share/ca-certificates/4052.pem /etc/ssl/certs/4052.pem"
	I0310 21:20:16.360992    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4052.pem
	I0310 21:20:16.406468    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 18:40 /usr/share/ca-certificates/4052.pem
	I0310 21:20:16.418886    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4052.pem
	I0310 21:20:16.505780    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4052.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:16.622554    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4944.pem && ln -fs /usr/share/ca-certificates/4944.pem /etc/ssl/certs/4944.pem"
	I0310 21:20:16.828342    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4944.pem
	I0310 21:20:16.859486    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  9 23:40 /usr/share/ca-certificates/4944.pem
	I0310 21:20:16.876564    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4944.pem
	I0310 21:20:16.996731    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4944.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:17.114042    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5372.pem && ln -fs /usr/share/ca-certificates/5372.pem /etc/ssl/certs/5372.pem"
	I0310 21:20:17.203151    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5372.pem
	I0310 21:20:17.252121    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 23 00:40 /usr/share/ca-certificates/5372.pem
	I0310 21:20:17.265954    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5372.pem
	I0310 21:20:17.327616    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5372.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:17.466508    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8464.pem && ln -fs /usr/share/ca-certificates/8464.pem /etc/ssl/certs/8464.pem"
	I0310 21:20:17.581704    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8464.pem
	I0310 21:20:17.613416    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:32 /usr/share/ca-certificates/8464.pem
	I0310 21:20:17.632278    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8464.pem
	I0310 21:20:17.693059    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8464.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:17.817068    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6368.pem && ln -fs /usr/share/ca-certificates/6368.pem /etc/ssl/certs/6368.pem"
	I0310 21:20:18.061187    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6368.pem
	I0310 21:20:18.174132    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 19:11 /usr/share/ca-certificates/6368.pem
	I0310 21:20:18.183529    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6368.pem
	I0310 21:20:18.282180    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6368.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:18.455720    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8748.pem && ln -fs /usr/share/ca-certificates/8748.pem /etc/ssl/certs/8748.pem"
	I0310 21:20:18.659785    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8748.pem
	I0310 21:20:18.732247    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 19:09 /usr/share/ca-certificates/8748.pem
	I0310 21:20:18.750371    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8748.pem
	I0310 21:20:18.889892    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8748.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:19.125387    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6692.pem && ln -fs /usr/share/ca-certificates/6692.pem /etc/ssl/certs/6692.pem"
	I0310 21:20:19.269771    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6692.pem
	I0310 21:20:19.415084    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 20:42 /usr/share/ca-certificates/6692.pem
	I0310 21:20:19.440323    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6692.pem
	I0310 21:20:19.564493    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6692.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:19.782285    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2512.pem && ln -fs /usr/share/ca-certificates/2512.pem /etc/ssl/certs/2512.pem"
	I0310 21:20:20.112355    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2512.pem
	I0310 21:20:20.160820    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:32 /usr/share/ca-certificates/2512.pem
	I0310 21:20:20.168813    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2512.pem
	I0310 21:20:20.240026    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2512.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:20.314488    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/352.pem && ln -fs /usr/share/ca-certificates/352.pem /etc/ssl/certs/352.pem"
	I0310 21:20:20.491246    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/352.pem
	I0310 21:20:20.597173    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 12 14:51 /usr/share/ca-certificates/352.pem
	I0310 21:20:20.607684    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/352.pem
	I0310 21:20:20.731636    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/352.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:20.895769    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1156.pem && ln -fs /usr/share/ca-certificates/1156.pem /etc/ssl/certs/1156.pem"
	I0310 21:20:21.041819    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1156.pem
	I0310 21:20:21.128679    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 00:26 /usr/share/ca-certificates/1156.pem
	I0310 21:20:21.148815    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1156.pem
	I0310 21:20:21.288807    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1156.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:21.456738    7648 kubeadm.go:385] StartCluster: {Name:cilium-20210310211546-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:cilium-20210310211546-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 21:20:21.467362    7648 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0310 21:20:23.099741    7648 ssh_runner.go:189] Completed: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}: (1.6323818s)
	I0310 21:20:23.112013    7648 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0310 21:20:23.389322    7648 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0310 21:20:23.572039    7648 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	I0310 21:20:23.584120    7648 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0310 21:20:23.801736    7648 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0310 21:20:23.801736    7648 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0310 21:25:58.900155    7648 out.go:150]   - Generating certificates and keys ...
	I0310 21:25:58.922169    7648 out.go:150]   - Booting up control plane ...
	I0310 21:25:58.929431    7648 out.go:150]   - Configuring RBAC rules ...
	I0310 21:25:59.029333    7648 cni.go:74] Creating CNI manager for "cilium"
	I0310 21:25:59.041317    7648 out.go:129] * Configuring Cilium (Container Networking Interface) ...
	I0310 21:25:59.055985    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "grep 'bpffs /sys/fs/bpf' /proc/mounts || sudo mount bpffs -t bpf /sys/fs/bpf"
	I0310 21:26:00.880834    7648 ssh_runner.go:189] Completed: sudo /bin/bash -c "grep 'bpffs /sys/fs/bpf' /proc/mounts || sudo mount bpffs -t bpf /sys/fs/bpf": (1.8246791s)
	I0310 21:26:00.885066    7648 cni.go:160] applying CNI manifest using /var/lib/minikube/binaries/v1.20.2/kubectl ...
	I0310 21:26:00.885066    7648 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (18465 bytes)
	I0310 21:26:02.702431    7648 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0310 21:27:36.396289    7648 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1m33.694322s)
	I0310 21:27:36.396957    7648 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0310 21:27:36.421234    7648 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.18.1 minikube.k8s.io/commit=4d52d607da107e3e4541e40b81376520ee87d4c2 minikube.k8s.io/name=cilium-20210310211546-6496 minikube.k8s.io/updated_at=2021_03_10T21_27_36_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0310 21:27:36.431808    7648 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0310 21:27:42.966276    7648 ssh_runner.go:189] Completed: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": (6.5690387s)
	I0310 21:27:42.966497    7648 ops.go:34] apiserver oom_adj: -16
	I0310 21:28:04.567682    7648 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.18.1 minikube.k8s.io/commit=4d52d607da107e3e4541e40b81376520ee87d4c2 minikube.k8s.io/name=cilium-20210310211546-6496 minikube.k8s.io/updated_at=2021_03_10T21_27_36_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (28.146565s)
	I0310 21:28:07.415270    7648 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (30.9835906s)
	I0310 21:28:07.430452    7648 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0310 21:28:18.635504    7648 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (11.2050955s)
	I0310 21:28:18.635904    7648 kubeadm.go:995] duration metric: took 42.2391198s to wait for elevateKubeSystemPrivileges.
	I0310 21:28:18.636346    7648 kubeadm.go:387] StartCluster complete in 7m57.1812805s
	I0310 21:28:18.636552    7648 settings.go:142] acquiring lock: {Name:mk153ab5d002fd4991700e22f3eda9a43ee295f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:28:18.637173    7648 settings.go:150] Updating kubeconfig:  C:\Users\jenkins/.kube/config
	I0310 21:28:18.644567    7648 lock.go:36] WriteFile acquiring C:\Users\jenkins/.kube/config: {Name:mk815881434fcb75cb1a8bc7f2c24cf067660bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:28:19.791142    7648 kapi.go:233] deployment "coredns" in namespace "kube-system" and context "cilium-20210310211546-6496" rescaled to 1
	I0310 21:28:19.791435    7648 start.go:203] Will wait 5m0s for node up to 
	I0310 21:28:19.791584    7648 addons.go:381] enableAddons start: toEnable=map[], additional=[]
	I0310 21:28:19.793131    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120022529-1140 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140
	I0310 21:28:19.793131    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115023213-8464 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464
	I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210301195830-5700 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700
	I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210128021318-232 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232
	I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107002220-9088 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088
	I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210225231842-5736 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736
	I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210220004129-7452 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452
	I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210303214129-4588 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588
	I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210114204234-6692 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692
	I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120175851-7432 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432
	I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120214442-10992 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992
	I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210126212539-5172 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172
	I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219220622-3920 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920
	I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107190945-8748 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748
	I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304184021-4052 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052
	I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210213143925-7440 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440
	I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310191609-6496 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496
	I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120231122-7024 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024
	I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106002159-6856 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856
	I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106215525-1984 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984
	I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304002630-1156 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156
	I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106011107-6492 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492
	I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210306072141-12056 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056
	I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310083645-5040 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040
	I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210123004019-5372 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372
	I0310 21:28:19.793478    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210112045103-7160 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160
	I0310 21:28:19.793478    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210212145109-352 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352
	I0310 21:28:19.793478    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210119220838-6552 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552
	I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210105233232-2512 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512
	I0310 21:28:19.793478    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115191024-3516 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516
	I0310 21:28:19.793478    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210224014800-800 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800
	I0310 21:28:19.793478    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210308233820-5396 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396
	I0310 21:28:19.793643    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219145454-9520 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520
	I0310 21:28:19.793478    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210309234032-4944 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944
	I0310 21:28:19.809178    7648 out.go:129] * Verifying Kubernetes components...
	I0310 21:28:19.812851    7648 addons.go:58] Setting default-storageclass=true in profile "cilium-20210310211546-6496"
	I0310 21:28:19.812851    7648 addons.go:58] Setting storage-provisioner=true in profile "cilium-20210310211546-6496"
	I0310 21:28:19.813288    7648 addons.go:284] enableOrDisableStorageClasses default-storageclass=true on "cilium-20210310211546-6496"
	I0310 21:28:19.813288    7648 addons.go:134] Setting addon storage-provisioner=true in "cilium-20210310211546-6496"
	W0310 21:28:19.813631    7648 addons.go:143] addon storage-provisioner should already be in state true
	I0310 21:28:19.814146    7648 host.go:66] Checking if "cilium-20210310211546-6496" exists ...
	I0310 21:28:20.208892    7648 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0310 21:28:20.497723    7648 cli_runner.go:115] Run: docker container inspect cilium-20210310211546-6496 --format={{.State.Status}}
	I0310 21:28:20.497723    7648 cli_runner.go:115] Run: docker container inspect cilium-20210310211546-6496 --format={{.State.Status}}
	I0310 21:28:20.863475    7648 cache.go:93] acquiring lock: {Name:mk30e0addf8d941e729fce2e9e6e58f4831fa9bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:28:20.863475    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 exists
	I0310 21:28:20.864479    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210115023213-8464" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210115023213-8464" took 1.0533736s
	I0310 21:28:20.864479    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210115023213-8464 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 succeeded
	I0310 21:28:20.891987    7648 cache.go:93] acquiring lock: {Name:mkab31196e3bf71b9c1e6a1e38e57ec6fb030bbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:28:20.892546    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 exists
	I0310 21:28:20.892546    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210220004129-7452" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210220004129-7452" took 1.081573s
	I0310 21:28:20.892546    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210220004129-7452 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 succeeded
	I0310 21:28:20.973828    7648 cache.go:93] acquiring lock: {Name:mk17b3617b8bc7c68f0fe3347037485ee44000e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:28:20.974892    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 exists
	I0310 21:28:20.974892    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210225231842-5736" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210225231842-5736" took 1.1644215s
	I0310 21:28:20.974892    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210225231842-5736 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 succeeded
	I0310 21:28:21.026485    7648 cache.go:93] acquiring lock: {Name:mkf6f90f079186654799fde8101b48612aa6f339 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:28:21.027420    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 exists
	I0310 21:28:21.028326    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210212145109-352" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210212145109-352" took 1.2129546s
	I0310 21:28:21.028326    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210212145109-352 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 succeeded
	I0310 21:28:21.050049    7648 cache.go:93] acquiring lock: {Name:mk634154e9c95d6e5b156154f097cbabdedf9f3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:28:21.051052    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 exists
	I0310 21:28:21.051052    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210301195830-5700" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210301195830-5700" took 1.2382055s
	I0310 21:28:21.051052    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210301195830-5700 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 succeeded
	I0310 21:28:21.105056    7648 cache.go:93] acquiring lock: {Name:mk413751f23d1919a2f2162501025c6af3a2ad81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:28:21.105056    7648 cache.go:93] acquiring lock: {Name:mkfbc537176e4a7054a8ff78a35c4c45ad4889d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:28:21.105056    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 exists
	I0310 21:28:21.106535    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210310191609-6496" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210310191609-6496" took 1.2952268s
	I0310 21:28:21.106535    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210310191609-6496 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 succeeded
	I0310 21:28:21.107045    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 exists
	I0310 21:28:21.107532    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210106002159-6856" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106002159-6856" took 1.2913925s
	I0310 21:28:21.107532    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106002159-6856 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 succeeded
	I0310 21:28:21.175747    7648 cache.go:93] acquiring lock: {Name:mk6e311fb193a5d30b249afa7255673dd7fc56b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:28:21.176816    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 exists
	I0310 21:28:21.176816    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210107002220-9088" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210107002220-9088" took 1.3619951s
	I0310 21:28:21.177705    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210107002220-9088 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 succeeded
	I0310 21:28:21.198918    7648 cache.go:93] acquiring lock: {Name:mk5795abf13cc8b7192a417aee0e32dee2b0467c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:28:21.199496    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 exists
	I0310 21:28:21.199946    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210126212539-5172" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210126212539-5172" took 1.382817s
	I0310 21:28:21.200139    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210126212539-5172 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 succeeded
	I0310 21:28:21.239322    7648 cache.go:93] acquiring lock: {Name:mk6cdb668632330066d74bea74662e26e6c7633f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:28:21.239965    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 exists
	I0310 21:28:21.240351    7648 cache.go:93] acquiring lock: {Name:mk67b81c694fa10d152b7bddece57d430edf9ebf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:28:21.240732    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 exists
	I0310 21:28:21.241718    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210106215525-1984" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106215525-1984" took 1.4135546s
	I0310 21:28:21.241718    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106215525-1984 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 succeeded
	I0310 21:28:21.241718    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210308233820-5396" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210308233820-5396" took 1.4149947s
	I0310 21:28:21.245744    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210308233820-5396 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 succeeded
	I0310 21:28:21.245744    7648 cache.go:93] acquiring lock: {Name:mkc9a1c11079e53fedb3439203deb8305be63b2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:28:21.245744    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 exists
	I0310 21:28:21.248697    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210303214129-4588" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210303214129-4588" took 1.4333263s
	I0310 21:28:21.248697    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210303214129-4588 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 succeeded
	I0310 21:28:21.264962    7648 cache.go:93] acquiring lock: {Name:mkfe8ccab311cf6d2666a7508a8e979857b9770b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:28:21.266121    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 exists
	I0310 21:28:21.266461    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210219145454-9520" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210219145454-9520" took 1.4355182s
	I0310 21:28:21.266608    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210219145454-9520 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 succeeded
	I0310 21:28:21.295668    7648 cache.go:93] acquiring lock: {Name:mk0c64ba734a0cdbeae55b08bb0b1b6723a680c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:28:21.303984    7648 cache.go:93] acquiring lock: {Name:mka2d29141752ca0c15ce625b99d3e259a454634 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:28:21.304752    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 exists
	I0310 21:28:21.305040    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 exists
	I0310 21:28:21.305040    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210105233232-2512" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210105233232-2512" took 1.4739441s
	I0310 21:28:21.305040    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210105233232-2512 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 succeeded
	I0310 21:28:21.305451    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210310083645-5040" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210310083645-5040" took 1.4702551s
	I0310 21:28:21.305451    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210310083645-5040 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 succeeded
	I0310 21:28:21.317376    7648 cache.go:93] acquiring lock: {Name:mk1b277a131d0149dc1f34c6a5df09591c284c3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:28:21.317376    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 exists
	I0310 21:28:21.318355    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210128021318-232" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210128021318-232" took 1.504457s
	I0310 21:28:21.318355    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210128021318-232 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 succeeded
	I0310 21:28:21.324181    7648 cache.go:93] acquiring lock: {Name:mkb552f0ca2d9ea9965feba56885295e4020632a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:28:21.325950    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 exists
	I0310 21:28:21.326255    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210106011107-6492" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106011107-6492" took 1.4956528s
	I0310 21:28:21.326255    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106011107-6492 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 succeeded
	I0310 21:28:21.349504    7648 cache.go:93] acquiring lock: {Name:mk9829358ec5b615719a34ef2b4c8c5314131bbf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:28:21.349638    7648 cache.go:93] acquiring lock: {Name:mk84b2a6095b735cf889c519b5874f080b2e195a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:28:21.350030    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 exists
	I0310 21:28:21.350030    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210309234032-4944" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210309234032-4944" took 1.5157306s
	I0310 21:28:21.350457    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210309234032-4944 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 succeeded
	I0310 21:28:21.350592    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 exists
	I0310 21:28:21.350592    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210219220622-3920" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210219220622-3920" took 1.5238696s
	I0310 21:28:21.350592    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210219220622-3920 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 succeeded
	I0310 21:28:21.372271    7648 cache.go:93] acquiring lock: {Name:mk3b31b5d9c66e58bae5a84d594af5a71c06fef6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:28:21.372980    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 exists
	I0310 21:28:21.373359    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210114204234-6692" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210114204234-6692" took 1.5430907s
	I0310 21:28:21.373359    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210114204234-6692 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 succeeded
	I0310 21:28:21.424965    7648 cache.go:93] acquiring lock: {Name:mk3f9eb5a6922e3da2b5e642fe1460b5c7a33453 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:28:21.425569    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 exists
	I0310 21:28:21.425569    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210107190945-8748" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210107190945-8748" took 1.596691s
	I0310 21:28:21.426211    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210107190945-8748 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 succeeded
	I0310 21:28:21.443796    7648 cache.go:93] acquiring lock: {Name:mkd8dd26dee4471c50a16459e3e56a843fbe7183 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:28:21.444310    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 exists
	I0310 21:28:21.444577    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210120231122-7024" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120231122-7024" took 1.631732s
	I0310 21:28:21.444788    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120231122-7024 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 succeeded
	I0310 21:28:21.455540    7648 cache.go:93] acquiring lock: {Name:mkcc9db267470950a8bd1fd66660e4d7ce7fb11a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:28:21.456286    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 exists
	I0310 21:28:21.456817    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210120175851-7432" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120175851-7432" took 1.625409s
	I0310 21:28:21.456817    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120175851-7432 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 succeeded
	I0310 21:28:21.460711    7648 cache.go:93] acquiring lock: {Name:mkb0cb73f942a657cd3f168830d30cb3598567a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:28:21.460923    7648 cache.go:93] acquiring lock: {Name:mkf74fc1bdd437dc31195924ffc024252ed6282c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:28:21.461366    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 exists
	I0310 21:28:21.461580    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 exists
	I0310 21:28:21.461794    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210306072141-12056" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210306072141-12056" took 1.626242s
	I0310 21:28:21.462024    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210306072141-12056 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 succeeded
	I0310 21:28:21.462691    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210304002630-1156" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210304002630-1156" took 1.6333687s
	I0310 21:28:21.462691    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210304002630-1156 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 succeeded
	I0310 21:28:21.462911    7648 cache.go:93] acquiring lock: {Name:mk5aaf725ee95074b60d5acdb56999da11d0d967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:28:21.463336    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 exists
	I0310 21:28:21.463550    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210213143925-7440" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210213143925-7440" took 1.6277182s
	I0310 21:28:21.463550    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210213143925-7440 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 succeeded
	I0310 21:28:21.494775    7648 cache.go:93] acquiring lock: {Name:mk5d79a216b121a22277fa476959e69d0268a006 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:28:21.494993    7648 cache.go:93] acquiring lock: {Name:mkf96894dc732adcd1c856f98a56d65b2646f03e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:28:21.495482    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 exists
	I0310 21:28:21.495699    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 exists
	I0310 21:28:21.496087    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210224014800-800" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210224014800-800" took 1.6693645s
	I0310 21:28:21.496290    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210224014800-800 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 succeeded
	I0310 21:28:21.496087    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210115191024-3516" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210115191024-3516" took 1.6640083s
	I0310 21:28:21.496290    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210115191024-3516 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 succeeded
	I0310 21:28:21.511835    7648 cache.go:93] acquiring lock: {Name:mkad0f7b57f74c6c730129cb06800211b2e1dbab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:28:21.513325    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 exists
	I0310 21:28:21.513325    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210120022529-1140" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120022529-1140" took 1.700044s
	I0310 21:28:21.513750    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120022529-1140 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 succeeded
	I0310 21:28:21.515745    7648 cache.go:93] acquiring lock: {Name:mk74beba772a17b6c0792b37e1f3c84b8ae19a48 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:28:21.516322    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 exists
	I0310 21:28:21.516623    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210119220838-6552" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210119220838-6552" took 1.6994954s
	I0310 21:28:21.516989    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210119220838-6552 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 succeeded
	I0310 21:28:21.519352    7648 cache.go:93] acquiring lock: {Name:mkbc5485bf0e792523a58cf470a7622695547966 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:28:21.519809    7648 cache.go:93] acquiring lock: {Name:mk5de4935501776b790bd29801e913c817cce9cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:28:21.519809    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 exists
	I0310 21:28:21.520419    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 exists
	I0310 21:28:21.520419    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210304184021-4052" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210304184021-4052" took 1.6852243s
	I0310 21:28:21.520648    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210304184021-4052 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 succeeded
	I0310 21:28:21.520648    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210123004019-5372" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210123004019-5372" took 1.6848169s
	I0310 21:28:21.521508    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210123004019-5372 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 succeeded
	I0310 21:28:21.523222    7648 cache.go:93] acquiring lock: {Name:mk6a939d4adc5b1a82c643cd3a34748a52c3e47d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:28:21.524080    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 exists
	I0310 21:28:21.524834    7648 cache.go:93] acquiring lock: {Name:mkd8c6f272dd5cb91af2d272705820baa75c5410 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:28:21.525056    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210112045103-7160" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210112045103-7160" took 1.7109168s
	I0310 21:28:21.526382    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210112045103-7160 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 succeeded
	I0310 21:28:21.525442    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 exists
	I0310 21:28:21.526973    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210120214442-10992" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120214442-10992" took 1.6922581s
	I0310 21:28:21.527211    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120214442-10992 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 succeeded
	I0310 21:28:21.527211    7648 cache.go:73] Successfully saved all images to host disk.
	I0310 21:28:21.551952    7648 cli_runner.go:115] Run: docker container inspect cilium-20210310211546-6496 --format={{.State.Status}}
	I0310 21:28:21.966115    7648 cli_runner.go:168] Completed: docker container inspect cilium-20210310211546-6496 --format={{.State.Status}}: (1.4683977s)
	I0310 21:28:22.001084    7648 cli_runner.go:168] Completed: docker container inspect cilium-20210310211546-6496 --format={{.State.Status}}: (1.5031826s)
	I0310 21:28:22.018720    7648 out.go:129]   - Using image gcr.io/k8s-minikube/storage-provisioner:v4
	I0310 21:28:22.028284    7648 addons.go:253] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0310 21:28:22.028284    7648 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0310 21:28:22.049708    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:28:22.258533    7648 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 21:28:22.272536    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:28:22.535241    7648 ssh_runner.go:189] Completed: sudo systemctl is-active --quiet service kubelet: (2.3263576s)
	I0310 21:28:22.551013    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:28:22.654920    7648 addons.go:134] Setting addon default-storageclass=true in "cilium-20210310211546-6496"
	W0310 21:28:22.655355    7648 addons.go:143] addon default-storageclass should already be in state true
	I0310 21:28:22.655908    7648 host.go:66] Checking if "cilium-20210310211546-6496" exists ...
	I0310 21:28:22.678815    7648 cli_runner.go:115] Run: docker container inspect cilium-20210310211546-6496 --format={{.State.Status}}
	I0310 21:28:22.824948    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:28:22.968631    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:28:23.331897    7648 pod_ready.go:36] extra waiting for kube-system core pods [kube-dns etcd kube-apiserver kube-controller-manager kube-proxy kube-scheduler] to be Ready ...
	I0310 21:28:23.331897    7648 pod_ready.go:59] waiting 5m0s for pod with "kube-dns" label in "kube-system" namespace to be Ready ...
	I0310 21:28:23.368194    7648 addons.go:253] installing /etc/kubernetes/addons/storageclass.yaml
	I0310 21:28:23.368194    7648 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0310 21:28:23.378006    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:28:24.055838    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:28:25.431107    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:28:28.137134    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:28:29.259370    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:28:30.619561    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:28:31.734666    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:28:32.648538    7648 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0310 21:28:33.160365    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:28:33.538386    7648 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0310 21:28:34.282900    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:28:35.643829    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:28:37.140021    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:28:40.061841    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000b1b8d0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:28:41.221772    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000f52160}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:28:42.304185    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0010a1610}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:28:44.142855    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00130f3b0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:28:45.313438    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001597bf0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:28:46.388229    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00171e510}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:28:47.743692    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000e906c0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:28:49.279854    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000597ab0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:28:51.877458    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000ef4560}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:28:53.350826    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0011a1f00}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:28:54.711655    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001349c30}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:28:56.114794    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0012b3e70}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:28:57.304422    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0015fbb80}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:28:58.373379    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001767090}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:29:00.355329    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000e809b0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:29:01.625639    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0012e0ff0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:29:03.112935    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000d90b00}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:29:04.324221    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0011ac2c0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:29:06.235285    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000f58e10}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:29:07.175951    7648 ssh_runner.go:189] Completed: docker images --format {{.Repository}}:{{.Tag}}: (44.9175801s)
	I0310 21:29:07.176476    7648 docker.go:423] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.20.2
	k8s.gcr.io/kube-controller-manager:v1.20.2
	k8s.gcr.io/kube-apiserver:v1.20.2
	k8s.gcr.io/kube-scheduler:v1.20.2
	kubernetesui/dashboard:v2.1.0
	gcr.io/k8s-minikube/storage-provisioner:v4
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0310 21:29:07.176476    7648 docker.go:429] minikube-local-cache-test:functional-20210105233232-2512 wasn't preloaded
	I0310 21:29:07.176476    7648 cache_images.go:76] LoadImages start: [minikube-local-cache-test:functional-20210105233232-2512 minikube-local-cache-test:functional-20210115023213-8464 minikube-local-cache-test:functional-20210120022529-1140 minikube-local-cache-test:functional-20210128021318-232 minikube-local-cache-test:functional-20210107002220-9088 minikube-local-cache-test:functional-20210220004129-7452 minikube-local-cache-test:functional-20210303214129-4588 minikube-local-cache-test:functional-20210219220622-3920 minikube-local-cache-test:functional-20210304184021-4052 minikube-local-cache-test:functional-20210106002159-6856 minikube-local-cache-test:functional-20210106011107-6492 minikube-local-cache-test:functional-20210123004019-5372 minikube-local-cache-test:functional-20210212145109-352 minikube-local-cache-test:functional-20210112045103-7160 minikube-local-cache-test:functional-20210119220838-6552 minikube-local-cache-test:functional-20210301195830-5700 minikube-local-cache-test:functional-20210304002630-1156 minikube-local-cache-test:functional-20210306072141-12056 minikube-local-cache-test:functional-20210310083645-5040 minikube-local-cache-test:functional-20210106215525-1984 minikube-local-cache-test:functional-20210115191024-3516 minikube-local-cache-test:functional-20210219145454-9520 minikube-local-cache-test:functional-20210224014800-800 minikube-local-cache-test:functional-20210308233820-5396 minikube-local-cache-test:functional-20210309234032-4944 minikube-local-cache-test:functional-20210114204234-6692 minikube-local-cache-test:functional-20210120175851-7432 minikube-local-cache-test:functional-20210120214442-10992 minikube-local-cache-test:functional-20210126212539-5172 minikube-local-cache-test:functional-20210107190945-8748 minikube-local-cache-test:functional-20210213143925-7440 minikube-local-cache-test:functional-20210310191609-6496 minikube-local-cache-test:functional-20210120231122-7024 minikube-local-cache-test:functional-20210225231842-5736]
	I0310 21:29:07.247655    7648 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210308233820-5396
	I0310 21:29:07.255269    7648 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120022529-1140
	I0310 21:29:07.290948    7648 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210303214129-4588
	I0310 21:29:07.299085    7648 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120175851-7432
	I0310 21:29:07.301088    7648 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210219145454-9520
	I0310 21:29:07.311489    7648 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210115023213-8464
	I0310 21:29:07.318482    7648 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210306072141-12056
	I0310 21:29:07.339838    7648 image.go:168] retrieving image: minikube-local-cache-test:functional-20210105233232-2512
	I0310 21:29:07.339838    7648 image.go:168] retrieving image: minikube-local-cache-test:functional-20210112045103-7160
	I0310 21:29:07.381821    7648 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210119220838-6552
	I0310 21:29:07.414796    7648 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210112045103-7160: Error response from daemon: reference does not exist
	I0310 21:29:07.438906    7648 image.go:168] retrieving image: minikube-local-cache-test:functional-20210106215525-1984
	I0310 21:29:07.443742    7648 image.go:168] retrieving image: minikube-local-cache-test:functional-20210106002159-6856
	I0310 21:29:07.477911    7648 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210114204234-6692
	I0310 21:29:07.481914    7648 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210219220622-3920
	I0310 21:29:07.482920    7648 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210304002630-1156
	I0310 21:29:07.508226    7648 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210212145109-352
	I0310 21:29:07.520813    7648 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210106002159-6856: Error response from daemon: reference does not exist
	I0310 21:29:07.529517    7648 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120214442-10992
	I0310 21:29:07.538874    7648 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210213143925-7440
	I0310 21:29:07.564257    7648 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210115191024-3516
	I0310 21:29:07.564257    7648 image.go:168] retrieving image: minikube-local-cache-test:functional-20210107190945-8748
	I0310 21:29:07.582641    7648 image.go:168] retrieving image: minikube-local-cache-test:functional-20210106011107-6492
	I0310 21:29:07.610659    7648 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210304184021-4052
	I0310 21:29:07.620307    7648 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210310191609-6496
	I0310 21:29:07.620307    7648 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210224014800-800
	W0310 21:29:07.625730    7648 image.go:185] authn lookup for minikube-local-cache-test:functional-20210112045103-7160 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 21:29:07.638172    7648 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210107190945-8748: Error response from daemon: reference does not exist
	I0310 21:29:07.663188    7648 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210301195830-5700
	W0310 21:29:07.679587    7648 image.go:185] authn lookup for minikube-local-cache-test:functional-20210106002159-6856 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 21:29:07.697751    7648 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120231122-7024
	I0310 21:29:07.726814    7648 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210220004129-7452
	I0310 21:29:07.754744    7648 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210106011107-6492: Error response from daemon: reference does not exist
	I0310 21:29:07.760100    7648 image.go:168] retrieving image: minikube-local-cache-test:functional-20210107002220-9088
	I0310 21:29:07.762688    7648 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210123004019-5372
	I0310 21:29:07.762688    7648 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210310083645-5040
	I0310 21:29:07.773934    7648 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210225231842-5736
	I0310 21:29:07.808531    7648 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210107002220-9088: Error response from daemon: reference does not exist
	I0310 21:29:07.818075    7648 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210106215525-1984: Error response from daemon: reference does not exist
	I0310 21:29:07.819365    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0016a0860}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:29:07.828903    7648 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210126212539-5172
	W0310 21:29:07.832926    7648 image.go:185] authn lookup for minikube-local-cache-test:functional-20210107190945-8748 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 21:29:07.851899    7648 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210105233232-2512: Error response from daemon: reference does not exist
	I0310 21:29:07.862363    7648 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210309234032-4944
	I0310 21:29:07.873411    7648 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210128021318-232
	W0310 21:29:07.925009    7648 image.go:185] authn lookup for minikube-local-cache-test:functional-20210106011107-6492 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 21:29:07.957446    7648 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210106002159-6856 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210106002159-6856: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 21:29:07.957656    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210106002159-6856" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210106002159-6856
	I0310 21:29:07.957656    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106002159-6856 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856
	I0310 21:29:07.957656    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856
	I0310 21:29:07.973596    7648 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210112045103-7160 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210112045103-7160: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 21:29:07.973596    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210112045103-7160" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210112045103-7160
	I0310 21:29:07.973596    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210112045103-7160 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160
	I0310 21:29:07.973596    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160
	I0310 21:29:07.977472    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856
	I0310 21:29:07.985449    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160
	W0310 21:29:07.993431    7648 image.go:185] authn lookup for minikube-local-cache-test:functional-20210107002220-9088 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0310 21:29:08.011621    7648 image.go:185] authn lookup for minikube-local-cache-test:functional-20210106215525-1984 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 21:29:08.011621    7648 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210107190945-8748 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210107190945-8748: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 21:29:08.012422    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210107190945-8748" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210107190945-8748
	I0310 21:29:08.012422    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107190945-8748 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748
	I0310 21:29:08.012422    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748
	W0310 21:29:08.019510    7648 image.go:185] authn lookup for minikube-local-cache-test:functional-20210105233232-2512 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 21:29:08.021502    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748
	I0310 21:29:08.102733    7648 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210106011107-6492 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210106011107-6492: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 21:29:08.102733    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210106011107-6492" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210106011107-6492
	I0310 21:29:08.102733    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106011107-6492 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492
	I0310 21:29:08.102733    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492
	I0310 21:29:08.111748    7648 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210107002220-9088 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210107002220-9088: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 21:29:08.111748    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210107002220-9088" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210107002220-9088
	I0310 21:29:08.111748    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107002220-9088 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088
	I0310 21:29:08.111748    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088
	I0310 21:29:08.122034    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492
	I0310 21:29:08.122034    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088
	I0310 21:29:08.148352    7648 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210106215525-1984 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210106215525-1984: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 21:29:08.148652    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210106215525-1984" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210106215525-1984
	I0310 21:29:08.149109    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106215525-1984 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984
	I0310 21:29:08.149109    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984
	I0310 21:29:08.165622    7648 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210105233232-2512 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210105233232-2512: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 21:29:08.165622    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210105233232-2512" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210105233232-2512
	I0310 21:29:08.165622    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210105233232-2512 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512
	I0310 21:29:08.165622    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512
	I0310 21:29:08.169242    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984
	I0310 21:29:08.176244    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512
	I0310 21:29:09.567317    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000d2f450}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	W0310 21:29:10.051982    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:29:10.051982    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210224014800-800" needs transfer: "minikube-local-cache-test:functional-20210224014800-800" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:29:10.051982    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210224014800-800 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800
	I0310 21:29:10.051982    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800
	W0310 21:29:10.051982    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:29:10.051982    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:29:10.060493    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210304002630-1156" needs transfer: "minikube-local-cache-test:functional-20210304002630-1156" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:29:10.060493    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210304184021-4052" needs transfer: "minikube-local-cache-test:functional-20210304184021-4052" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	W0310 21:29:10.060493    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:29:10.060493    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304184021-4052 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052
	I0310 21:29:10.060493    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052
	I0310 21:29:10.060493    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304002630-1156 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156
	I0310 21:29:10.060690    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156
	W0310 21:29:10.060690    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:29:10.060690    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210115191024-3516" needs transfer: "minikube-local-cache-test:functional-20210115191024-3516" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	W0310 21:29:10.060493    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:29:10.051982    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:29:10.060690    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115191024-3516 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516
	I0310 21:29:10.061176    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516
	I0310 21:29:10.060493    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210120214442-10992" needs transfer: "minikube-local-cache-test:functional-20210120214442-10992" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:29:10.061176    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120214442-10992 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992
	I0310 21:29:10.061632    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992
	I0310 21:29:10.060690    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210213143925-7440" needs transfer: "minikube-local-cache-test:functional-20210213143925-7440" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	W0310 21:29:10.060493    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:29:10.062091    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210212145109-352" needs transfer: "minikube-local-cache-test:functional-20210212145109-352" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:29:10.060690    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210310191609-6496" needs transfer: "minikube-local-cache-test:functional-20210310191609-6496" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:29:10.062091    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210212145109-352 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352
	I0310 21:29:10.062091    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352
	I0310 21:29:10.062370    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310191609-6496 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496
	I0310 21:29:10.062091    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210213143925-7440 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440
	I0310 21:29:10.062370    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440
	I0310 21:29:10.062370    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496
	I0310 21:29:10.179063    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516
	I0310 21:29:10.193864    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800
	I0310 21:29:10.196851    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:29:10.197950    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052
	I0310 21:29:10.198469    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156
	I0310 21:29:10.212681    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:29:10.216143    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:29:10.222593    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:29:10.231927    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352
	I0310 21:29:10.237399    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496
	I0310 21:29:10.244314    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440
	I0310 21:29:10.245969    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992
	I0310 21:29:10.271257    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:29:10.272296    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:29:10.274661    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:29:10.279212    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:29:10.852741    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0018aaeb0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:29:11.106054    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:29:11.111331    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:29:11.132864    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:29:11.225802    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:29:11.237782    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:29:11.238475    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.0253617s)
	I0310 21:29:11.238475    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:29:11.240376    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.0177872s)
	I0310 21:29:11.240671    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:29:11.262814    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	W0310 21:29:12.218820    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:29:12.218820    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:29:12.219037    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:29:12.219037    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:29:12.219037    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160: NewSession: ssh: rejected: connect failed (open failed)
	I0310 21:29:12.219037    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210225231842-5736" needs transfer: "minikube-local-cache-test:functional-20210225231842-5736" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	W0310 21:29:12.219037    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:29:12.219037    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210225231842-5736 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736
	I0310 21:29:12.219037    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736
	I0310 21:29:12.219037    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492: NewSession: ssh: rejected: connect failed (open failed)
	I0310 21:29:12.219037    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160 (4096 bytes)
	I0310 21:29:12.219037    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492 (4096 bytes)
	W0310 21:29:12.219037    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:29:12.219303    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210128021318-232" needs transfer: "minikube-local-cache-test:functional-20210128021318-232" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:29:12.219303    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210128021318-232 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232
	I0310 21:29:12.219303    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232
	W0310 21:29:12.219303    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:29:12.219303    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856: NewSession: ssh: rejected: connect failed (open failed)
	I0310 21:29:12.219523    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856 (4096 bytes)
	I0310 21:29:12.219037    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512: NewSession: ssh: rejected: connect failed (open failed)
	I0310 21:29:12.221131    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512 (4096 bytes)
	I0310 21:29:12.219037    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210310083645-5040" needs transfer: "minikube-local-cache-test:functional-20210310083645-5040" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	W0310 21:29:12.219037    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:29:12.218820    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:29:12.218820    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:29:12.218820    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:29:12.218820    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:29:12.221131    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:29:12.221357    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088: NewSession: ssh: rejected: connect failed (open failed)
	W0310 21:29:12.221357    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:29:12.221357    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088 (4096 bytes)
	I0310 21:29:12.221357    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210309234032-4944" needs transfer: "minikube-local-cache-test:functional-20210309234032-4944" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:29:12.221357    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210309234032-4944 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944
	I0310 21:29:12.221357    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944
	W0310 21:29:12.218820    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:29:12.221679    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210301195830-5700" needs transfer: "minikube-local-cache-test:functional-20210301195830-5700" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:29:12.221679    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210301195830-5700 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700
	I0310 21:29:12.221679    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748: NewSession: ssh: rejected: connect failed (open failed)
	I0310 21:29:12.221679    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748 (4096 bytes)
	W0310 21:29:12.221131    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:29:12.222371    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984: NewSession: ssh: rejected: connect failed (open failed)
	I0310 21:29:12.222647    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984 (4096 bytes)
	I0310 21:29:12.221679    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700
	I0310 21:29:12.221679    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310083645-5040 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040
	I0310 21:29:12.223634    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040
	I0310 21:29:12.223982    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210120231122-7024" needs transfer: "minikube-local-cache-test:functional-20210120231122-7024" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:29:12.224075    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120231122-7024 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024
	I0310 21:29:12.224335    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024
	I0310 21:29:12.225305    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210123004019-5372" needs transfer: "minikube-local-cache-test:functional-20210123004019-5372" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:29:12.225305    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210123004019-5372 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372
	I0310 21:29:12.225305    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372
	I0310 21:29:12.225305    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210220004129-7452" needs transfer: "minikube-local-cache-test:functional-20210220004129-7452" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:29:12.225305    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210220004129-7452 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452
	I0310 21:29:12.225305    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452
	I0310 21:29:12.225768    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210126212539-5172" needs transfer: "minikube-local-cache-test:functional-20210126212539-5172" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:29:12.225768    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210126212539-5172 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172
	I0310 21:29:12.226128    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172
	I0310 21:29:12.391924    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:29:12.416122    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:29:12.441889    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736
	I0310 21:29:12.442983    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232
	I0310 21:29:12.450184    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:29:12.469003    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:29:12.470012    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:29:12.472269    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944
	I0310 21:29:12.473442    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040
	I0310 21:29:12.478917    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:29:12.482708    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:29:12.482708    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:29:12.482708    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452
	I0310 21:29:12.489532    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:29:12.494515    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:29:12.494515    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:29:12.512965    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372
	I0310 21:29:12.523392    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:29:12.531282    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700
	I0310 21:29:12.532240    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024
	I0310 21:29:12.536332    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172
	I0310 21:29:12.586607    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:29:12.593200    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:29:12.601819    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:29:12.614808    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:29:13.364764    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00025f1e0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:29:13.664797    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.1858841s)
	I0310 21:29:13.665215    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:29:13.676365    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2073662s)
	I0310 21:29:13.676365    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:29:13.692939    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2423118s)
	I0310 21:29:13.693435    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:29:13.706955    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.224251s)
	I0310 21:29:13.706955    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:29:13.710366    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2208376s)
	I0310 21:29:13.710877    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:29:13.752581    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.3364631s)
	I0310 21:29:13.752581    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:29:13.783528    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.26014s)
	I0310 21:29:13.783928    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:29:13.798563    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2053673s)
	I0310 21:29:13.798830    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:29:13.847147    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.3522633s)
	I0310 21:29:13.847147    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.3644439s)
	I0310 21:29:13.847147    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:29:13.847147    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:29:13.847147    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.4552282s)
	I0310 21:29:13.848379    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:29:13.895821    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.4258137s)
	I0310 21:29:13.895821    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:29:13.909373    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2945693s)
	I0310 21:29:13.910097    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:29:13.920958    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.3343558s)
	I0310 21:29:13.921784    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:29:13.960114    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.4652305s)
	I0310 21:29:13.961080    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:29:14.057432    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.4556187s)
	I0310 21:29:14.057820    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:29:16.078502    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00123c780}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:29:17.175614    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001785970}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:29:17.177938    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210303214129-4588" needs transfer: "minikube-local-cache-test:functional-20210303214129-4588" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:29:17.178108    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210303214129-4588 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588
	I0310 21:29:17.178108    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588
	I0310 21:29:17.180645    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210120022529-1140" needs transfer: "minikube-local-cache-test:functional-20210120022529-1140" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:29:17.180645    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120022529-1140 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140
	I0310 21:29:17.180645    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140
	I0310 21:29:17.180645    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210306072141-12056" needs transfer: "minikube-local-cache-test:functional-20210306072141-12056" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:29:17.180645    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210306072141-12056 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056
	I0310 21:29:17.180645    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056
	I0310 21:29:17.180645    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210114204234-6692" needs transfer: "minikube-local-cache-test:functional-20210114204234-6692" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:29:17.180645    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210114204234-6692 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692
	I0310 21:29:17.180645    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692
	I0310 21:29:17.180645    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210219220622-3920" needs transfer: "minikube-local-cache-test:functional-20210219220622-3920" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:29:17.180645    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219220622-3920 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920
	I0310 21:29:17.180645    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920
	I0310 21:29:17.180645    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210308233820-5396" needs transfer: "minikube-local-cache-test:functional-20210308233820-5396" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:29:17.180645    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210308233820-5396 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396
	I0310 21:29:17.180645    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396
	I0310 21:29:17.182625    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210219145454-9520" needs transfer: "minikube-local-cache-test:functional-20210219145454-9520" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:29:17.182625    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219145454-9520 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520
	I0310 21:29:17.182625    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520
	I0310 21:29:17.182625    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210115023213-8464" needs transfer: "minikube-local-cache-test:functional-20210115023213-8464" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:29:17.182625    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115023213-8464 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464
	I0310 21:29:17.182625    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464
	I0310 21:29:17.183941    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210120175851-7432" needs transfer: "minikube-local-cache-test:functional-20210120175851-7432" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:29:17.183941    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120175851-7432 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432
	I0310 21:29:17.183941    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432
	I0310 21:29:17.189694    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210119220838-6552" needs transfer: "minikube-local-cache-test:functional-20210119220838-6552" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:29:17.189694    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210119220838-6552 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552
	I0310 21:29:17.189694    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552
	I0310 21:29:17.218621    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588
	I0310 21:29:17.263615    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140
	I0310 21:29:17.277604    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056
	I0310 21:29:17.278608    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920
	I0310 21:29:17.282614    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692
	I0310 21:29:17.303499    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520
	I0310 21:29:17.304677    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:29:17.310571    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432
	I0310 21:29:17.310571    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396
	I0310 21:29:17.310571    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552
	I0310 21:29:17.311498    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464
	I0310 21:29:17.312495    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:29:17.353686    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:29:17.357459    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:29:17.363637    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:29:17.398564    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:29:17.400613    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:29:17.401567    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:29:17.402557    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:29:17.405567    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:29:18.456760    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.1520876s)
	I0310 21:29:18.457243    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:29:18.457920    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.1454284s)
	I0310 21:29:18.458224    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:29:18.566607    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.1640537s)
	I0310 21:29:18.567024    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:29:18.568617    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.168008s)
	I0310 21:29:18.568869    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:29:18.599483    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2009233s)
	I0310 21:29:18.599855    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:29:18.620002    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2661034s)
	I0310 21:29:18.620155    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:29:18.635235    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2336719s)
	I0310 21:29:18.635519    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:29:18.647351    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2835677s)
	I0310 21:29:18.647351    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:29:18.662304    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2567412s)
	I0310 21:29:18.662304    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:29:18.663296    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.3054878s)
	I0310 21:29:18.663296    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:29:19.438143    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001089920}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:29:20.840856    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00035d150}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:29:21.946471    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001bf5600}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:29:23.121919    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000d908e0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:29:24.870724    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00102d0e0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:29:30.561732    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0015489a0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:29:33.099490    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001a015b0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:29:35.502381    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000cf92e0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:29:36.972266    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000e918a0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	W0310 21:29:37.205118    7648 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:29:38.767505    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00108fc40}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:29:40.730446    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00120b910}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:29:44.014555    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0012f3520}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	W0310 21:29:46.165305    7648 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:29:46.165305    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:29:46.165742    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700 (4096 bytes)
	I0310 21:29:46.174879    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00143f1f0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:29:46.176196    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:29:46.857641    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:29:51.592157    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000d2fab0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:29:54.672584    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00169afc0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:30:03.667626    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0017dcbf0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:30:05.460012    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001a208c0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:30:08.153865    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0017e0540}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:30:10.409991    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001a0f100}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:30:12.620730    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000d91290}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	W0310 21:30:20.773783    7648 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:30:20.773783    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:30:20.774112    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588 (4096 bytes)
	I0310 21:30:20.787243    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:30:21.032714    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00102ddd0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:30:21.462566    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:30:22.325816    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0012e0cc0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:30:23.653112    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00112d630}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:30:25.839169    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000e80200}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:30:27.159158    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000597ac0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:30:28.617023    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001a20c80}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:30:29.880204    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000d49ae0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:30:35.558573    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0014c57b0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:30:36.574312    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0017dc1a0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	W0310 21:30:37.958337    7648 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	W0310 21:30:37.959521    7648 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:30:37.959521    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:30:37.959709    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:30:37.959709    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520 (4096 bytes)
	I0310 21:30:37.959709    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396 (4096 bytes)
	I0310 21:30:37.968721    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:30:37.970571    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:30:38.109030    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000d90650}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:30:38.618865    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:30:38.690273    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:30:39.355560    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00112c930}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:30:40.480378    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0007d06d0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:30:42.217113    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00035cfa0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	W0310 21:30:43.549380    7648 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:30:43.549380    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:30:43.549656    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692 (4096 bytes)
	I0310 21:30:43.564477    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	W0310 21:30:44.077044    7648 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:30:44.077656    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:30:44.078096    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464 (4096 bytes)
	I0310 21:30:44.087303    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:30:44.163232    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:30:44.314728    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00145b0d0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:30:44.761256    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:30:46.092593    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0018fd220}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:30:47.100304    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001c05030}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:30:48.146126    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0012e14a0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	W0310 21:30:48.280012    7648 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:30:48.280012    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:30:48.280012    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056 (4096 bytes)
	W0310 21:30:49.067830    7648 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:30:49.544636    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0007d0b30}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:30:50.882124    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00035d2e0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:30:53.058396    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0006fd910}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:30:54.278643    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001501470}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:30:55.708385    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00192f890}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:30:57.695727    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0018abf80}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:30:59.046519    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001548530}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:00.214652    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0015981f0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:01.517232    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001b24600}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:02.732230    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0017fb540}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:04.253849    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001597860}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:05.580103    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0017fe540}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:06.630820    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0016a0e30}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:07.643095    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0017e0600}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:10.266363    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000b06e30}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:11.871316    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001784ba0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:13.341190    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0012b77d0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:14.616648    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00138f2b0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:15.714035    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001813390}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:17.004756    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00163e810}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:18.195478    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0016b0b90}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:19.572881    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001598b20}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:20.763224    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0003fd6e0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:21.963148    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00163d480}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:23.251553    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0007d0590}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:24.640396    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0016ad740}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:25.916127    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001cff550}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:28.129882    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001c2c620}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:29.250842    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00163e550}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:30.305219    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0016b0640}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:32.262822    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001a00dc0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:34.096544    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001529a50}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:35.589197    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001a0f000}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:37.303801    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001c4bb20}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:38.601620    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001a2a520}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:39.670833    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001816580}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:41.153659    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001314720}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:43.652807    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000b06cd0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:45.348442    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00163d7c0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:46.749485    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00187a480}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:48.103660    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0019bd1e0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:49.139471    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001e24d80}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:50.550563    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00187c860}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:51.644233    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001c66d00}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:51.696778    7648 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056
	I0310 21:31:51.708190    7648 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056
	I0310 21:31:53.094205    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001a007e0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:54.286671    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0019e9200}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:55.679964    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00035d590}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:56.875809    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0012b62d0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:58.128361    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001c4b180}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:01.054308    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001480a50}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:02.366779    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001a2a420}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:04.277939    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001de2750}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:07.400576    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001816790}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:08.731678    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0014d4790}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:10.233322    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0019811c0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:11.315182    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001528ca0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:12.766454    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001557310}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:14.097402    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0017bd980}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:15.152160    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0016c6ef0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:16.963265    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001db2d90}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:19.625691    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001f7aaa0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:20.632698    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001f105b0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:21.744190    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00187db50}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:23.107869    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000d905a0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:24.272235    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00138e6c0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:24.766367    7648 pod_ready.go:62] duration metric: took 4m1.4351788s to run WaitForPodReadyByLabel for pod with "kube-dns" label in "kube-system" namespace ...
	I0310 21:32:24.766367    7648 pod_ready.go:59] waiting 5m0s for pod with "etcd" label in "kube-system" namespace to be Ready ...
	I0310 21:32:25.557738    7648 pod_ready.go:97] pod "etcd-cilium-20210310211546-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:45 +0000 GMT Reason: Message:}
	I0310 21:32:25.557889    7648 pod_ready.go:62] duration metric: took 791.5238ms to run WaitForPodReadyByLabel for pod with "etcd" label in "kube-system" namespace ...
	I0310 21:32:25.557889    7648 pod_ready.go:59] waiting 5m0s for pod with "kube-apiserver" label in "kube-system" namespace to be Ready ...
	I0310 21:32:25.919262    7648 pod_ready.go:97] pod "kube-apiserver-cilium-20210310211546-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:05 +0000 GMT Reason: Message:}
	I0310 21:32:25.919470    7648 pod_ready.go:62] duration metric: took 361.5816ms to run WaitForPodReadyByLabel for pod with "kube-apiserver" label in "kube-system" namespace ...
	I0310 21:32:25.919671    7648 pod_ready.go:59] waiting 5m0s for pod with "kube-controller-manager" label in "kube-system" namespace to be Ready ...
	I0310 21:32:27.577318    7648 pod_ready.go:97] pod "kube-controller-manager-cilium-20210310211546-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:14 +0000 GMT Reason: Message:}
	I0310 21:32:27.577463    7648 pod_ready.go:62] duration metric: took 1.6577952s to run WaitForPodReadyByLabel for pod with "kube-controller-manager" label in "kube-system" namespace ...
	I0310 21:32:27.577463    7648 pod_ready.go:59] waiting 5m0s for pod with "kube-proxy" label in "kube-system" namespace to be Ready ...
	I0310 21:32:28.860669    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001557cf9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:32:31.869964    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc00112cae9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:32:32.900627    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001e76cc9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:32:34.138659    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc002123b49}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:32:36.015352    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001609619}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:32:37.551786    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001a2b559}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:32:38.563347    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001c2de29}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:32:39.969816    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc000b06d19}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:32:41.783620    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc0015297d9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:32:44.229822    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc0006fdd59}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:32:45.311692    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc00187b6a9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:32:47.467088    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc00112c229}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:32:48.684491    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc00203cc99}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:32:49.734333    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001e7aa39}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:32:51.027046    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc00163e339}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:32:52.668831    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc000d91ef9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:32:56.592519    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc000597139}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:32:57.975732    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001816279}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:32:59.000255    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc000e91ef9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:00.423930    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc0016a09f9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:01.523391    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc0007d0039}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:02.676359    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001e608d9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:03.816507    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc0003fc5d9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:05.427597    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc00187c3a9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:08.472496    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc0017648d9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:09.951587    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc0019e9d39}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:10.951967    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc000e808c9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:12.527412    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc0012b6f09}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:13.938077    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc0017bcec9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:15.444891    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001e3e4f9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:16.979818    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001480459}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:18.408388    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001c24bb9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:19.543912    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc000d91d29}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:20.972255    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc0015999a9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:22.467968    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc0006fd129}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:23.555765    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc00163c619}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:24.908620    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc0016a1669}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:25.317942    7648 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4m51.7803583s)
	I0310 21:33:25.317942    7648 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210303214129-4588: (4m18.0273972s)
	W0310 21:33:25.317942    7648 addons.go:274] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0310 21:33:25.318337    7648 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210308233820-5396: (4m18.0713651s)
	I0310 21:33:25.318337    7648 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120022529-1140: (4m18.0637515s)
	I0310 21:33:25.318337    7648 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210115023213-8464: (4m18.0075307s)
	I0310 21:33:25.318773    7648 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120175851-7432: (4m18.0189341s)
	I0310 21:33:25.318773    7648 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210219145454-9520: (4m18.0183672s)
	I0310 21:33:25.318773    7648 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210306072141-12056: (4m18.0009736s)
	W0310 21:33:25.318773    7648 out.go:191] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	]
	I0310 21:33:25.318773    7648 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210119220838-6552: (4m17.9376341s)
	I0310 21:33:25.319314    7648 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210219220622-3920: (4m17.8380814s)
	I0310 21:33:25.319314    7648 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210114204234-6692: (4m17.8420847s)
	I0310 21:33:25.319314    7648 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352: (4m15.0880595s)
	I0310 21:33:25.319771    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352': No such file or directory
	I0310 21:33:25.319771    7648 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736: (4m12.8785462s)
	I0310 21:33:25.319771    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736': No such file or directory
	I0310 21:33:25.319771    7648 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516: (4m15.1413798s)
	I0310 21:33:25.319771    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352 (4096 bytes)
	I0310 21:33:25.319771    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736 (4096 bytes)
	I0310 21:33:25.319771    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516': No such file or directory
	I0310 21:33:25.319771    7648 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992: (4m15.0744736s)
	I0310 21:33:25.321861    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992': No such file or directory
	I0310 21:33:25.322134    7648 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372: (4m12.809478s)
	I0310 21:33:25.322316    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372': No such file or directory
	I0310 21:33:25.322316    7648 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800: (4m15.1291251s)
	I0310 21:33:25.322595    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516 (4096 bytes)
	I0310 21:33:25.322595    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800': No such file or directory
	I0310 21:33:25.322595    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372 (4096 bytes)
	I0310 21:33:25.322595    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800 (4096 bytes)
	I0310 21:33:25.319771    7648 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156: (4m15.1219737s)
	I0310 21:33:25.322134    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992 (4096 bytes)
	I0310 21:33:25.322316    7648 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052: (4m15.1250382s)
	I0310 21:33:25.322316    7648 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496: (4m15.0830439s)
	I0310 21:33:25.322884    7648 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440: (4m15.0792426s)
	I0310 21:33:25.323607    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440': No such file or directory
	I0310 21:33:25.323978    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440 (4096 bytes)
	I0310 21:33:25.323117    7648 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172: (4m12.7811906s)
	I0310 21:33:25.324262    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172': No such file or directory
	I0310 21:33:25.324585    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156': No such file or directory
	I0310 21:33:25.324750    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496': No such file or directory
	I0310 21:33:25.324585    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052': No such file or directory
	I0310 21:33:25.323274    7648 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944: (4m12.851512s)
	I0310 21:33:25.323274    7648 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552: (4m8.0124243s)
	I0310 21:33:25.323274    7648 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452: (4m12.8412309s)
	I0310 21:33:25.323274    7648 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920: (4m8.045315s)
	I0310 21:33:25.324750    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172 (4096 bytes)
	I0310 21:33:25.324750    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156 (4096 bytes)
	I0310 21:33:25.326255    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552': No such file or directory
	I0310 21:33:25.326255    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496 (4096 bytes)
	I0310 21:33:25.326255    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552 (4096 bytes)
	I0310 21:33:25.326255    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944': No such file or directory
	I0310 21:33:25.326255    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052 (4096 bytes)
	I0310 21:33:25.326255    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920': No such file or directory
	I0310 21:33:25.326255    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452': No such file or directory
	I0310 21:33:25.326255    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920 (4096 bytes)
	I0310 21:33:25.326255    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944 (4096 bytes)
	I0310 21:33:25.326255    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452 (4096 bytes)
	I0310 21:33:25.355638    7648 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4m52.7079072s)
	I0310 21:33:25.358968    7648 out.go:129] * Enabled addons: storage-provisioner
	I0310 21:33:25.358968    7648 addons.go:383] enableAddons completed in 5m5.5682388s
	W0310 21:33:25.365491    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:33:25.366276    7648 retry.go:31] will retry after 276.165072ms: ssh: rejected: connect failed (open failed)
	W0310 21:33:25.366276    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:33:25.366947    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:33:25.366947    7648 retry.go:31] will retry after 360.127272ms: ssh: rejected: connect failed (open failed)
	W0310 21:33:25.366947    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:33:25.366947    7648 retry.go:31] will retry after 291.140013ms: ssh: rejected: connect failed (open failed)
	I0310 21:33:25.366947    7648 retry.go:31] will retry after 234.428547ms: ssh: rejected: connect failed (open failed)
	W0310 21:33:25.366947    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:33:25.366947    7648 retry.go:31] will retry after 231.159374ms: ssh: rejected: connect failed (open failed)
	I0310 21:33:25.608251    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:33:25.610447    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:33:25.656358    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:33:25.667019    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:33:25.747990    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	I0310 21:33:26.093376    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001609429}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:26.440864    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:33:26.458877    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:33:26.466607    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:33:26.501432    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:33:26.523883    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	I0310 21:33:27.487906    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001c66c79}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:29.081013    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001c34819}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:30.335405    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc00079b3b9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:31.431822    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001e760b9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:32.946289    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001c2a749}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:34.518680    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001be0129}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:36.074057    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc0014d5419}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:37.408773    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc00035cbe9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:38.499196    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001089a89}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:40.137182    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc0019e89a9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:41.181568    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001f11a69}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:42.415710    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001c2dc09}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:43.428457    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001c25d79}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:44.550254    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc0017e14a9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:46.677408    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc000b06ba9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:47.718490    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc00187bef9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:49.027996    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc00123c6d9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:50.353625    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001de3c89}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:52.703529    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001c2dca9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:53.969121    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc0015981f9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:55.270942    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc0006fdea9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:56.610940    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc0016296c9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:58.039586    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc0012b6059}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:33:59.075712    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001de29e9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:00.458918    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001981eb9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:01.997037    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001c35cd9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:03.413453    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001536bf9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:04.958760    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001cbc169}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:06.272839    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc00035d9a9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:07.627332    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc0007d0029}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:09.480651    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001a00259}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:10.556634    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc0018ab4e9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:11.976019    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001de2da9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:13.062420    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc000e806a9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:15.078971    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001b25c39}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:16.537102    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc000b06779}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:17.970860    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc00163d389}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:19.314354    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001a00bd9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:20.588298    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001764e19}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:21.792775    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc00102c4a9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:22.943931    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001b24599}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:24.422747    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001088cf9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:25.493683    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001d24959}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:26.568480    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc0015363e9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:28.122883    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc000e902d9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:29.684685    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc0007d04c9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:30.922153    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001609269}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:32.041700    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc00079ac49}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:33.456048    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc0014d5de9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:34.460265    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001816329}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:35.464525    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001549be9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:37.034302    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc00163f1a9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:38.453874    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001a2ab89}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:39.457605    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc00123d299}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:40.979131    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001980d09}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:42.634705    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Nam
e:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc0016e4a99}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:43.930202    7648 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024: (5m31.3987834s)
	I0310 21:34:43.930202    7648 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232: (5m31.4880412s)
	I0310 21:34:43.930202    7648 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040: (5m31.4575821s)
	I0310 21:34:43.930202    7648 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140: (5m26.6663998s)
	I0310 21:34:43.930202    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140': No such file or directory
	I0310 21:34:43.930202    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024': No such file or directory
	I0310 21:34:43.930202    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040': No such file or directory
	I0310 21:34:43.930202    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140 (4096 bytes)
	I0310 21:34:43.930202    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232': No such file or directory
	I0310 21:34:43.930202    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040 (4096 bytes)
	I0310 21:34:43.930202    7648 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432: (5m26.6204369s)
	I0310 21:34:43.930202    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432': No such file or directory
	I0310 21:34:43.930202    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024 (4096 bytes)
	I0310 21:34:43.930202    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232 (4096 bytes)
	I0310 21:34:43.930202    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432 (4096 bytes)
	I0310 21:34:43.950366    7648 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056: (2m52.2425428s)
	I0310 21:34:43.950366    7648 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 from cache
	I0310 21:34:43.950596    7648 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088
	I0310 21:34:43.961613    7648 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088
	I0310 21:34:44.008906    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001598929}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:45.042028    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc00203d3d9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:46.469991    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc0019e88b9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:47.828799    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001c2cb39}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:48.992473    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001de2ac9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:50.438727    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001529979}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:51.449308    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc00187a209}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:52.658495    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001599eb9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:53.902685    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc00035cea9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:55.027839    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc0016a1fc9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:56.457841    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc0006fdf89}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:34:59.273788    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001d81bc9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:35:01.006439    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001549e99}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:35:02.428664    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001c2a2a9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:35:03.500313    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc00035dc29}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:35:05.107774    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc00079a119}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:35:06.664286    7648 pod_ready.go:102] pod "kube-proxy-wnw8z" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:54 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [kube-proxy]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:20 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP:172.17.0.2 PodIPs:[{IP:172.17.0.2}] StartTime:2021-03-10 21:27:54 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/kube-proxy:v1.20.2 ImageID: ContainerID: Started:0xc001be0bf9}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0310 21:35:07.168927    7648 pod_ready.go:97] pod "kube-proxy-wnw8z" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:35:05 +0000 GMT Reason: Message:}
	I0310 21:35:07.170183    7648 pod_ready.go:62] duration metric: took 2m39.5928285s to run WaitForPodReadyByLabel for pod with "kube-proxy" label in "kube-system" namespace ...
	I0310 21:35:07.170498    7648 pod_ready.go:59] waiting 5m0s for pod with "kube-scheduler" label in "kube-system" namespace to be Ready ...
	I0310 21:35:07.299602    7648 pod_ready.go:97] pod "kube-scheduler-cilium-20210310211546-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:14 +0000 GMT Reason: Message:}
	I0310 21:35:07.299869    7648 pod_ready.go:62] duration metric: took 129.3711ms to run WaitForPodReadyByLabel for pod with "kube-scheduler" label in "kube-system" namespace ...
	I0310 21:35:07.299869    7648 pod_ready.go:39] duration metric: took 6m43.9690151s for extra waiting for kube-system core pods to be Ready ...
	I0310 21:35:07.303224    7648 out.go:129] 
	W0310 21:35:07.303600    7648 out.go:191] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: "kube-dns": "wait pod Ready: timed out waiting for the condition"
	X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: "kube-dns": "wait pod Ready: timed out waiting for the condition"
	W0310 21:35:07.303885    7648 out.go:191] * 
	* 
	W0310 21:35:07.304024    7648 out.go:191] * If the above advice does not help, please let us know: 
	* If the above advice does not help, please let us know: 
	W0310 21:35:07.304188    7648 out.go:191]   - https://github.com/kubernetes/minikube/issues/new/choose
	  - https://github.com/kubernetes/minikube/issues/new/choose
	I0310 21:35:07.306616    7648 out.go:129] 

** /stderr **
net_test.go:82: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/cilium/Start (1162.47s)

TestNetworkPlugins/group/calico/Start (1153.81s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-20210310211603-6496 --memory=1800 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:80: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p calico-20210310211603-6496 --memory=1800 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker: exit status 80 (19m12.7053252s)

-- stdout --
	* [calico-20210310211603-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - MINIKUBE_LOCATION=10722
	* Using the docker driver based on user configuration
	
	
	* Starting control plane node calico-20210310211603-6496 in cluster calico-20210310211603-6496
	* Creating docker container (CPUs=2, Memory=1800MB) ...
	* Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v4
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0310 21:16:04.094174   16712 out.go:239] Setting OutFile to fd 3016 ...
	I0310 21:16:04.095160   16712 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 21:16:04.096166   16712 out.go:252] Setting ErrFile to fd 2412...
	I0310 21:16:04.096166   16712 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 21:16:04.121385   16712 out.go:246] Setting JSON to false
	I0310 21:16:04.130396   16712 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":36430,"bootTime":1615374534,"procs":120,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0310 21:16:04.130396   16712 start.go:116] gopshost.Virtualization returned error: not implemented yet
	I0310 21:16:04.137430   16712 out.go:129] * [calico-20210310211603-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0310 21:16:04.140390   16712 out.go:129]   - MINIKUBE_LOCATION=10722
	I0310 21:16:04.144396   16712 driver.go:323] Setting default libvirt URI to qemu:///system
	I0310 21:16:04.747669   16712 docker.go:119] docker version: linux-20.10.2
	I0310 21:16:04.749368   16712 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 21:16:05.798294   16712 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0489272s)
	I0310 21:16:05.799370   16712 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:7 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:96 OomKillDisable:true NGoroutines:93 SystemTime:2021-03-10 21:16:05.3171545 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 21:16:05.803731   16712 out.go:129] * Using the docker driver based on user configuration
	I0310 21:16:05.803986   16712 start.go:276] selected driver: docker
	I0310 21:16:05.803986   16712 start.go:718] validating driver "docker" against <nil>
	I0310 21:16:05.803986   16712 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0310 21:16:06.904878   16712 out.go:129] 
	W0310 21:16:06.905436   16712 out.go:191] X Docker Desktop only has 20001MiB available, you may encounter application deployment failures.
	X Docker Desktop only has 20001MiB available, you may encounter application deployment failures.
	W0310 21:16:06.914424   16712 out.go:191] * Suggestion: 
	
	    1. Open the "Docker Desktop" menu by clicking the Docker icon in the system tray
	    2. Click "Settings"
	    3. Click "Resources"
	    4. Increase "Memory" slider bar to 2.25 GB or higher
	    5. Click "Apply & Restart"
	* Suggestion: 
	
	    1. Open the "Docker Desktop" menu by clicking the Docker icon in the system tray
	    2. Click "Settings"
	    3. Click "Resources"
	    4. Increase "Memory" slider bar to 2.25 GB or higher
	    5. Click "Apply & Restart"
	W0310 21:16:06.915168   16712 out.go:191] * Documentation: https://docs.docker.com/docker-for-windows/#resources
	* Documentation: https://docs.docker.com/docker-for-windows/#resources
	I0310 21:16:06.917963   16712 out.go:129] 
	I0310 21:16:06.931988   16712 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 21:16:08.067048   16712 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.1347721s)
	I0310 21:16:08.067330   16712 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:7 ContainersRunning:7 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:86 OomKillDisable:true NGoroutines:70 SystemTime:2021-03-10 21:16:07.5796165 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 21:16:08.068488   16712 start_flags.go:253] no existing cluster config was found, will generate one from the flags 
	I0310 21:16:08.069159   16712 start_flags.go:717] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0310 21:16:08.069394   16712 cni.go:74] Creating CNI manager for "calico"
	I0310 21:16:08.069394   16712 start_flags.go:393] Found "Calico" CNI - setting NetworkPlugin=cni
	I0310 21:16:08.069631   16712 start_flags.go:398] config:
	{Name:calico-20210310211603-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:calico-20210310211603-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 21:16:08.083543   16712 out.go:129] * Starting control plane node calico-20210310211603-6496 in cluster calico-20210310211603-6496
	I0310 21:16:08.812264   16712 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	I0310 21:16:08.812452   16712 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	I0310 21:16:08.812969   16712 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	I0310 21:16:08.813426   16712 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	I0310 21:16:08.813426   16712 cache.go:54] Caching tarball of preloaded images
	I0310 21:16:08.813733   16712 preload.go:131] Found C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0310 21:16:08.813733   16712 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on docker
	I0310 21:16:08.814441   16712 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\config.json ...
	I0310 21:16:08.821390   16712 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\config.json: {Name:mk1c8688c88a19465c9b0008d3a56d112c3e6ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:16:08.852886   16712 cache.go:185] Successfully downloaded all kic artifacts
	I0310 21:16:08.852886   16712 start.go:313] acquiring machines lock for calico-20210310211603-6496: {Name:mk2346628300a1712deed80d8b7784c1fe0ad049 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:16:08.852886   16712 start.go:317] acquired machines lock for "calico-20210310211603-6496" in 0s
	I0310 21:16:08.853943   16712 start.go:89] Provisioning new machine with config: &{Name:calico-20210310211603-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:calico-20210310211603-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
	I0310 21:16:08.853943   16712 start.go:126] createHost starting for "" (driver="docker")
	I0310 21:16:08.863881   16712 out.go:150] * Creating docker container (CPUs=2, Memory=1800MB) ...
	I0310 21:16:08.864874   16712 start.go:160] libmachine.API.Create for "calico-20210310211603-6496" (driver="docker")
	I0310 21:16:08.865899   16712 client.go:168] LocalClient.Create starting
	I0310 21:16:08.865899   16712 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\ca.pem
	I0310 21:16:08.865899   16712 main.go:121] libmachine: Decoding PEM data...
	I0310 21:16:08.865899   16712 main.go:121] libmachine: Parsing certificate...
	I0310 21:16:08.866892   16712 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\cert.pem
	I0310 21:16:08.866892   16712 main.go:121] libmachine: Decoding PEM data...
	I0310 21:16:08.866892   16712 main.go:121] libmachine: Parsing certificate...
	I0310 21:16:08.909622   16712 cli_runner.go:115] Run: docker network inspect calico-20210310211603-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0310 21:16:09.559483   16712 cli_runner.go:162] docker network inspect calico-20210310211603-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0310 21:16:09.570769   16712 network_create.go:240] running [docker network inspect calico-20210310211603-6496] to gather additional debugging logs...
	I0310 21:16:09.570769   16712 cli_runner.go:115] Run: docker network inspect calico-20210310211603-6496
	W0310 21:16:10.240710   16712 cli_runner.go:162] docker network inspect calico-20210310211603-6496 returned with exit code 1
	I0310 21:16:10.241008   16712 network_create.go:243] error running [docker network inspect calico-20210310211603-6496]: docker network inspect calico-20210310211603-6496: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20210310211603-6496
	I0310 21:16:10.241008   16712 network_create.go:245] output of [docker network inspect calico-20210310211603-6496]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20210310211603-6496
	
	** /stderr **
	I0310 21:16:10.250189   16712 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0310 21:16:10.932483   16712 network.go:193] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0310 21:16:10.932483   16712 network_create.go:91] attempt to create network 192.168.49.0/24 with subnet: calico-20210310211603-6496 and gateway 192.168.49.1 and MTU of 1500 ...
	I0310 21:16:10.939478   16712 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20210310211603-6496
	W0310 21:16:11.539763   16712 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20210310211603-6496 returned with exit code 1
	W0310 21:16:11.540218   16712 out.go:191] ! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts
	I0310 21:16:11.557810   16712 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0310 21:16:12.257623   16712 cli_runner.go:115] Run: docker volume create calico-20210310211603-6496 --label name.minikube.sigs.k8s.io=calico-20210310211603-6496 --label created_by.minikube.sigs.k8s.io=true
	I0310 21:16:12.879927   16712 oci.go:102] Successfully created a docker volume calico-20210310211603-6496
	I0310 21:16:12.911970   16712 cli_runner.go:115] Run: docker run --rm --name calico-20210310211603-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20210310211603-6496 --entrypoint /usr/bin/test -v calico-20210310211603-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib
	I0310 21:16:19.959974   16712 cli_runner.go:168] Completed: docker run --rm --name calico-20210310211603-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20210310211603-6496 --entrypoint /usr/bin/test -v calico-20210310211603-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib: (7.0480138s)
	I0310 21:16:19.959974   16712 oci.go:106] Successfully prepared a docker volume calico-20210310211603-6496
	I0310 21:16:19.960783   16712 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	I0310 21:16:19.961186   16712 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	I0310 21:16:19.961186   16712 kic.go:175] Starting extracting preloaded images to volume ...
	I0310 21:16:19.970137   16712 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 21:16:19.980035   16712 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20210310211603-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir
	W0310 21:16:20.704882   16712 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20210310211603-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I0310 21:16:20.705518   16712 kic.go:182] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20210310211603-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: [binary-serialized System.Exception; unreadable serialization bytes omitted]
	System.Exception: The notification platform is unavailable.
	
	   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)
	   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.WPF\PromptShareDirectory.cs:line 53
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__8.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 95
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 55
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.HttpApi\Controllers\FilesharingController.cs:line 21
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()
	[remainder of serialized exception omitted; recoverable fields: method CreateToastNotifier, type Windows.UI.Notifications.ToastNotificationManager, assembly Windows.UI (ContentType=WindowsRuntime), RestrictedDescription: "The notification platform is unavailable."]
	See 'docker run --help'.
	I0310 21:16:21.042925   16712 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0727901s)
	I0310 21:16:21.043469   16712 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:99 OomKillDisable:true NGoroutines:76 SystemTime:2021-03-10 21:16:20.5374248 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 21:16:21.057402   16712 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0310 21:16:22.176916   16712 cli_runner.go:168] Completed: docker info --format "'{{json .SecurityOptions}}'": (1.1195151s)
	I0310 21:16:22.186589   16712 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20210310211603-6496 --name calico-20210310211603-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20210310211603-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20210310211603-6496 --volume calico-20210310211603-6496:/var --security-opt apparmor=unconfined --memory=1800mb --memory-swap=1800mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
	I0310 21:16:25.809741   16712 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20210310211603-6496 --name calico-20210310211603-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20210310211603-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20210310211603-6496 --volume calico-20210310211603-6496:/var --security-opt apparmor=unconfined --memory=1800mb --memory-swap=1800mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e: (3.622817s)
	I0310 21:16:25.822389   16712 cli_runner.go:115] Run: docker container inspect calico-20210310211603-6496 --format={{.State.Running}}
	I0310 21:16:26.436613   16712 cli_runner.go:115] Run: docker container inspect calico-20210310211603-6496 --format={{.State.Status}}
	I0310 21:16:27.142529   16712 cli_runner.go:115] Run: docker exec calico-20210310211603-6496 stat /var/lib/dpkg/alternatives/iptables
	I0310 21:16:28.343195   16712 cli_runner.go:168] Completed: docker exec calico-20210310211603-6496 stat /var/lib/dpkg/alternatives/iptables: (1.2006668s)
	I0310 21:16:28.343723   16712 oci.go:278] the created container "calico-20210310211603-6496" has a running status.
	I0310 21:16:28.343723   16712 kic.go:206] Creating ssh key for kic: C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa...
	I0310 21:16:28.668015   16712 kic_runner.go:188] docker (temp): C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0310 21:16:30.236426   16712 cli_runner.go:115] Run: docker container inspect calico-20210310211603-6496 --format={{.State.Status}}
	I0310 21:16:30.972494   16712 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0310 21:16:30.972494   16712 kic_runner.go:115] Args: [docker exec --privileged calico-20210310211603-6496 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0310 21:16:32.216437   16712 kic_runner.go:124] Done: [docker exec --privileged calico-20210310211603-6496 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.2439446s)
	I0310 21:16:32.219046   16712 kic.go:240] ensuring only current user has permissions to key file located at : C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa...
	I0310 21:16:33.038846   16712 cli_runner.go:115] Run: docker container inspect calico-20210310211603-6496 --format={{.State.Status}}
	I0310 21:16:33.812041   16712 machine.go:88] provisioning docker machine ...
	I0310 21:16:33.812277   16712 ubuntu.go:169] provisioning hostname "calico-20210310211603-6496"
	I0310 21:16:33.833066   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:16:34.524427   16712 main.go:121] libmachine: Using SSH client type: native
	I0310 21:16:34.536228   16712 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55193 <nil> <nil>}
	I0310 21:16:34.536228   16712 main.go:121] libmachine: About to run SSH command:
	sudo hostname calico-20210310211603-6496 && echo "calico-20210310211603-6496" | sudo tee /etc/hostname
	I0310 21:16:34.556797   16712 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0310 21:16:40.511063   16712 main.go:121] libmachine: SSH cmd err, output: <nil>: calico-20210310211603-6496
	
	I0310 21:16:40.519859   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:16:41.141051   16712 main.go:121] libmachine: Using SSH client type: native
	I0310 21:16:41.141051   16712 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55193 <nil> <nil>}
	I0310 21:16:41.141051   16712 main.go:121] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20210310211603-6496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20210310211603-6496/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20210310211603-6496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0310 21:16:42.127463   16712 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	I0310 21:16:42.127463   16712 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	I0310 21:16:42.127463   16712 ubuntu.go:177] setting up certificates
	I0310 21:16:42.127463   16712 provision.go:83] configureAuth start
	I0310 21:16:42.136030   16712 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20210310211603-6496
	I0310 21:16:42.743751   16712 provision.go:137] copyHostCerts
	I0310 21:16:42.744833   16712 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	I0310 21:16:42.744833   16712 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	I0310 21:16:42.745207   16712 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	I0310 21:16:42.752395   16712 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	I0310 21:16:42.752395   16712 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	I0310 21:16:42.752699   16712 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	I0310 21:16:42.756073   16712 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	I0310 21:16:42.756073   16712 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	I0310 21:16:42.756927   16712 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	I0310 21:16:42.759673   16712 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.calico-20210310211603-6496 san=[172.17.0.6 127.0.0.1 localhost 127.0.0.1 minikube calico-20210310211603-6496]
	I0310 21:16:42.916138   16712 provision.go:165] copyRemoteCerts
	I0310 21:16:42.926125   16712 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0310 21:16:42.932919   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:16:43.596240   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:16:44.332193   16712 ssh_runner.go:189] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.4060694s)
	I0310 21:16:44.332848   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0310 21:16:44.832885   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1253 bytes)
	I0310 21:16:45.133396   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0310 21:16:45.671985   16712 provision.go:86] duration metric: configureAuth took 3.5445277s
	I0310 21:16:45.671985   16712 ubuntu.go:193] setting minikube options for container-runtime
	I0310 21:16:45.688401   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:16:46.317701   16712 main.go:121] libmachine: Using SSH client type: native
	I0310 21:16:46.318821   16712 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55193 <nil> <nil>}
	I0310 21:16:46.318821   16712 main.go:121] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0310 21:16:46.982541   16712 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0310 21:16:46.982541   16712 ubuntu.go:71] root file system type: overlay
	I0310 21:16:46.982541   16712 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	I0310 21:16:46.993797   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:16:47.671142   16712 main.go:121] libmachine: Using SSH client type: native
	I0310 21:16:47.671142   16712 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55193 <nil> <nil>}
	I0310 21:16:47.671142   16712 main.go:121] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0310 21:16:48.530303   16712 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0310 21:16:48.550988   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:16:49.187765   16712 main.go:121] libmachine: Using SSH client type: native
	I0310 21:16:49.188999   16712 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55193 <nil> <nil>}
	I0310 21:16:49.188999   16712 main.go:121] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0310 21:17:01.667854   16712 main.go:121] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-01-29 14:31:32.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-03-10 21:16:48.520209000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
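Editor's note: the `diff -u old new || { mv new old; … restart; }` command above is minikube's idempotent unit-update pattern, shown with its full diff output: the unit file is only replaced, and the daemon only reloaded and restarted, when the freshly rendered unit actually differs from what is on disk. A minimal sketch of the same pattern on throwaway files (the `/tmp/demo.service*` paths are illustrative, and the `systemctl` steps are stubbed out with an `echo`):

```shell
# Idempotent replace-on-change, mirroring the log's
# "diff -u current new || { mv new current; restart }" sequence.
cur=/tmp/demo.service
new=/tmp/demo.service.new
printf '[Unit]\nDescription=old\n'  > "$cur"
printf '[Unit]\nDescription=demo\n' > "$new"
# diff exits non-zero when the files differ, so the || branch
# (replace + restart) runs only on a real change.
diff -u "$cur" "$new" >/dev/null || { mv "$new" "$cur"; echo "unit updated"; }
```

Running it a second time leaves the file alone, since `diff` then exits 0 and the replace branch is skipped.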
	I0310 21:17:01.668091   16712 machine.go:91] provisioned docker machine in 27.8560883s
	I0310 21:17:01.668091   16712 client.go:171] LocalClient.Create took 52.8022654s
	I0310 21:17:01.668091   16712 start.go:168] duration metric: libmachine.API.Create for "calico-20210310211603-6496" took 52.8032909s
	I0310 21:17:01.668091   16712 start.go:267] post-start starting for "calico-20210310211603-6496" (driver="docker")
	I0310 21:17:01.668091   16712 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0310 21:17:01.679776   16712 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0310 21:17:01.690857   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:17:02.232622   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:17:02.753758   16712 ssh_runner.go:189] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.0739835s)
	I0310 21:17:02.754532   16712 ssh_runner.go:149] Run: cat /etc/os-release
	I0310 21:17:02.788501   16712 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0310 21:17:02.788501   16712 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0310 21:17:02.788501   16712 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0310 21:17:02.788501   16712 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0310 21:17:02.788851   16712 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	I0310 21:17:02.789197   16712 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	I0310 21:17:02.791914   16712 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	I0310 21:17:02.792401   16712 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	I0310 21:17:02.803372   16712 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	I0310 21:17:02.873691   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	I0310 21:17:03.415526   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	I0310 21:17:03.973904   16712 start.go:270] post-start completed in 2.305817s
	I0310 21:17:04.015466   16712 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20210310211603-6496
	I0310 21:17:04.676467   16712 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\config.json ...
	I0310 21:17:04.725987   16712 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0310 21:17:04.739995   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:17:05.329901   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:17:06.080331   16712 ssh_runner.go:189] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.3543456s)
	I0310 21:17:06.080331   16712 start.go:129] duration metric: createHost completed in 57.2264679s
	I0310 21:17:06.080331   16712 start.go:80] releasing machines lock for "calico-20210310211603-6496", held for 57.2264679s
	I0310 21:17:06.091269   16712 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20210310211603-6496
	I0310 21:17:06.727530   16712 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0310 21:17:06.734852   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:17:06.736848   16712 ssh_runner.go:149] Run: systemctl --version
	I0310 21:17:06.745255   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:17:07.393700   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:17:07.426097   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:17:08.540119   16712 ssh_runner.go:189] Completed: systemctl --version: (1.8032739s)
	I0310 21:17:08.553867   16712 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0310 21:17:08.993572   16712 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (2.265904s)
	I0310 21:17:08.995950   16712 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 21:17:09.219729   16712 cruntime.go:206] skipping containerd shutdown because we are bound to it
	I0310 21:17:09.233360   16712 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0310 21:17:09.412079   16712 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
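Editor's note: the `printf … | sudo tee /etc/crictl.yaml` command above writes the two-line config that points `crictl` at the dockershim socket Docker-based minikube uses. A hedged sketch of the same file generation against a scratch path (`/tmp/crictl_demo.yaml` is illustrative, chosen so no root is needed):

```shell
# Render the same two-line crictl config the log writes to /etc/crictl.yaml.
cfg=/tmp/crictl_demo.yaml
printf '%s\n' \
  'runtime-endpoint: unix:///var/run/dockershim.sock' \
  'image-endpoint: unix:///var/run/dockershim.sock' > "$cfg"
cat "$cfg"
```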
	I0310 21:17:10.256600   16712 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 21:17:10.489479   16712 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0310 21:17:12.525124   16712 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (2.0356478s)
	I0310 21:17:12.539525   16712 ssh_runner.go:149] Run: sudo systemctl start docker
	I0310 21:17:12.738987   16712 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0310 21:17:13.875573   16712 ssh_runner.go:189] Completed: docker version --format {{.Server.Version}}: (1.136587s)
	I0310 21:17:13.880869   16712 out.go:150] * Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	I0310 21:17:13.890523   16712 cli_runner.go:115] Run: docker exec -t calico-20210310211603-6496 dig +short host.docker.internal
	I0310 21:17:15.188631   16712 cli_runner.go:168] Completed: docker exec -t calico-20210310211603-6496 dig +short host.docker.internal: (1.2979058s)
	I0310 21:17:15.188631   16712 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	I0310 21:17:15.206029   16712 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0310 21:17:15.257967   16712 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0310 21:17:15.419609   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:17:16.036438   16712 localpath.go:92] copying C:\Users\jenkins\.minikube\client.crt -> C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\client.crt
	I0310 21:17:16.055119   16712 localpath.go:117] copying C:\Users\jenkins\.minikube\client.key -> C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\client.key
	I0310 21:17:16.058703   16712 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	I0310 21:17:16.059117   16712 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	I0310 21:17:16.070563   16712 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 21:17:16.606686   16712 docker.go:423] Got preloaded images: 
	I0310 21:17:16.606930   16712 docker.go:429] k8s.gcr.io/kube-proxy:v1.20.2 wasn't preloaded
	I0310 21:17:16.619991   16712 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0310 21:17:16.729235   16712 ssh_runner.go:149] Run: which lz4
	I0310 21:17:16.822044   16712 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0310 21:17:16.915979   16712 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0310 21:17:16.915979   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (515083977 bytes)
	I0310 21:19:00.699227   16712 docker.go:388] Took 103.893579 seconds to copy over tarball
	I0310 21:19:00.711747   16712 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0310 21:19:42.043920   16712 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (41.3322272s)
	I0310 21:19:42.043920   16712 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0310 21:19:43.937424   16712 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0310 21:19:43.993603   16712 ssh_runner.go:316] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3125 bytes)
	I0310 21:19:44.143940   16712 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0310 21:19:45.194715   16712 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.0507764s)
	I0310 21:19:45.207579   16712 ssh_runner.go:149] Run: sudo systemctl restart docker
	I0310 21:19:51.117553   16712 ssh_runner.go:189] Completed: sudo systemctl restart docker: (5.9092396s)
	I0310 21:19:51.128643   16712 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 21:19:52.330660   16712 ssh_runner.go:189] Completed: docker images --format {{.Repository}}:{{.Tag}}: (1.2018301s)
	I0310 21:19:52.330660   16712 docker.go:423] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.20.2
	k8s.gcr.io/kube-controller-manager:v1.20.2
	k8s.gcr.io/kube-apiserver:v1.20.2
	k8s.gcr.io/kube-scheduler:v1.20.2
	kubernetesui/dashboard:v2.1.0
	gcr.io/k8s-minikube/storage-provisioner:v4
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0310 21:19:52.330660   16712 cache_images.go:73] Images are preloaded, skipping loading
	I0310 21:19:52.337877   16712 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0310 21:19:54.368837   16712 ssh_runner.go:189] Completed: docker info --format {{.CgroupDriver}}: (2.0309624s)
	I0310 21:19:54.369562   16712 cni.go:74] Creating CNI manager for "calico"
	I0310 21:19:54.369855   16712 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0310 21:19:54.369855   16712 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.6 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20210310211603-6496 NodeName:calico-20210310211603-6496 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}

	I0310 21:19:54.369855   16712 kubeadm.go:154] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.0.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "calico-20210310211603-6496"
	  kubeletExtraArgs:
	    node-ip: 172.17.0.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.0.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	
	I0310 21:19:54.370719   16712 kubeadm.go:919] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=calico-20210310211603-6496 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=172.17.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.2 ClusterName:calico-20210310211603-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I0310 21:19:54.379253   16712 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
	I0310 21:19:54.436531   16712 binaries.go:44] Found k8s binaries, skipping transfer
	I0310 21:19:54.445942   16712 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0310 21:19:54.518935   16712 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I0310 21:19:54.706292   16712 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0310 21:19:54.958047   16712 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1852 bytes)
	I0310 21:19:55.289581   16712 ssh_runner.go:149] Run: grep 172.17.0.6	control-plane.minikube.internal$ /etc/hosts
	I0310 21:19:55.327265   16712 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "172.17.0.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
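Editor's note: the command above is the second use of minikube's hosts-entry refresh pattern (the first mapped `host.minikube.internal` at 21:17:15): filter out any stale line for the name, append the current mapping, and copy the result back over `/etc/hosts`. A sketch of the same pattern with `/tmp/demo_hosts` standing in for `/etc/hosts` so no sudo is needed; the tab in the original `grep -v '\t…'` anchor is simplified to a plain match here:

```shell
# Drop any stale control-plane entry, append the fresh one,
# then copy the rebuilt file back into place (the log uses
# "sudo cp" for the last step to keep /etc/hosts's owner and mode).
hosts=/tmp/demo_hosts
printf '127.0.0.1 localhost\n172.17.0.5 control-plane.minikube.internal\n' > "$hosts"
{ grep -v 'control-plane.minikube.internal$' "$hosts"
  printf '172.17.0.6 control-plane.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
cat "$hosts"
```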
	I0310 21:19:55.495711   16712 certs.go:52] Setting up C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496 for IP: 172.17.0.6
	I0310 21:19:55.496214   16712 certs.go:171] skipping minikubeCA CA generation: C:\Users\jenkins\.minikube\ca.key
	I0310 21:19:55.496509   16712 certs.go:171] skipping proxyClientCA CA generation: C:\Users\jenkins\.minikube\proxy-client-ca.key
	I0310 21:19:55.497355   16712 certs.go:275] skipping minikube-user signed cert generation: C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\client.key
	I0310 21:19:55.497508   16712 certs.go:279] generating minikube signed cert: C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\apiserver.key.76cb2290
	I0310 21:19:55.497694   16712 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\apiserver.crt.76cb2290 with IP's: [172.17.0.6 10.96.0.1 127.0.0.1 10.0.0.1]
	I0310 21:19:55.792262   16712 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\apiserver.crt.76cb2290 ...
	I0310 21:19:55.792262   16712 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\apiserver.crt.76cb2290: {Name:mke5a5e76e2d0405f71701af873c36a9bc85f9d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:19:55.820149   16712 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\apiserver.key.76cb2290 ...
	I0310 21:19:55.820604   16712 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\apiserver.key.76cb2290: {Name:mkd60630b72c66392c81449169044de0e81f328e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:19:55.837189   16712 certs.go:290] copying C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\apiserver.crt.76cb2290 -> C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\apiserver.crt
	I0310 21:19:55.852663   16712 certs.go:294] copying C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\apiserver.key.76cb2290 -> C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\apiserver.key
	I0310 21:19:55.872948   16712 certs.go:279] generating aggregator signed cert: C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\proxy-client.key
	I0310 21:19:55.873173   16712 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\proxy-client.crt with IP's: []
	I0310 21:19:56.036709   16712 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\proxy-client.crt ...
	I0310 21:19:56.036709   16712 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\proxy-client.crt: {Name:mk47db70217d54bbafe7f833523f7957f5855eb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:19:56.051627   16712 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\proxy-client.key ...
	I0310 21:19:56.051627   16712 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\proxy-client.key: {Name:mk6d09b755ec249d4adc9ea1ab0c1b7b92be716b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:19:56.066600   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992.pem (1338 bytes)
	W0310 21:19:56.066600   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.066600   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140.pem (1338 bytes)
	W0310 21:19:56.066600   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.066600   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156.pem (1338 bytes)
	W0310 21:19:56.066600   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.066600   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056.pem (1338 bytes)
	W0310 21:19:56.072018   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.072018   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476.pem (1338 bytes)
	W0310 21:19:56.072018   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.072743   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728.pem (1338 bytes)
	W0310 21:19:56.072743   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.072743   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984.pem (1338 bytes)
	W0310 21:19:56.073655   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.073655   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232.pem (1338 bytes)
	W0310 21:19:56.073655   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.073655   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512.pem (1338 bytes)
	W0310 21:19:56.073655   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.074590   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056.pem (1338 bytes)
	W0310 21:19:56.074998   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.074998   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516.pem (1338 bytes)
	W0310 21:19:56.074998   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.075596   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352.pem (1338 bytes)
	W0310 21:19:56.075596   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.075596   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920.pem (1338 bytes)
	W0310 21:19:56.075596   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.075596   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052.pem (1338 bytes)
	W0310 21:19:56.076597   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.076597   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452.pem (1338 bytes)
	W0310 21:19:56.076597   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.077326   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588.pem (1338 bytes)
	W0310 21:19:56.077609   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.077609   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944.pem (1338 bytes)
	W0310 21:19:56.077609   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.077609   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040.pem (1338 bytes)
	W0310 21:19:56.078598   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.078598   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172.pem (1338 bytes)
	W0310 21:19:56.078598   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.078598   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372.pem (1338 bytes)
	W0310 21:19:56.078598   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.078598   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396.pem (1338 bytes)
	W0310 21:19:56.079715   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.079715   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700.pem (1338 bytes)
	W0310 21:19:56.079715   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.079715   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736.pem (1338 bytes)
	W0310 21:19:56.080598   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.080598   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368.pem (1338 bytes)
	W0310 21:19:56.080598   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.080598   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492.pem (1338 bytes)
	W0310 21:19:56.080598   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.080598   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496.pem (1338 bytes)
	W0310 21:19:56.081886   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.081886   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552.pem (1338 bytes)
	W0310 21:19:56.082598   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.082598   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692.pem (1338 bytes)
	W0310 21:19:56.082598   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.082598   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856.pem (1338 bytes)
	W0310 21:19:56.082598   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.083592   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024.pem (1338 bytes)
	W0310 21:19:56.083592   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.083592   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160.pem (1338 bytes)
	W0310 21:19:56.083592   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.083592   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432.pem (1338 bytes)
	W0310 21:19:56.084614   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.084614   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440.pem (1338 bytes)
	W0310 21:19:56.084614   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.084614   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452.pem (1338 bytes)
	W0310 21:19:56.085591   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.085591   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800.pem (1338 bytes)
	W0310 21:19:56.085591   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.085591   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464.pem (1338 bytes)
	W0310 21:19:56.085591   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.085591   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748.pem (1338 bytes)
	W0310 21:19:56.086826   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.086826   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088.pem (1338 bytes)
	W0310 21:19:56.086826   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.086826   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520.pem (1338 bytes)
	W0310 21:19:56.087599   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520_empty.pem, impossibly tiny 0 bytes
	I0310 21:19:56.087599   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca-key.pem (1679 bytes)
	I0310 21:19:56.087599   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca.pem (1078 bytes)
	I0310 21:19:56.087599   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\cert.pem (1123 bytes)
	I0310 21:19:56.088591   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\key.pem (1679 bytes)
	I0310 21:19:56.095601   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0310 21:19:56.279938   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0310 21:19:56.509637   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0310 21:19:56.683385   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0310 21:19:56.904608   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0310 21:19:57.152633   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0310 21:19:57.381990   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0310 21:19:57.881175   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0310 21:19:58.136733   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3056.pem --> /usr/share/ca-certificates/3056.pem (1338 bytes)
	I0310 21:19:58.386545   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1140.pem --> /usr/share/ca-certificates/1140.pem (1338 bytes)
	I0310 21:19:58.655152   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7024.pem --> /usr/share/ca-certificates/7024.pem (1338 bytes)
	I0310 21:19:58.812971   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4588.pem --> /usr/share/ca-certificates/4588.pem (1338 bytes)
	I0310 21:19:59.246404   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1728.pem --> /usr/share/ca-certificates/1728.pem (1338 bytes)
	I0310 21:19:59.579082   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4944.pem --> /usr/share/ca-certificates/4944.pem (1338 bytes)
	I0310 21:19:59.811144   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\800.pem --> /usr/share/ca-certificates/800.pem (1338 bytes)
	I0310 21:20:00.034942   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\2512.pem --> /usr/share/ca-certificates/2512.pem (1338 bytes)
	I0310 21:20:00.397080   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5396.pem --> /usr/share/ca-certificates/5396.pem (1338 bytes)
	I0310 21:20:00.669933   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4452.pem --> /usr/share/ca-certificates/4452.pem (1338 bytes)
	I0310 21:20:00.895851   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1476.pem --> /usr/share/ca-certificates/1476.pem (1338 bytes)
	I0310 21:20:01.166832   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9520.pem --> /usr/share/ca-certificates/9520.pem (1338 bytes)
	I0310 21:20:01.397656   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\12056.pem --> /usr/share/ca-certificates/12056.pem (1338 bytes)
	I0310 21:20:01.674341   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\232.pem --> /usr/share/ca-certificates/232.pem (1338 bytes)
	I0310 21:20:01.882764   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\10992.pem --> /usr/share/ca-certificates/10992.pem (1338 bytes)
	I0310 21:20:02.096289   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3920.pem --> /usr/share/ca-certificates/3920.pem (1338 bytes)
	I0310 21:20:02.302118   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6552.pem --> /usr/share/ca-certificates/6552.pem (1338 bytes)
	I0310 21:20:02.549354   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5172.pem --> /usr/share/ca-certificates/5172.pem (1338 bytes)
	I0310 21:20:02.810313   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	I0310 21:20:03.104695   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6496.pem --> /usr/share/ca-certificates/6496.pem (1338 bytes)
	I0310 21:20:03.356282   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\352.pem --> /usr/share/ca-certificates/352.pem (1338 bytes)
	I0310 21:20:03.527855   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7432.pem --> /usr/share/ca-certificates/7432.pem (1338 bytes)
	I0310 21:20:03.828789   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5372.pem --> /usr/share/ca-certificates/5372.pem (1338 bytes)
	I0310 21:20:04.016658   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7440.pem --> /usr/share/ca-certificates/7440.pem (1338 bytes)
	I0310 21:20:04.320386   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5700.pem --> /usr/share/ca-certificates/5700.pem (1338 bytes)
	I0310 21:20:04.565188   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5040.pem --> /usr/share/ca-certificates/5040.pem (1338 bytes)
	I0310 21:20:04.906484   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6368.pem --> /usr/share/ca-certificates/6368.pem (1338 bytes)
	I0310 21:20:05.179192   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7160.pem --> /usr/share/ca-certificates/7160.pem (1338 bytes)
	I0310 21:20:05.491335   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1156.pem --> /usr/share/ca-certificates/1156.pem (1338 bytes)
	I0310 21:20:05.818818   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4052.pem --> /usr/share/ca-certificates/4052.pem (1338 bytes)
	I0310 21:20:06.133703   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9088.pem --> /usr/share/ca-certificates/9088.pem (1338 bytes)
	I0310 21:20:06.385266   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6692.pem --> /usr/share/ca-certificates/6692.pem (1338 bytes)
	I0310 21:20:06.832303   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6856.pem --> /usr/share/ca-certificates/6856.pem (1338 bytes)
	I0310 21:20:07.146678   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8464.pem --> /usr/share/ca-certificates/8464.pem (1338 bytes)
	I0310 21:20:07.574957   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6492.pem --> /usr/share/ca-certificates/6492.pem (1338 bytes)
	I0310 21:20:07.877048   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0310 21:20:08.123859   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8748.pem --> /usr/share/ca-certificates/8748.pem (1338 bytes)
	I0310 21:20:08.493171   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3516.pem --> /usr/share/ca-certificates/3516.pem (1338 bytes)
	I0310 21:20:08.889534   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5736.pem --> /usr/share/ca-certificates/5736.pem (1338 bytes)
	I0310 21:20:10.323778   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1984.pem --> /usr/share/ca-certificates/1984.pem (1338 bytes)
	I0310 21:20:10.708671   16712 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0310 21:20:10.996850   16712 ssh_runner.go:149] Run: openssl version
	I0310 21:20:11.169797   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7160.pem && ln -fs /usr/share/ca-certificates/7160.pem /etc/ssl/certs/7160.pem"
	I0310 21:20:11.340484   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7160.pem
	I0310 21:20:11.393718   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 12 04:51 /usr/share/ca-certificates/7160.pem
	I0310 21:20:11.407849   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7160.pem
	I0310 21:20:11.504423   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7160.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:11.653825   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1156.pem && ln -fs /usr/share/ca-certificates/1156.pem /etc/ssl/certs/1156.pem"
	I0310 21:20:11.757588   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1156.pem
	I0310 21:20:11.828973   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 00:26 /usr/share/ca-certificates/1156.pem
	I0310 21:20:11.839254   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1156.pem
	I0310 21:20:11.914128   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1156.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:12.044010   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4052.pem && ln -fs /usr/share/ca-certificates/4052.pem /etc/ssl/certs/4052.pem"
	I0310 21:20:12.203988   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4052.pem
	I0310 21:20:12.250076   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 18:40 /usr/share/ca-certificates/4052.pem
	I0310 21:20:12.269621   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4052.pem
	I0310 21:20:12.361200   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4052.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:12.462367   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9088.pem && ln -fs /usr/share/ca-certificates/9088.pem /etc/ssl/certs/9088.pem"
	I0310 21:20:12.729501   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9088.pem
	I0310 21:20:12.765962   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 00:22 /usr/share/ca-certificates/9088.pem
	I0310 21:20:12.779528   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9088.pem
	I0310 21:20:12.857026   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9088.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:12.965669   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6368.pem && ln -fs /usr/share/ca-certificates/6368.pem /etc/ssl/certs/6368.pem"
	I0310 21:20:13.062179   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6368.pem
	I0310 21:20:13.084583   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 19:11 /usr/share/ca-certificates/6368.pem
	I0310 21:20:13.101480   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6368.pem
	I0310 21:20:13.161901   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6368.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:13.249568   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6492.pem && ln -fs /usr/share/ca-certificates/6492.pem /etc/ssl/certs/6492.pem"
	I0310 21:20:13.328300   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6492.pem
	I0310 21:20:13.362343   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 01:11 /usr/share/ca-certificates/6492.pem
	I0310 21:20:13.382372   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6492.pem
	I0310 21:20:13.454731   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6492.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:13.527550   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0310 21:20:13.612026   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0310 21:20:13.649984   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Jan  5 23:33 /usr/share/ca-certificates/minikubeCA.pem
	I0310 21:20:13.663938   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0310 21:20:13.759810   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0310 21:20:13.946237   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8748.pem && ln -fs /usr/share/ca-certificates/8748.pem /etc/ssl/certs/8748.pem"
	I0310 21:20:14.181777   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8748.pem
	I0310 21:20:14.238213   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 19:09 /usr/share/ca-certificates/8748.pem
	I0310 21:20:14.247244   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8748.pem
	I0310 21:20:14.339002   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8748.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:14.420319   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3516.pem && ln -fs /usr/share/ca-certificates/3516.pem /etc/ssl/certs/3516.pem"
	I0310 21:20:14.522899   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3516.pem
	I0310 21:20:14.594790   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 19:10 /usr/share/ca-certificates/3516.pem
	I0310 21:20:14.623690   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3516.pem
	I0310 21:20:14.687262   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3516.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:14.790209   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6692.pem && ln -fs /usr/share/ca-certificates/6692.pem /etc/ssl/certs/6692.pem"
	I0310 21:20:14.887049   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6692.pem
	I0310 21:20:14.928800   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 20:42 /usr/share/ca-certificates/6692.pem
	I0310 21:20:14.940100   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6692.pem
	I0310 21:20:15.011996   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6692.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:15.112918   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6856.pem && ln -fs /usr/share/ca-certificates/6856.pem /etc/ssl/certs/6856.pem"
	I0310 21:20:15.198153   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6856.pem
	I0310 21:20:15.237897   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 00:21 /usr/share/ca-certificates/6856.pem
	I0310 21:20:15.254719   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6856.pem
	I0310 21:20:15.309716   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6856.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:15.386622   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8464.pem && ln -fs /usr/share/ca-certificates/8464.pem /etc/ssl/certs/8464.pem"
	I0310 21:20:15.449243   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8464.pem
	I0310 21:20:15.473701   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:32 /usr/share/ca-certificates/8464.pem
	I0310 21:20:15.486654   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8464.pem
	I0310 21:20:15.527681   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8464.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:15.612270   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1984.pem && ln -fs /usr/share/ca-certificates/1984.pem /etc/ssl/certs/1984.pem"
	I0310 21:20:15.713769   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1984.pem
	I0310 21:20:15.736861   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 21:55 /usr/share/ca-certificates/1984.pem
	I0310 21:20:15.745989   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1984.pem
	I0310 21:20:15.804428   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1984.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:15.912426   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5736.pem && ln -fs /usr/share/ca-certificates/5736.pem /etc/ssl/certs/5736.pem"
	I0310 21:20:15.985597   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5736.pem
	I0310 21:20:16.014245   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 25 23:18 /usr/share/ca-certificates/5736.pem
	I0310 21:20:16.024030   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5736.pem
	I0310 21:20:16.072079   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5736.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:16.158365   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7024.pem && ln -fs /usr/share/ca-certificates/7024.pem /etc/ssl/certs/7024.pem"
	I0310 21:20:16.266969   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7024.pem
	I0310 21:20:16.349464   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 23:11 /usr/share/ca-certificates/7024.pem
	I0310 21:20:16.370871   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7024.pem
	I0310 21:20:16.468752   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7024.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:16.585954   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4588.pem && ln -fs /usr/share/ca-certificates/4588.pem /etc/ssl/certs/4588.pem"
	I0310 21:20:16.727172   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4588.pem
	I0310 21:20:16.768426   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  3 21:41 /usr/share/ca-certificates/4588.pem
	I0310 21:20:16.786577   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4588.pem
	I0310 21:20:16.846579   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4588.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:16.926701   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1728.pem && ln -fs /usr/share/ca-certificates/1728.pem /etc/ssl/certs/1728.pem"
	I0310 21:20:17.089764   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1728.pem
	I0310 21:20:17.151070   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 20:57 /usr/share/ca-certificates/1728.pem
	I0310 21:20:17.162549   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1728.pem
	I0310 21:20:17.239208   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:17.337772   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4944.pem && ln -fs /usr/share/ca-certificates/4944.pem /etc/ssl/certs/4944.pem"
	I0310 21:20:17.452625   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4944.pem
	I0310 21:20:17.492902   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  9 23:40 /usr/share/ca-certificates/4944.pem
	I0310 21:20:17.509922   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4944.pem
	I0310 21:20:17.566459   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4944.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:17.765694   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3056.pem && ln -fs /usr/share/ca-certificates/3056.pem /etc/ssl/certs/3056.pem"
	I0310 21:20:17.963489   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3056.pem
	I0310 21:20:18.056678   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:11 /usr/share/ca-certificates/3056.pem
	I0310 21:20:18.066179   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3056.pem
	I0310 21:20:18.217973   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3056.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:18.494368   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1140.pem && ln -fs /usr/share/ca-certificates/1140.pem /etc/ssl/certs/1140.pem"
	I0310 21:20:18.630104   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1140.pem
	I0310 21:20:18.725942   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 02:25 /usr/share/ca-certificates/1140.pem
	I0310 21:20:18.770755   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1140.pem
	I0310 21:20:18.883900   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1140.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:18.981936   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1476.pem && ln -fs /usr/share/ca-certificates/1476.pem /etc/ssl/certs/1476.pem"
	I0310 21:20:19.225212   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1476.pem
	I0310 21:20:19.297172   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:50 /usr/share/ca-certificates/1476.pem
	I0310 21:20:19.306199   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1476.pem
	I0310 21:20:19.409480   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1476.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:19.554038   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9520.pem && ln -fs /usr/share/ca-certificates/9520.pem /etc/ssl/certs/9520.pem"
	I0310 21:20:19.730533   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9520.pem
	I0310 21:20:19.808841   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 14:54 /usr/share/ca-certificates/9520.pem
	I0310 21:20:19.821857   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9520.pem
	I0310 21:20:20.042731   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9520.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:20.188342   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12056.pem && ln -fs /usr/share/ca-certificates/12056.pem /etc/ssl/certs/12056.pem"
	I0310 21:20:20.356103   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/12056.pem
	I0310 21:20:20.406331   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  6 07:21 /usr/share/ca-certificates/12056.pem
	I0310 21:20:20.421437   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12056.pem
	I0310 21:20:20.491246   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12056.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:20.601422   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/232.pem && ln -fs /usr/share/ca-certificates/232.pem /etc/ssl/certs/232.pem"
	I0310 21:20:20.778482   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/232.pem
	I0310 21:20:20.866721   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 28 02:13 /usr/share/ca-certificates/232.pem
	I0310 21:20:20.892630   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/232.pem
	I0310 21:20:21.075025   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/232.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:21.297596   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/800.pem && ln -fs /usr/share/ca-certificates/800.pem /etc/ssl/certs/800.pem"
	I0310 21:20:21.714407   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/800.pem
	I0310 21:20:21.902985   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 24 01:48 /usr/share/ca-certificates/800.pem
	I0310 21:20:21.914911   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/800.pem
	I0310 21:20:22.066554   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/800.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:22.215966   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2512.pem && ln -fs /usr/share/ca-certificates/2512.pem /etc/ssl/certs/2512.pem"
	I0310 21:20:22.370362   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2512.pem
	I0310 21:20:22.435246   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:32 /usr/share/ca-certificates/2512.pem
	I0310 21:20:22.448269   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2512.pem
	I0310 21:20:22.556736   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2512.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:22.725032   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5396.pem && ln -fs /usr/share/ca-certificates/5396.pem /etc/ssl/certs/5396.pem"
	I0310 21:20:22.985197   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5396.pem
	I0310 21:20:23.046525   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  8 23:38 /usr/share/ca-certificates/5396.pem
	I0310 21:20:23.057645   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5396.pem
	I0310 21:20:23.209811   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5396.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:23.458646   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4452.pem && ln -fs /usr/share/ca-certificates/4452.pem /etc/ssl/certs/4452.pem"
	I0310 21:20:23.601791   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4452.pem
	I0310 21:20:23.770557   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 22 23:48 /usr/share/ca-certificates/4452.pem
	I0310 21:20:23.800853   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4452.pem
	I0310 21:20:23.957510   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4452.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:24.190903   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3920.pem && ln -fs /usr/share/ca-certificates/3920.pem /etc/ssl/certs/3920.pem"
	I0310 21:20:24.420922   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3920.pem
	I0310 21:20:24.484795   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 22:06 /usr/share/ca-certificates/3920.pem
	I0310 21:20:24.495748   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3920.pem
	I0310 21:20:24.594397   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3920.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:24.889417   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6552.pem && ln -fs /usr/share/ca-certificates/6552.pem /etc/ssl/certs/6552.pem"
	I0310 21:20:25.062090   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6552.pem
	I0310 21:20:25.158783   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 19 22:08 /usr/share/ca-certificates/6552.pem
	I0310 21:20:25.174722   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6552.pem
	I0310 21:20:25.303963   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6552.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:25.455607   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10992.pem && ln -fs /usr/share/ca-certificates/10992.pem /etc/ssl/certs/10992.pem"
	I0310 21:20:25.596894   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/10992.pem
	I0310 21:20:25.723103   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 21:44 /usr/share/ca-certificates/10992.pem
	I0310 21:20:25.733645   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10992.pem
	I0310 21:20:25.890676   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10992.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:26.172853   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7452.pem && ln -fs /usr/share/ca-certificates/7452.pem /etc/ssl/certs/7452.pem"
	I0310 21:20:26.407957   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7452.pem
	I0310 21:20:26.549726   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 20 00:41 /usr/share/ca-certificates/7452.pem
	I0310 21:20:26.566636   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7452.pem
	I0310 21:20:26.672620   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7452.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:26.823055   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6496.pem && ln -fs /usr/share/ca-certificates/6496.pem /etc/ssl/certs/6496.pem"
	I0310 21:20:27.282105   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6496.pem
	I0310 21:20:27.411501   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 19:16 /usr/share/ca-certificates/6496.pem
	I0310 21:20:27.421784   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6496.pem
	I0310 21:20:27.538993   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6496.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:27.763574   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/352.pem && ln -fs /usr/share/ca-certificates/352.pem /etc/ssl/certs/352.pem"
	I0310 21:20:27.951354   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/352.pem
	I0310 21:20:27.983962   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 12 14:51 /usr/share/ca-certificates/352.pem
	I0310 21:20:27.993564   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/352.pem
	I0310 21:20:28.084983   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/352.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:28.208897   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5172.pem && ln -fs /usr/share/ca-certificates/5172.pem /etc/ssl/certs/5172.pem"
	I0310 21:20:28.382501   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5172.pem
	I0310 21:20:28.448214   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 26 21:25 /usr/share/ca-certificates/5172.pem
	I0310 21:20:28.458244   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5172.pem
	I0310 21:20:28.649017   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5172.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:28.889687   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5372.pem && ln -fs /usr/share/ca-certificates/5372.pem /etc/ssl/certs/5372.pem"
	I0310 21:20:29.173901   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5372.pem
	I0310 21:20:29.228275   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 23 00:40 /usr/share/ca-certificates/5372.pem
	I0310 21:20:29.240725   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5372.pem
	I0310 21:20:29.316757   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5372.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:29.410786   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7440.pem && ln -fs /usr/share/ca-certificates/7440.pem /etc/ssl/certs/7440.pem"
	I0310 21:20:29.686761   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7440.pem
	I0310 21:20:29.738289   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 13 14:39 /usr/share/ca-certificates/7440.pem
	I0310 21:20:29.755809   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7440.pem
	I0310 21:20:29.844362   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7440.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:30.011586   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5700.pem && ln -fs /usr/share/ca-certificates/5700.pem /etc/ssl/certs/5700.pem"
	I0310 21:20:30.187406   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5700.pem
	I0310 21:20:30.227919   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  1 19:58 /usr/share/ca-certificates/5700.pem
	I0310 21:20:30.237760   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5700.pem
	I0310 21:20:30.432978   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5700.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:30.570750   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5040.pem && ln -fs /usr/share/ca-certificates/5040.pem /etc/ssl/certs/5040.pem"
	I0310 21:20:30.831275   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5040.pem
	I0310 21:20:30.883286   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 08:36 /usr/share/ca-certificates/5040.pem
	I0310 21:20:30.907566   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5040.pem
	I0310 21:20:31.014528   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5040.pem /etc/ssl/certs/51391683.0"
	I0310 21:20:31.198152   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7432.pem && ln -fs /usr/share/ca-certificates/7432.pem /etc/ssl/certs/7432.pem"
	I0310 21:20:31.468409   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7432.pem
	I0310 21:20:31.523629   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 17:58 /usr/share/ca-certificates/7432.pem
	I0310 21:20:31.534408   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7432.pem
	I0310 21:20:31.772365   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7432.pem /etc/ssl/certs/51391683.0"
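The long run of guarded `ln -fs` commands above follows a single pattern: each extra CA certificate is hashed with `openssl x509 -hash` and then symlinked into `/etc/ssl/certs` under that hash (here always `51391683.0`, since these test certs share a subject) so the system trust store can resolve it. A minimal sketch of that pattern, assuming an `install_ca_cert` helper and an overridable target directory that are illustrative, not minikube's actual code:

```shell
#!/bin/sh
# Illustrative sketch of the cert-install pattern repeated in the log.
# install_ca_cert and the optional target-dir argument are assumptions
# for this example, not names from minikube.
install_ca_cert() {
  cert="$1"                       # e.g. /usr/share/ca-certificates/3056.pem
  certdir="${2:-/etc/ssl/certs}"  # overridable so the sketch is testable
  # OpenSSL's subject hash is the file name the trust store looks up.
  hash="$(openssl x509 -hash -noout -in "$cert")"
  # Guarded link, mirroring the log: create it only if not already present.
  test -L "${certdir}/${hash}.0" || ln -fs "$cert" "${certdir}/${hash}.0"
}
```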
	I0310 21:20:31.910272   16712 kubeadm.go:385] StartCluster: {Name:calico-20210310211603-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:calico-20210310211603-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.6 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 21:20:31.920760   16712 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0310 21:20:34.024325   16712 ssh_runner.go:189] Completed: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}: (2.1029824s)
	I0310 21:20:34.039937   16712 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0310 21:20:34.237423   16712 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0310 21:20:34.472139   16712 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	I0310 21:20:34.487892   16712 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0310 21:20:34.694647   16712 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
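The `ls -la` probe above is how the runner decides whether kubeconfig files from a previous cluster need cleanup: exit status 2 (files absent) means there is nothing stale, so cleanup is skipped, which is the expected state on a fresh node. A sketch of that decision under an assumed helper name:

```shell
#!/bin/sh
# Illustrative sketch of the stale-config probe from the log; the helper
# name is an assumption. ls exits non-zero if any listed file is missing,
# so a failed probe (as in the log) means no cleanup is needed.
stale_configs_present() {
  ls -la "$@" >/dev/null 2>&1
}

if stale_configs_present /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
    /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf; then
  echo "stale kubeconfigs found: cleanup needed"
else
  echo "no stale kubeconfigs: skipping cleanup"
fi
```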
	I0310 21:20:34.695105   16712 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0310 21:21:05.945545   16712 out.go:150]   - Generating certificates and keys ...
	I0310 21:21:51.300378   16712 out.go:150]   - Booting up control plane ...
	I0310 21:25:46.096020   16712 out.go:150]   - Configuring RBAC rules ...
	I0310 21:26:16.708840   16712 cni.go:74] Creating CNI manager for "calico"
	I0310 21:26:16.712164   16712 out.go:129] * Configuring Calico (Container Networking Interface) ...
	I0310 21:26:16.712493   16712 cni.go:160] applying CNI manifest using /var/lib/minikube/binaries/v1.20.2/kubectl ...
	I0310 21:26:16.712493   16712 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (22544 bytes)
	I0310 21:26:19.445919   16712 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0310 21:29:27.338514   16712 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (3m7.8932229s)
	I0310 21:29:27.338805   16712 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0310 21:29:27.352560   16712 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0310 21:29:27.358538   16712 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.18.1 minikube.k8s.io/commit=4d52d607da107e3e4541e40b81376520ee87d4c2 minikube.k8s.io/name=calico-20210310211603-6496 minikube.k8s.io/updated_at=2021_03_10T21_29_27_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0310 21:30:49.430113   16712 ssh_runner.go:189] Completed: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": (1m22.0913815s)
	I0310 21:30:49.430542   16712 ops.go:34] apiserver oom_adj: -16
	I0310 21:30:49.430416   16712 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.18.1 minikube.k8s.io/commit=4d52d607da107e3e4541e40b81376520ee87d4c2 minikube.k8s.io/name=calico-20210310211603-6496 minikube.k8s.io/updated_at=2021_03_10T21_29_27_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (1m22.07182s)
	I0310 21:30:49.430778   16712 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (1m22.0784626s)
	I0310 21:30:49.448418   16712 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0310 21:31:05.147147   16712 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (15.6987713s)
	I0310 21:31:05.147147   16712 kubeadm.go:995] duration metric: took 1m37.8080176s to wait for elevateKubeSystemPrivileges.
	I0310 21:31:05.147147   16712 kubeadm.go:387] StartCluster complete in 10m33.2390661s
	I0310 21:31:05.147147   16712 settings.go:142] acquiring lock: {Name:mk153ab5d002fd4991700e22f3eda9a43ee295f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:31:05.147676   16712 settings.go:150] Updating kubeconfig:  C:\Users\jenkins/.kube/config
	I0310 21:31:05.150336   16712 lock.go:36] WriteFile acquiring C:\Users\jenkins/.kube/config: {Name:mk815881434fcb75cb1a8bc7f2c24cf067660bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:31:06.178497   16712 kapi.go:233] deployment "coredns" in namespace "kube-system" and context "calico-20210310211603-6496" rescaled to 1
	I0310 21:31:06.179593   16712 start.go:203] Will wait 5m0s for node up to 
	I0310 21:31:06.179593   16712 addons.go:381] enableAddons start: toEnable=map[], additional=[]
	I0310 21:31:06.179593   16712 addons.go:58] Setting storage-provisioner=true in profile "calico-20210310211603-6496"
	I0310 21:31:06.180123   16712 addons.go:134] Setting addon storage-provisioner=true in "calico-20210310211603-6496"
	W0310 21:31:06.180123   16712 addons.go:143] addon storage-provisioner should already be in state true
	I0310 21:31:06.184165   16712 out.go:129] * Verifying Kubernetes components...
	I0310 21:31:06.179593   16712 addons.go:58] Setting default-storageclass=true in profile "calico-20210310211603-6496"
	I0310 21:31:06.184165   16712 addons.go:284] enableOrDisableStorageClasses default-storageclass=true on "calico-20210310211603-6496"
	I0310 21:31:06.180745   16712 host.go:66] Checking if "calico-20210310211603-6496" exists ...
	I0310 21:31:06.180745   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219220622-3920 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920
	I0310 21:31:06.180745   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210301195830-5700 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700
	I0310 21:31:06.180745   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210212145109-352 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352
	I0310 21:31:06.180745   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120231122-7024 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024
	I0310 21:31:06.180745   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210123004019-5372 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372
	I0310 21:31:06.180745   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210213143925-7440 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440
	I0310 21:31:06.180745   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304184021-4052 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052
	I0310 21:31:06.180745   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210308233820-5396 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396
	I0310 21:31:06.180745   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210225231842-5736 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736
	I0310 21:31:06.180745   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210303214129-4588 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588
	I0310 21:31:06.180745   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106002159-6856 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856
	I0310 21:31:06.180745   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210309234032-4944 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944
	I0310 21:31:06.180745   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120022529-1140 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140
	I0310 21:31:06.180745   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106011107-6492 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492
	I0310 21:31:06.180745   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210128021318-232 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232
	I0310 21:31:06.180745   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210119220838-6552 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552
	I0310 21:31:06.180745   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304002630-1156 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156
	I0310 21:31:06.180745   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210306072141-12056 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056
	I0310 21:31:06.181114   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107190945-8748 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748
	I0310 21:31:06.181114   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210112045103-7160 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160
	I0310 21:31:06.181114   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210114204234-6692 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692
	I0310 21:31:06.181114   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210220004129-7452 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452
	I0310 21:31:06.181114   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106215525-1984 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984
	I0310 21:31:06.180745   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310083645-5040 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040
	I0310 21:31:06.181114   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120175851-7432 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432
	I0310 21:31:06.181114   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310191609-6496 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496
	I0310 21:31:06.181114   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107002220-9088 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088
	I0310 21:31:06.181114   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210105233232-2512 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512
	I0310 21:31:06.181114   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115023213-8464 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464
	I0310 21:31:06.181114   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120214442-10992 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992
	I0310 21:31:06.181114   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210126212539-5172 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172
	I0310 21:31:06.181114   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219145454-9520 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520
	I0310 21:31:06.181114   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115191024-3516 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516
	I0310 21:31:06.186455   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210224014800-800 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800
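The burst of `windows sanitize` lines above records one simple transformation: a cached image name such as `minikube-local-cache-test:functional-...` contains a `:`, which is illegal in a Windows file name, so the tag separator is rewritten to `_` before the cache tarball path is built. The same mapping as a one-liner (the function name is an assumption for illustration):

```shell
#!/bin/sh
# Illustrative sketch of the "windows sanitize" step in the log:
# replace the ':' between image name and tag with '_' so the result
# is a valid Windows file name.
sanitize_image_name() {
  printf '%s\n' "$1" | tr ':' '_'
}
```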
	I0310 21:31:06.302995   16712 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0310 21:31:06.706412   16712 cli_runner.go:115] Run: docker container inspect calico-20210310211603-6496 --format={{.State.Status}}
	I0310 21:31:06.711277   16712 cli_runner.go:115] Run: docker container inspect calico-20210310211603-6496 --format={{.State.Status}}
	I0310 21:31:07.300215   16712 cache.go:93] acquiring lock: {Name:mk634154e9c95d6e5b156154f097cbabdedf9f3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:31:07.300215   16712 cache.go:93] acquiring lock: {Name:mkbc5485bf0e792523a58cf470a7622695547966 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:31:07.300864   16712 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 exists
	I0310 21:31:07.301113   16712 cache.go:82] cache image "minikube-local-cache-test:functional-20210301195830-5700" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210301195830-5700" took 1.1154203s
	I0310 21:31:07.301113   16712 cache.go:66] save to tar file minikube-local-cache-test:functional-20210301195830-5700 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 succeeded
	I0310 21:31:07.301400   16712 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 exists
	I0310 21:31:07.301851   16712 cache.go:82] cache image "minikube-local-cache-test:functional-20210304184021-4052" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210304184021-4052" took 1.1153991s
	I0310 21:31:07.302421   16712 cache.go:66] save to tar file minikube-local-cache-test:functional-20210304184021-4052 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 succeeded
	I0310 21:31:07.332296   16712 cache.go:93] acquiring lock: {Name:mkc9a1c11079e53fedb3439203deb8305be63b2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:31:07.332526   16712 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 exists
	I0310 21:31:07.332526   16712 cache.go:82] cache image "minikube-local-cache-test:functional-20210303214129-4588" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210303214129-4588" took 1.1452289s
	I0310 21:31:07.332526   16712 cache.go:66] save to tar file minikube-local-cache-test:functional-20210303214129-4588 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 succeeded
	I0310 21:31:07.340382   16712 cache.go:93] acquiring lock: {Name:mkf6f90f079186654799fde8101b48612aa6f339 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:31:07.340917   16712 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 exists
	I0310 21:31:07.341368   16712 cache.go:82] cache image "minikube-local-cache-test:functional-20210212145109-352" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210212145109-352" took 1.1556756s
	I0310 21:31:07.341650   16712 cache.go:66] save to tar file minikube-local-cache-test:functional-20210212145109-352 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 succeeded
	I0310 21:31:07.364124   16712 cache.go:93] acquiring lock: {Name:mkd8dd26dee4471c50a16459e3e56a843fbe7183 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:31:07.365598   16712 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 exists
	I0310 21:31:07.365598   16712 cache.go:93] acquiring lock: {Name:mkf96894dc732adcd1c856f98a56d65b2646f03e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:31:07.365912   16712 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 exists
	I0310 21:31:07.366100   16712 cache.go:82] cache image "minikube-local-cache-test:functional-20210115191024-3516" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210115191024-3516" took 1.1556892s
	I0310 21:31:07.366218   16712 cache.go:66] save to tar file minikube-local-cache-test:functional-20210115191024-3516 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 succeeded
	I0310 21:31:07.365912   16712 cache.go:82] cache image "minikube-local-cache-test:functional-20210120231122-7024" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120231122-7024" took 1.1799742s
	I0310 21:31:07.366382   16712 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120231122-7024 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 succeeded
	I0310 21:31:07.390292   16712 cache.go:93] acquiring lock: {Name:mk5de4935501776b790bd29801e913c817cce9cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:31:07.390292   16712 cache.go:93] acquiring lock: {Name:mkfbc537176e4a7054a8ff78a35c4c45ad4889d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:31:07.390816   16712 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 exists
	I0310 21:31:07.390816   16712 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 exists
	I0310 21:31:07.391105   16712 cache.go:82] cache image "minikube-local-cache-test:functional-20210123004019-5372" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210123004019-5372" took 1.2050236s
	I0310 21:31:07.391105   16712 cache.go:82] cache image "minikube-local-cache-test:functional-20210310191609-6496" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210310191609-6496" took 1.1840352s
	I0310 21:31:07.391105   16712 cache.go:66] save to tar file minikube-local-cache-test:functional-20210123004019-5372 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 succeeded
	I0310 21:31:07.391105   16712 cache.go:66] save to tar file minikube-local-cache-test:functional-20210310191609-6496 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 succeeded
	I0310 21:31:07.408545   16712 cache.go:93] acquiring lock: {Name:mk17b3617b8bc7c68f0fe3347037485ee44000e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:31:07.409404   16712 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 exists
	I0310 21:31:07.409404   16712 cache.go:82] cache image "minikube-local-cache-test:functional-20210225231842-5736" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210225231842-5736" took 1.2229527s
	I0310 21:31:07.409404   16712 cache.go:66] save to tar file minikube-local-cache-test:functional-20210225231842-5736 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 succeeded
	I0310 21:31:07.440188   16712 cache.go:93] acquiring lock: {Name:mk84b2a6095b735cf889c519b5874f080b2e195a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:31:07.441514   16712 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 exists
	I0310 21:31:07.442213   16712 cache.go:82] cache image "minikube-local-cache-test:functional-20210219220622-3920" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210219220622-3920" took 1.2568291s
	I0310 21:31:07.442213   16712 cache.go:66] save to tar file minikube-local-cache-test:functional-20210219220622-3920 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 succeeded
	I0310 21:31:07.443960   16712 cache.go:93] acquiring lock: {Name:mk413751f23d1919a2f2162501025c6af3a2ad81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:31:07.444143   16712 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 exists
	I0310 21:31:07.444143   16712 cache.go:82] cache image "minikube-local-cache-test:functional-20210106002159-6856" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106002159-6856" took 1.2554495s
	I0310 21:31:07.444143   16712 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106002159-6856 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 succeeded
	I0310 21:31:07.457314   16712 cache.go:93] acquiring lock: {Name:mk67b81c694fa10d152b7bddece57d430edf9ebf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:31:07.458237   16712 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 exists
	I0310 21:31:07.458605   16712 cache.go:82] cache image "minikube-local-cache-test:functional-20210308233820-5396" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210308233820-5396" took 1.2721538s
	I0310 21:31:07.458831   16712 cache.go:66] save to tar file minikube-local-cache-test:functional-20210308233820-5396 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 succeeded
	I0310 21:31:07.478035   16712 cache.go:93] acquiring lock: {Name:mk1b277a131d0149dc1f34c6a5df09591c284c3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:31:07.478035   16712 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 exists
	I0310 21:31:07.479401   16712 cache.go:82] cache image "minikube-local-cache-test:functional-20210128021318-232" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210128021318-232" took 1.2834304s
	I0310 21:31:07.479401   16712 cache.go:66] save to tar file minikube-local-cache-test:functional-20210128021318-232 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 succeeded
	I0310 21:31:07.487845   16712 cache.go:93] acquiring lock: {Name:mkab31196e3bf71b9c1e6a1e38e57ec6fb030bbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:31:07.488828   16712 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 exists
	I0310 21:31:07.489166   16712 cache.go:82] cache image "minikube-local-cache-test:functional-20210220004129-7452" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210220004129-7452" took 1.2850034s
	I0310 21:31:07.489166   16712 cache.go:66] save to tar file minikube-local-cache-test:functional-20210220004129-7452 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 succeeded
	I0310 21:31:07.500153   16712 cache.go:93] acquiring lock: {Name:mkb552f0ca2d9ea9965feba56885295e4020632a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:31:07.501305   16712 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 exists
	I0310 21:31:07.502148   16712 cache.go:82] cache image "minikube-local-cache-test:functional-20210106011107-6492" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106011107-6492" took 1.3070625s
	I0310 21:31:07.502148   16712 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106011107-6492 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 succeeded
	I0310 21:31:07.537153   16712 cache.go:93] acquiring lock: {Name:mka2d29141752ca0c15ce625b99d3e259a454634 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:31:07.537153   16712 cache.go:93] acquiring lock: {Name:mk9829358ec5b615719a34ef2b4c8c5314131bbf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:31:07.538151   16712 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 exists
	I0310 21:31:07.538622   16712 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 exists
	I0310 21:31:07.538622   16712 cache.go:93] acquiring lock: {Name:mk6cdb668632330066d74bea74662e26e6c7633f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:31:07.539145   16712 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 exists
	I0310 21:31:07.539145   16712 cache.go:82] cache image "minikube-local-cache-test:functional-20210309234032-4944" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210309234032-4944" took 1.3499527s
	I0310 21:31:07.539145   16712 cache.go:66] save to tar file minikube-local-cache-test:functional-20210309234032-4944 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 succeeded
	I0310 21:31:07.539145   16712 cache.go:82] cache image "minikube-local-cache-test:functional-20210105233232-2512" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210105233232-2512" took 1.332076s
	I0310 21:31:07.539145   16712 cache.go:66] save to tar file minikube-local-cache-test:functional-20210105233232-2512 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 succeeded
	I0310 21:31:07.539145   16712 cache.go:82] cache image "minikube-local-cache-test:functional-20210106215525-1984" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106215525-1984" took 1.3349825s
	I0310 21:31:07.539145   16712 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106215525-1984 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 succeeded
	I0310 21:31:07.590973   16712 cache.go:93] acquiring lock: {Name:mk3f9eb5a6922e3da2b5e642fe1460b5c7a33453 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:31:07.591965   16712 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 exists
	I0310 21:31:07.593970   16712 cache.go:82] cache image "minikube-local-cache-test:functional-20210107190945-8748" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210107190945-8748" took 1.393638s
	I0310 21:31:07.593970   16712 cache.go:66] save to tar file minikube-local-cache-test:functional-20210107190945-8748 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 succeeded
	I0310 21:31:07.598228   16712 cache.go:93] acquiring lock: {Name:mkcc9db267470950a8bd1fd66660e4d7ce7fb11a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:31:07.599994   16712 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 exists
	I0310 21:31:07.602011   16712 cache.go:93] acquiring lock: {Name:mk6a939d4adc5b1a82c643cd3a34748a52c3e47d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:31:07.603004   16712 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 exists
	I0310 21:31:07.603004   16712 cache.go:82] cache image "minikube-local-cache-test:functional-20210120175851-7432" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120175851-7432" took 1.3967484s
	I0310 21:31:07.603004   16712 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120175851-7432 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 succeeded
	I0310 21:31:07.603982   16712 cache.go:82] cache image "minikube-local-cache-test:functional-20210112045103-7160" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210112045103-7160" took 1.4036502s
	I0310 21:31:07.603982   16712 cache.go:66] save to tar file minikube-local-cache-test:functional-20210112045103-7160 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 succeeded
	I0310 21:31:07.607965   16712 cache.go:93] acquiring lock: {Name:mkd8c6f272dd5cb91af2d272705820baa75c5410 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:31:07.608964   16712 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 exists
	I0310 21:31:07.613974   16712 cache.go:82] cache image "minikube-local-cache-test:functional-20210120214442-10992" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120214442-10992" took 1.4045303s
	I0310 21:31:07.613974   16712 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120214442-10992 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 succeeded
	I0310 21:31:07.660630   16712 cache.go:93] acquiring lock: {Name:mkad0f7b57f74c6c730129cb06800211b2e1dbab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:31:07.661328   16712 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 exists
	I0310 21:31:07.661762   16712 cache.go:82] cache image "minikube-local-cache-test:functional-20210120022529-1140" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120022529-1140" took 1.4721842s
	I0310 21:31:07.661762   16712 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120022529-1140 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 succeeded
	I0310 21:31:07.673969   16712 cache.go:93] acquiring lock: {Name:mkf74fc1bdd437dc31195924ffc024252ed6282c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:31:07.674653   16712 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 exists
	I0310 21:31:07.674851   16712 cache.go:82] cache image "minikube-local-cache-test:functional-20210304002630-1156" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210304002630-1156" took 1.4754401s
	I0310 21:31:07.675151   16712 cache.go:66] save to tar file minikube-local-cache-test:functional-20210304002630-1156 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 succeeded
	I0310 21:31:07.693557   16712 cache.go:93] acquiring lock: {Name:mk6e311fb193a5d30b249afa7255673dd7fc56b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:31:07.694467   16712 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 exists
	I0310 21:31:07.695234   16712 cache.go:82] cache image "minikube-local-cache-test:functional-20210107002220-9088" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210107002220-9088" took 1.4879015s
	I0310 21:31:07.695234   16712 cache.go:66] save to tar file minikube-local-cache-test:functional-20210107002220-9088 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 succeeded
	I0310 21:31:07.702012   16712 cache.go:93] acquiring lock: {Name:mkfe8ccab311cf6d2666a7508a8e979857b9770b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:31:07.703193   16712 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 exists
	I0310 21:31:07.703645   16712 cache.go:93] acquiring lock: {Name:mk74beba772a17b6c0792b37e1f3c84b8ae19a48 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:31:07.704019   16712 cache.go:82] cache image "minikube-local-cache-test:functional-20210219145454-9520" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210219145454-9520" took 1.4939153s
	I0310 21:31:07.704910   16712 cache.go:66] save to tar file minikube-local-cache-test:functional-20210219145454-9520 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 succeeded
	I0310 21:31:07.704910   16712 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 exists
	I0310 21:31:07.705504   16712 cache.go:93] acquiring lock: {Name:mk3b31b5d9c66e58bae5a84d594af5a71c06fef6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:31:07.705504   16712 cache.go:82] cache image "minikube-local-cache-test:functional-20210119220838-6552" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210119220838-6552" took 1.5086885s
	I0310 21:31:07.705872   16712 cache.go:66] save to tar file minikube-local-cache-test:functional-20210119220838-6552 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 succeeded
	I0310 21:31:07.705872   16712 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 exists
	I0310 21:31:07.705872   16712 cache.go:82] cache image "minikube-local-cache-test:functional-20210114204234-6692" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210114204234-6692" took 1.502099s
	I0310 21:31:07.705872   16712 cache.go:66] save to tar file minikube-local-cache-test:functional-20210114204234-6692 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 succeeded
	I0310 21:31:07.707370   16712 cache.go:93] acquiring lock: {Name:mk30e0addf8d941e729fce2e9e6e58f4831fa9bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:31:07.707370   16712 cache.go:93] acquiring lock: {Name:mk0c64ba734a0cdbeae55b08bb0b1b6723a680c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:31:07.707370   16712 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 exists
	I0310 21:31:07.707853   16712 cache.go:82] cache image "minikube-local-cache-test:functional-20210115023213-8464" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210115023213-8464" took 1.4997916s
	I0310 21:31:07.707853   16712 cache.go:66] save to tar file minikube-local-cache-test:functional-20210115023213-8464 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 succeeded
	I0310 21:31:07.707853   16712 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 exists
	I0310 21:31:07.708508   16712 cache.go:82] cache image "minikube-local-cache-test:functional-20210310083645-5040" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210310083645-5040" took 1.5038857s
	I0310 21:31:07.708508   16712 cache.go:66] save to tar file minikube-local-cache-test:functional-20210310083645-5040 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 succeeded
	I0310 21:31:07.711453   16712 cache.go:93] acquiring lock: {Name:mkb0cb73f942a657cd3f168830d30cb3598567a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:31:07.711892   16712 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 exists
	I0310 21:31:07.712414   16712 cache.go:93] acquiring lock: {Name:mk5d79a216b121a22277fa476959e69d0268a006 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:31:07.712414   16712 cache.go:82] cache image "minikube-local-cache-test:functional-20210306072141-12056" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210306072141-12056" took 1.5123536s
	I0310 21:31:07.712797   16712 cache.go:66] save to tar file minikube-local-cache-test:functional-20210306072141-12056 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 succeeded
	I0310 21:31:07.712936   16712 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 exists
	I0310 21:31:07.713837   16712 cache.go:82] cache image "minikube-local-cache-test:functional-20210224014800-800" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210224014800-800" took 1.5025565s
	I0310 21:31:07.713970   16712 cache.go:66] save to tar file minikube-local-cache-test:functional-20210224014800-800 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 succeeded
	I0310 21:31:07.715539   16712 cache.go:93] acquiring lock: {Name:mk5aaf725ee95074b60d5acdb56999da11d0d967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:31:07.716349   16712 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 exists
	I0310 21:31:07.716349   16712 cache.go:93] acquiring lock: {Name:mk5795abf13cc8b7192a417aee0e32dee2b0467c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:31:07.716951   16712 cache.go:82] cache image "minikube-local-cache-test:functional-20210213143925-7440" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210213143925-7440" took 1.5302964s
	I0310 21:31:07.716951   16712 cache.go:66] save to tar file minikube-local-cache-test:functional-20210213143925-7440 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 succeeded
	I0310 21:31:07.716951   16712 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 exists
	I0310 21:31:07.717506   16712 cache.go:82] cache image "minikube-local-cache-test:functional-20210126212539-5172" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210126212539-5172" took 1.5080622s
	I0310 21:31:07.717860   16712 cache.go:66] save to tar file minikube-local-cache-test:functional-20210126212539-5172 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 succeeded
	I0310 21:31:07.717860   16712 cache.go:73] Successfully saved all images to host disk.
	I0310 21:31:07.744477   16712 cli_runner.go:115] Run: docker container inspect calico-20210310211603-6496 --format={{.State.Status}}
	I0310 21:31:07.982835   16712 cli_runner.go:168] Completed: docker container inspect calico-20210310211603-6496 --format={{.State.Status}}: (1.2715613s)
	I0310 21:31:08.036060   16712 cli_runner.go:168] Completed: docker container inspect calico-20210310211603-6496 --format={{.State.Status}}: (1.3296521s)
	I0310 21:31:08.048858   16712 out.go:129]   - Using image gcr.io/k8s-minikube/storage-provisioner:v4
	I0310 21:31:08.056998   16712 addons.go:253] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0310 21:31:08.057582   16712 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0310 21:31:08.067435   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:08.417270   16712 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 21:31:08.429190   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:08.727597   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:09.078741   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:10.292256   16712 ssh_runner.go:189] Completed: sudo systemctl is-active --quiet service kubelet: (3.9890946s)
	I0310 21:31:10.311264   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:10.926067   16712 pod_ready.go:36] extra waiting for kube-system core pods [kube-dns etcd kube-apiserver kube-controller-manager kube-proxy kube-scheduler] to be Ready ...
	I0310 21:31:10.926067   16712 pod_ready.go:59] waiting 5m0s for pod with "kube-dns" label in "kube-system" namespace to be Ready ...
	I0310 21:31:13.293710   16712 addons.go:134] Setting addon default-storageclass=true in "calico-20210310211603-6496"
	W0310 21:31:13.293869   16712 addons.go:143] addon default-storageclass should already be in state true
	I0310 21:31:13.294257   16712 host.go:66] Checking if "calico-20210310211603-6496" exists ...
	I0310 21:31:13.313521   16712 cli_runner.go:115] Run: docker container inspect calico-20210310211603-6496 --format={{.State.Status}}
	I0310 21:31:13.914223   16712 addons.go:253] installing /etc/kubernetes/addons/storageclass.yaml
	I0310 21:31:13.914435   16712 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0310 21:31:13.925236   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:14.547869   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:14.683408   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:15.920042   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:16.978964   16712 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0310 21:31:17.515398   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:18.539860   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:19.865041   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:21.423405   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:22.533635   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:22.677056   16712 ssh_runner.go:189] Completed: docker images --format {{.Repository}}:{{.Tag}}: (14.2598232s)
	I0310 21:31:22.677056   16712 docker.go:423] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.20.2
	k8s.gcr.io/kube-controller-manager:v1.20.2
	k8s.gcr.io/kube-apiserver:v1.20.2
	k8s.gcr.io/kube-scheduler:v1.20.2
	kubernetesui/dashboard:v2.1.0
	gcr.io/k8s-minikube/storage-provisioner:v4
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0310 21:31:22.677056   16712 docker.go:429] minikube-local-cache-test:functional-20210106215525-1984 wasn't preloaded
	I0310 21:31:22.677056   16712 cache_images.go:76] LoadImages start: [minikube-local-cache-test:functional-20210106215525-1984 minikube-local-cache-test:functional-20210107002220-9088 minikube-local-cache-test:functional-20210115191024-3516 minikube-local-cache-test:functional-20210120175851-7432 minikube-local-cache-test:functional-20210219145454-9520 minikube-local-cache-test:functional-20210224014800-800 minikube-local-cache-test:functional-20210301195830-5700 minikube-local-cache-test:functional-20210212145109-352 minikube-local-cache-test:functional-20210120231122-7024 minikube-local-cache-test:functional-20210123004019-5372 minikube-local-cache-test:functional-20210213143925-7440 minikube-local-cache-test:functional-20210219220622-3920 minikube-local-cache-test:functional-20210225231842-5736 minikube-local-cache-test:functional-20210303214129-4588 minikube-local-cache-test:functional-20210309234032-4944 minikube-local-cache-test:functional-20210106011107-6492 minikube-local-cache-test:functional-20210119220838-6552 minikube-local-cache-test:functional-20210306072141-12056 minikube-local-cache-test:functional-20210107190945-8748 minikube-local-cache-test:functional-20210112045103-7160 minikube-local-cache-test:functional-20210114204234-6692 minikube-local-cache-test:functional-20210220004129-7452 minikube-local-cache-test:functional-20210308233820-5396 minikube-local-cache-test:functional-20210106002159-6856 minikube-local-cache-test:functional-20210120022529-1140 minikube-local-cache-test:functional-20210128021318-232 minikube-local-cache-test:functional-20210304002630-1156 minikube-local-cache-test:functional-20210310083645-5040 minikube-local-cache-test:functional-20210310191609-6496 minikube-local-cache-test:functional-20210105233232-2512 minikube-local-cache-test:functional-20210115023213-8464 minikube-local-cache-test:functional-20210120214442-10992 minikube-local-cache-test:functional-20210126212539-5172 minikube-local-cache-test:functional-20210304184021-4052]
	I0310 21:31:22.776299   16712 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210225231842-5736
	I0310 21:31:22.778287   16712 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210304184021-4052
	I0310 21:31:22.785283   16712 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210306072141-12056
	I0310 21:31:22.785283   16712 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210220004129-7452
	I0310 21:31:22.788352   16712 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210126212539-5172
	I0310 21:31:22.795393   16712 image.go:168] retrieving image: minikube-local-cache-test:functional-20210105233232-2512
	I0310 21:31:22.838051   16712 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210219220622-3920
	I0310 21:31:22.874870   16712 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210308233820-5396
	I0310 21:31:22.923408   16712 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120214442-10992
	I0310 21:31:22.923408   16712 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210105233232-2512: Error response from daemon: reference does not exist
	I0310 21:31:22.925052   16712 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210304002630-1156
	I0310 21:31:22.936092   16712 image.go:168] retrieving image: minikube-local-cache-test:functional-20210106002159-6856
	I0310 21:31:22.942088   16712 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120231122-7024
	I0310 21:31:22.955559   16712 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210106002159-6856: Error response from daemon: reference does not exist
	I0310 21:31:23.000006   16712 image.go:168] retrieving image: minikube-local-cache-test:functional-20210107190945-8748
	I0310 21:31:23.009402   16712 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210119220838-6552
	I0310 21:31:23.034830   16712 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210123004019-5372
	I0310 21:31:23.058398   16712 image.go:168] retrieving image: minikube-local-cache-test:functional-20210112045103-7160
	I0310 21:31:23.067728   16712 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210107190945-8748: Error response from daemon: reference does not exist
	W0310 21:31:23.078350   16712 image.go:185] authn lookup for minikube-local-cache-test:functional-20210105233232-2512 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 21:31:23.078892   16712 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210128021318-232
	I0310 21:31:23.115015   16712 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210112045103-7160: Error response from daemon: reference does not exist
	W0310 21:31:23.120636   16712 image.go:185] authn lookup for minikube-local-cache-test:functional-20210106002159-6856 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 21:31:23.134172   16712 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210213143925-7440
	I0310 21:31:23.158697   16712 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210115191024-3516
	I0310 21:31:23.160697   16712 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210212145109-352
	I0310 21:31:23.178475   16712 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210301195830-5700
	I0310 21:31:23.179151   16712 image.go:168] retrieving image: minikube-local-cache-test:functional-20210107002220-9088
	I0310 21:31:23.202004   16712 image.go:168] retrieving image: minikube-local-cache-test:functional-20210106011107-6492
	I0310 21:31:23.207006   16712 image.go:168] retrieving image: minikube-local-cache-test:functional-20210106215525-1984
	I0310 21:31:23.209012   16712 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210310191609-6496
	I0310 21:31:23.220999   16712 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210303214129-4588
	W0310 21:31:23.244106   16712 image.go:185] authn lookup for minikube-local-cache-test:functional-20210107190945-8748 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 21:31:23.257177   16712 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210106215525-1984: Error response from daemon: reference does not exist
	I0310 21:31:23.266334   16712 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210107002220-9088: Error response from daemon: reference does not exist
	I0310 21:31:23.291414   16712 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210114204234-6692
	W0310 21:31:23.291414   16712 image.go:185] authn lookup for minikube-local-cache-test:functional-20210112045103-7160 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 21:31:23.303253   16712 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120022529-1140
	I0310 21:31:23.306658   16712 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210106011107-6492: Error response from daemon: reference does not exist
	I0310 21:31:23.359295   16712 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210115023213-8464
	I0310 21:31:23.370660   16712 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210219145454-9520
	I0310 21:31:23.378548   16712 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210224014800-800
	I0310 21:31:23.402842   16712 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120175851-7432
	I0310 21:31:23.406820   16712 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210106002159-6856 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210106002159-6856: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 21:31:23.407821   16712 cache_images.go:104] "minikube-local-cache-test:functional-20210106002159-6856" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210106002159-6856
	I0310 21:31:23.407821   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106002159-6856 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856
	I0310 21:31:23.407821   16712 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856
	I0310 21:31:23.409822   16712 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210105233232-2512 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210105233232-2512: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 21:31:23.409822   16712 cache_images.go:104] "minikube-local-cache-test:functional-20210105233232-2512" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210105233232-2512
	I0310 21:31:23.409822   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210105233232-2512 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512
	I0310 21:31:23.409822   16712 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512
	I0310 21:31:23.409822   16712 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210309234032-4944
	I0310 21:31:23.419838   16712 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210107190945-8748 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210107190945-8748: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 21:31:23.419838   16712 cache_images.go:104] "minikube-local-cache-test:functional-20210107190945-8748" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210107190945-8748
	I0310 21:31:23.419838   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107190945-8748 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748
	I0310 21:31:23.420845   16712 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748
	I0310 21:31:23.420845   16712 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856
	I0310 21:31:23.423812   16712 ssh_runner.go:149] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210310083645-5040
	I0310 21:31:23.423812   16712 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512
	I0310 21:31:23.428824   16712 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210112045103-7160 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210112045103-7160: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 21:31:23.428824   16712 cache_images.go:104] "minikube-local-cache-test:functional-20210112045103-7160" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210112045103-7160
	I0310 21:31:23.428824   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210112045103-7160 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160
	I0310 21:31:23.428824   16712 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160
	I0310 21:31:23.433836   16712 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748
	I0310 21:31:23.439835   16712 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160
	W0310 21:31:23.453260   16712 image.go:185] authn lookup for minikube-local-cache-test:functional-20210106215525-1984 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0310 21:31:23.470959   16712 image.go:185] authn lookup for minikube-local-cache-test:functional-20210107002220-9088 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0310 21:31:23.477131   16712 image.go:185] authn lookup for minikube-local-cache-test:functional-20210106011107-6492 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0310 21:31:23.599834   16712 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210106215525-1984 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210106215525-1984: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 21:31:23.600166   16712 cache_images.go:104] "minikube-local-cache-test:functional-20210106215525-1984" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210106215525-1984
	I0310 21:31:23.600357   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106215525-1984 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984
	I0310 21:31:23.600357   16712 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984
	I0310 21:31:23.611706   16712 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984
	I0310 21:31:23.622269   16712 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210107002220-9088 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210107002220-9088: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 21:31:23.622269   16712 cache_images.go:104] "minikube-local-cache-test:functional-20210107002220-9088" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210107002220-9088
	I0310 21:31:23.622590   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107002220-9088 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088
	I0310 21:31:23.622590   16712 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088
	I0310 21:31:23.631150   16712 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210106011107-6492 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210106011107-6492: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0310 21:31:23.631150   16712 cache_images.go:104] "minikube-local-cache-test:functional-20210106011107-6492" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210106011107-6492
	I0310 21:31:23.631150   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106011107-6492 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492
	I0310 21:31:23.631150   16712 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492
	I0310 21:31:23.635886   16712 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088
	I0310 21:31:23.649602   16712 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492
	W0310 21:31:26.076689   16712 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:31:26.076689   16712 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:31:26.076689   16712 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492: NewSession: ssh: rejected: connect failed (open failed)
	W0310 21:31:26.076689   16712 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:31:26.076689   16712 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:31:26.076689   16712 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856: NewSession: ssh: rejected: connect failed (open failed)
	I0310 21:31:26.076689   16712 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748: NewSession: ssh: rejected: connect failed (open failed)
	W0310 21:31:26.076689   16712 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:31:26.076689   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492 (4096 bytes)
	I0310 21:31:26.076689   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856 (4096 bytes)
	W0310 21:31:26.076689   16712 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:31:26.076689   16712 cache_images.go:104] "minikube-local-cache-test:functional-20210224014800-800" needs transfer: "minikube-local-cache-test:functional-20210224014800-800" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:31:26.076689   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748 (4096 bytes)
	I0310 21:31:26.076689   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210224014800-800 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800
	W0310 21:31:26.076689   16712 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:31:26.076689   16712 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:31:26.076689   16712 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:31:26.076689   16712 cache_images.go:104] "minikube-local-cache-test:functional-20210119220838-6552" needs transfer: "minikube-local-cache-test:functional-20210119220838-6552" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	W0310 21:31:26.077704   16712 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:31:26.077704   16712 cache_images.go:104] "minikube-local-cache-test:functional-20210128021318-232" needs transfer: "minikube-local-cache-test:functional-20210128021318-232" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	W0310 21:31:26.077704   16712 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:31:26.077704   16712 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:31:26.077967   16712 cache_images.go:104] "minikube-local-cache-test:functional-20210115191024-3516" needs transfer: "minikube-local-cache-test:functional-20210115191024-3516" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:31:26.077967   16712 cache_images.go:104] "minikube-local-cache-test:functional-20210213143925-7440" needs transfer: "minikube-local-cache-test:functional-20210213143925-7440" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	W0310 21:31:26.076689   16712 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:31:26.078526   16712 cache_images.go:104] "minikube-local-cache-test:functional-20210115023213-8464" needs transfer: "minikube-local-cache-test:functional-20210115023213-8464" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:31:26.078407   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210213143925-7440 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440
	I0310 21:31:26.078770   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115023213-8464 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464
	I0310 21:31:26.078770   16712 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440
	I0310 21:31:26.078770   16712 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464
	I0310 21:31:26.077967   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115191024-3516 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516
	I0310 21:31:26.079002   16712 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516
	I0310 21:31:26.077967   16712 cache_images.go:104] "minikube-local-cache-test:functional-20210303214129-4588" needs transfer: "minikube-local-cache-test:functional-20210303214129-4588" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:31:26.079002   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210303214129-4588 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588
	I0310 21:31:26.079002   16712 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588
	I0310 21:31:26.076689   16712 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800
	W0310 21:31:26.076689   16712 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:31:26.076689   16712 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:31:26.079762   16712 cache_images.go:104] "minikube-local-cache-test:functional-20210219145454-9520" needs transfer: "minikube-local-cache-test:functional-20210219145454-9520" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:31:26.079762   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219145454-9520 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520
	W0310 21:31:26.076689   16712 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:31:26.077704   16712 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:31:26.077704   16712 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:31:26.077704   16712 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:31:26.077704   16712 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:31:26.077704   16712 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:31:26.077704   16712 cache_images.go:104] "minikube-local-cache-test:functional-20210310083645-5040" needs transfer: "minikube-local-cache-test:functional-20210310083645-5040" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	W0310 21:31:26.077704   16712 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	W0310 21:31:26.077704   16712 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:31:26.076689   16712 cache_images.go:104] "minikube-local-cache-test:functional-20210120022529-1140" needs transfer: "minikube-local-cache-test:functional-20210120022529-1140" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	W0310 21:31:26.077704   16712 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:31:26.077967   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210119220838-6552 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552
	I0310 21:31:26.077967   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210128021318-232 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232
	I0310 21:31:26.077967   16712 cache_images.go:104] "minikube-local-cache-test:functional-20210310191609-6496" needs transfer: "minikube-local-cache-test:functional-20210310191609-6496" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:31:26.079762   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310191609-6496 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496
	I0310 21:31:26.079762   16712 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496
	I0310 21:31:26.079762   16712 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984: NewSession: ssh: rejected: connect failed (open failed)
	I0310 21:31:26.079762   16712 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512: NewSession: ssh: rejected: connect failed (open failed)
	I0310 21:31:26.079762   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984 (4096 bytes)
	I0310 21:31:26.079762   16712 cache_images.go:104] "minikube-local-cache-test:functional-20210301195830-5700" needs transfer: "minikube-local-cache-test:functional-20210301195830-5700" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:31:26.079762   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210301195830-5700 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700
	I0310 21:31:26.079762   16712 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232
	I0310 21:31:26.079762   16712 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088: NewSession: ssh: rejected: connect failed (open failed)
	I0310 21:31:26.079762   16712 cache_images.go:104] "minikube-local-cache-test:functional-20210212145109-352" needs transfer: "minikube-local-cache-test:functional-20210212145109-352" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:31:26.079762   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210212145109-352 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352
	I0310 21:31:26.079762   16712 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520
	I0310 21:31:26.079762   16712 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352
	I0310 21:31:26.079762   16712 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160: NewSession: ssh: rejected: connect failed (open failed)
	I0310 21:31:26.079762   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088 (4096 bytes)
	I0310 21:31:26.079762   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160 (4096 bytes)
	I0310 21:31:26.079762   16712 cache_images.go:104] "minikube-local-cache-test:functional-20210123004019-5372" needs transfer: "minikube-local-cache-test:functional-20210123004019-5372" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:31:26.085342   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210123004019-5372 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372
	I0310 21:31:26.085615   16712 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372
	I0310 21:31:26.079762   16712 cache_images.go:104] "minikube-local-cache-test:functional-20210120175851-7432" needs transfer: "minikube-local-cache-test:functional-20210120175851-7432" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:31:26.086177   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120175851-7432 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432
	I0310 21:31:26.086177   16712 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432
	I0310 21:31:26.079762   16712 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700
	I0310 21:31:26.079762   16712 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552
	I0310 21:31:26.079762   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310083645-5040 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040
	I0310 21:31:26.087702   16712 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040
	I0310 21:31:26.079762   16712 cache_images.go:104] "minikube-local-cache-test:functional-20210309234032-4944" needs transfer: "minikube-local-cache-test:functional-20210309234032-4944" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:31:26.087848   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210309234032-4944 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944
	I0310 21:31:26.087848   16712 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944
	I0310 21:31:26.079762   16712 cache_images.go:104] "minikube-local-cache-test:functional-20210114204234-6692" needs transfer: "minikube-local-cache-test:functional-20210114204234-6692" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:31:26.088493   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210114204234-6692 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692
	I0310 21:31:26.088493   16712 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692
	I0310 21:31:26.079762   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120022529-1140 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140
	I0310 21:31:26.089767   16712 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140
	I0310 21:31:26.079762   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512 (4096 bytes)
	I0310 21:31:26.234186   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:26.260424   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:26.345322   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:26.367545   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:26.375052   16712 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440
	I0310 21:31:26.376055   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:26.382352   16712 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588
	I0310 21:31:26.385304   16712 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516
	I0310 21:31:26.414330   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:26.414330   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:26.432176   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:26.439160   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:26.443609   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:26.447786   16712 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944
	I0310 21:31:26.454751   16712 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372
	I0310 21:31:26.471727   16712 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692
	I0310 21:31:26.477525   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:26.482286   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:26.501469   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:26.528387   16712 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552
	I0310 21:31:26.539350   16712 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040
	I0310 21:31:26.540518   16712 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232
	I0310 21:31:26.540518   16712 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496
	I0310 21:31:26.564230   16712 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520
	I0310 21:31:26.570356   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:26.603844   16712 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432
	I0310 21:31:26.605076   16712 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700
	I0310 21:31:26.614894   16712 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140
	I0310 21:31:26.614894   16712 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800
	I0310 21:31:26.615343   16712 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352
	I0310 21:31:26.615343   16712 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464
	I0310 21:31:26.645868   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:26.651794   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:26.696071   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:26.730686   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:26.804354   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:26.804354   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:26.806033   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:26.807834   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:26.811273   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:26.811800   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:27.789210   16712 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496: (1.5550277s)
	I0310 21:31:27.789487   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:27.881738   16712 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496: (1.6207407s)
	I0310 21:31:27.881738   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:27.997959   16712 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496: (1.5836331s)
	I0310 21:31:27.998838   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:28.159744   16712 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496: (1.7161391s)
	I0310 21:31:28.160073   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:28.172400   16712 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496: (1.8268967s)
	I0310 21:31:28.172719   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:28.186459   16712 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496: (1.8104089s)
	I0310 21:31:28.187359   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:28.214389   16712 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496: (1.7129237s)
	I0310 21:31:28.215185   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:28.229892   16712 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496: (1.7907365s)
	I0310 21:31:28.230180   16712 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496: (1.798008s)
	I0310 21:31:28.230180   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:28.231644   16712 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496: (1.6612925s)
	I0310 21:31:28.230180   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:28.232040   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:28.369224   16712 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496: (1.5648735s)
	I0310 21:31:28.370010   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:28.371147   16712 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496: (1.5633169s)
	I0310 21:31:28.371147   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:28.385241   16712 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496: (1.7390496s)
	I0310 21:31:28.385241   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:28.393564   16712 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496: (1.9792388s)
	I0310 21:31:28.394000   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:28.400418   16712 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496: (1.5888433s)
	I0310 21:31:28.400418   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:28.403587   16712 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496: (2.0360472s)
	I0310 21:31:28.403934   16712 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496: (1.6732524s)
	I0310 21:31:28.403934   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:28.403934   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:28.458565   16712 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496: (1.652536s)
	I0310 21:31:28.459116   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:28.459968   16712 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496: (1.9824479s)
	I0310 21:31:28.459968   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:28.537791   16712 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496: (1.8417248s)
	I0310 21:31:28.540719   16712 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496: (2.0584382s)
	I0310 21:31:28.540719   16712 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496: (1.8889298s)
	I0310 21:31:28.540719   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:28.540719   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:28.541797   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:28.541797   16712 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496: (1.7300014s)
	I0310 21:31:28.541797   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:28.554764   16712 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496: (1.7504144s)
	I0310 21:31:28.554764   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:29.341227   16712 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0310 21:31:30.046910   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:31.580806   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:32.684397   16712 cache_images.go:104] "minikube-local-cache-test:functional-20210304002630-1156" needs transfer: "minikube-local-cache-test:functional-20210304002630-1156" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:31:32.684397   16712 cache_images.go:104] "minikube-local-cache-test:functional-20210304184021-4052" needs transfer: "minikube-local-cache-test:functional-20210304184021-4052" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:31:32.684397   16712 cache_images.go:104] "minikube-local-cache-test:functional-20210219220622-3920" needs transfer: "minikube-local-cache-test:functional-20210219220622-3920" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:31:32.684397   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304184021-4052 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052
	I0310 21:31:32.684397   16712 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052
	I0310 21:31:32.684397   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219220622-3920 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920
	I0310 21:31:32.684397   16712 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920
	I0310 21:31:32.684397   16712 cache_images.go:104] "minikube-local-cache-test:functional-20210225231842-5736" needs transfer: "minikube-local-cache-test:functional-20210225231842-5736" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:31:32.684397   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210225231842-5736 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736
	I0310 21:31:32.684397   16712 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736
	I0310 21:31:32.685319   16712 cache_images.go:104] "minikube-local-cache-test:functional-20210126212539-5172" needs transfer: "minikube-local-cache-test:functional-20210126212539-5172" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:31:32.685319   16712 cache_images.go:104] "minikube-local-cache-test:functional-20210220004129-7452" needs transfer: "minikube-local-cache-test:functional-20210220004129-7452" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:31:32.685319   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210220004129-7452 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452
	I0310 21:31:32.685319   16712 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452
	I0310 21:31:32.684397   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304002630-1156 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156
	I0310 21:31:32.685921   16712 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156
	I0310 21:31:32.685319   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210126212539-5172 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172
	I0310 21:31:32.685921   16712 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172
	I0310 21:31:32.685319   16712 cache_images.go:104] "minikube-local-cache-test:functional-20210120214442-10992" needs transfer: "minikube-local-cache-test:functional-20210120214442-10992" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:31:32.686430   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120214442-10992 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992
	I0310 21:31:32.686430   16712 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992
	I0310 21:31:32.684397   16712 cache_images.go:104] "minikube-local-cache-test:functional-20210120231122-7024" needs transfer: "minikube-local-cache-test:functional-20210120231122-7024" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:31:32.688721   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120231122-7024 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024
	I0310 21:31:32.688721   16712 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024
	I0310 21:31:32.684397   16712 cache_images.go:104] "minikube-local-cache-test:functional-20210308233820-5396" needs transfer: "minikube-local-cache-test:functional-20210308233820-5396" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:31:32.685319   16712 cache_images.go:104] "minikube-local-cache-test:functional-20210306072141-12056" needs transfer: "minikube-local-cache-test:functional-20210306072141-12056" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	I0310 21:31:32.691607   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210306072141-12056 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056
	I0310 21:31:32.691607   16712 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056
	I0310 21:31:32.691607   16712 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210308233820-5396 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396
	I0310 21:31:32.691607   16712 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396
	I0310 21:31:32.819614   16712 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052
	I0310 21:31:32.826173   16712 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452
	I0310 21:31:32.832655   16712 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156
	I0310 21:31:32.835529   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:32.839314   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:32.848732   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:32.880300   16712 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172
	I0310 21:31:32.880783   16712 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920
	I0310 21:31:32.889252   16712 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992
	I0310 21:31:32.892273   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:32.895113   16712 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736
	I0310 21:31:32.897097   16712 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396
	I0310 21:31:32.897097   16712 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024
	I0310 21:31:32.897097   16712 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056
	I0310 21:31:32.897097   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:32.897097   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:32.906661   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:32.906661   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:32.906661   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:32.915259   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:32.921247   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:33.769785   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:33.798962   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:33.808228   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:33.850019   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:33.863222   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:33.908575   16712 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496: (1.0114805s)
	I0310 21:31:33.909097   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:33.939509   16712 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496: (1.0424147s)
	I0310 21:31:33.939896   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:33.943836   16712 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496: (1.0371773s)
	I0310 21:31:33.944041   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:33.956759   16712 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496: (1.0501004s)
	I0310 21:31:33.958158   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:33.979686   16712 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496: (1.0639038s)
	I0310 21:31:33.980489   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:34.324650   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:35.885894   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:37.262593   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	W0310 21:31:37.659422   16712 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:31:37.660518   16712 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:31:37.660739   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432 (4096 bytes)
	I0310 21:31:37.668480   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	W0310 21:31:37.710990   16712 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:31:37.711185   16712 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:31:37.711185   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040 (4096 bytes)
	I0310 21:31:37.728631   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:38.344663   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:38.352422   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:38.396862   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	W0310 21:31:39.614056   16712 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:31:39.614400   16712 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:31:39.614614   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944 (4096 bytes)
	I0310 21:31:39.615871   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:40.202397   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:41.535085   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	W0310 21:31:41.555275   16712 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:31:41.555275   16712 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:31:41.555275   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052 (4096 bytes)
	W0310 21:31:41.559038   16712 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:31:41.559358   16712 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:31:41.559904   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452 (4096 bytes)
	I0310 21:31:41.567645   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:41.576574   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:42.168572   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:42.203030   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:42.769064   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	W0310 21:31:43.167595   16712 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:31:43.167595   16712 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:31:43.167595   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396 (4096 bytes)
	I0310 21:31:43.176146   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	W0310 21:31:43.549496   16712 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:31:43.549931   16712 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024: NewSession: new client: new client: ssh: handshake failed: EOF
	I0310 21:31:43.549931   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024 (4096 bytes)
	I0310 21:31:43.563341   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:31:43.854516   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:44.052278   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:44.197467   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:31:45.364638   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	W0310 21:31:46.342750   16712 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	W0310 21:31:46.454592   16712 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	W0310 21:31:46.697901   16712 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:31:46.807273   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:48.059222   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:49.327821   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:50.372248   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:53.033081   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:54.546152   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:55.816238   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:56.826628   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:57.942937   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:31:59.167725   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:00.428183   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:01.789127   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:03.298908   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:05.729938   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:06.841388   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:08.492942   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:09.785757   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:10.802610   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:11.829804   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:13.836374   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:15.069575   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:22.518770   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:23.595351   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:24.994466   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:26.029507   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:28.107880   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:29.351147   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:30.730779   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:33.886062   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:35.359517   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:36.741390   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:38.055315   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:39.654617   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:41.074867   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:42.476102   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:46.084383   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:47.508116   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:49.017688   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:50.314671   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:51.458198   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:53.448954   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:54.666712   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:32:59.033855   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:00.831753   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:02.306507   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:02.921691   16712 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052
	I0310 21:33:02.932151   16712 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052
	I0310 21:33:03.462056   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:05.106878   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:06.200158   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:07.598217   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:09.208508   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:10.333065   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:11.406961   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:12.863826   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:14.314401   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:15.470556   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:16.809378   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:18.354193   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:20.159003   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:21.264844   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:22.289837   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:23.889745   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:25.000345   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:26.512359   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:27.798153   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:29.302380   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:31.160236   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:32.277529   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:33.296109   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001b7f220}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:34.364683   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000620cc0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:35.839579   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00171c1f0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:36.960013   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001102320}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:38.685814   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001aade30}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:40.160465   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001e87050}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:42.530087   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001edb620}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:43.820991   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000f03be0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:44.824765   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001100b80}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:45.870668   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001b7e780}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:47.412932   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00078ae10}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:48.839994   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000f366d0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:49.959888   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001926360}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:51.437185   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001d1e740}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:54.032919   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0015a0dd0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:55.353253   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0015e8ea0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:56.375337   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001100a10}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:57.816606   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0010f24f0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:33:58.962084   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00078bb20}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:00.277105   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0011027f0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:01.314689   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001927a80}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:02.824209   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001a2a800}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:04.901094   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001d301e0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:06.460644   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000d4de00}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:08.007563   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0017cc760}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:09.300844   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0017df510}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:10.505474   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000f361a0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:11.945917   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00171d010}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:13.421483   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001f1c190}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:14.771370   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001df86d0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:15.927086   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc002047ab0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:17.550783   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000845570}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:18.641939   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0006211b0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:20.833271   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0019aa460}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:21.991280   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001102d10}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:23.640216   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000ded1f0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:24.940495   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001cba830}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:27.619963   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00153c460}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:28.758017   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc002047e90}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:29.773657   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001a93d80}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:30.783817   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001e52e20}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:31.839530   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00087c960}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:33.374155   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001825770}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:34.882306   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000f36430}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:36.349799   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001c88980}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:38.555280   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001a1a3b0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:39.937116   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001a92170}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:41.382038   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0013ff8c0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:42.824786   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0016bcea0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:44.246143   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0015c9070}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:45.333908   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001990520}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:46.890751   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001a2ac00}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:48.281817   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0020474f0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:50.622214   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001315920}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:51.828012   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001b7fb50}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:53.054498   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000ca3a10}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:54.070649   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0017dec90}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:55.359747   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00198d1c0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:56.737704   16712 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210219220622-3920: (3m33.8999587s)
	I0310 21:34:56.737704   16712 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210225231842-5736: (3m33.9618689s)
	I0310 21:34:56.738086   16712 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210220004129-7452: (3m33.9532666s)
	I0310 21:34:56.738086   16712 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210304184021-4052: (3m33.9602624s)
	I0310 21:34:56.738086   16712 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210308233820-5396: (3m33.8635167s)
	I0310 21:34:56.738086   16712 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210306072141-12056: (3m33.9532666s)
	I0310 21:34:56.738086   16712 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210126212539-5172: (3m33.9501975s)
	I0310 21:34:56.738086   16712 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120214442-10992: (3m33.8151416s)
	I0310 21:34:56.738086   16712 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210304002630-1156: (3m33.8134967s)
	I0310 21:34:56.738570   16712 ssh_runner.go:189] Completed: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20210120231122-7024: (3m33.7969449s)
	I0310 21:34:56.880945   16712 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3m39.9024599s)
	I0310 21:34:56.884449   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001ae17a0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:56.983706   16712 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440: (3m30.6089367s)
	I0310 21:34:56.983706   16712 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3m27.6429267s)
	I0310 21:34:56.983879   16712 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920: (3m24.1033617s)
	I0310 21:34:56.983879   16712 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920': No such file or directory
	I0310 21:34:56.984151   16712 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588: (3m30.6022544s)
	I0310 21:34:56.984290   16712 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692: (3m30.5130175s)
	I0310 21:34:56.984895   16712 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140: (3m30.3704551s)
	I0310 21:34:56.985031   16712 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232: (3m30.4449674s)
	I0310 21:34:56.985380   16712 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520: (3m30.4216046s)
	I0310 21:34:56.985380   16712 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172: (3m24.1052509s)
	I0310 21:34:56.985380   16712 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520': No such file or directory
	I0310 21:34:56.985380   16712 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172': No such file or directory
	I0310 21:34:56.985733   16712 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464: (3m30.3704679s)
	I0310 21:34:56.985380   16712 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232': No such file or directory
	I0310 21:34:56.985733   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172 (4096 bytes)
	I0310 21:34:56.985733   16712 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496: (3m30.4456695s)
	I0310 21:34:56.985961   16712 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496': No such file or directory
	I0310 21:34:56.985961   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232 (4096 bytes)
	I0310 21:34:56.985961   16712 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464': No such file or directory
	I0310 21:34:56.985961   16712 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372: (3m30.5316648s)
	I0310 21:34:56.985961   16712 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372': No such file or directory
	I0310 21:34:56.984290   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920 (4096 bytes)
	I0310 21:34:56.985380   16712 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140': No such file or directory
	I0310 21:34:56.986239   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464 (4096 bytes)
	I0310 21:34:56.986239   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140 (4096 bytes)
	I0310 21:34:56.984290   16712 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588': No such file or directory
	I0310 21:34:56.986522   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588 (4096 bytes)
	I0310 21:34:56.983706   16712 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440': No such file or directory
	I0310 21:34:56.985031   16712 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552: (3m30.4570986s)
	I0310 21:34:56.990819   16712 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552': No such file or directory
	I0310 21:34:56.991082   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552 (4096 bytes)
	I0310 21:34:56.985380   16712 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692': No such file or directory
	I0310 21:34:56.985733   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520 (4096 bytes)
	I0310 21:34:56.986239   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496 (4096 bytes)
	I0310 21:34:56.986239   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372 (4096 bytes)
	I0310 21:34:56.990819   16712 out.go:129] * Enabled addons: storage-provisioner, default-storageclass
	I0310 21:34:56.991699   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692 (4096 bytes)
	I0310 21:34:56.991699   16712 addons.go:383] enableAddons completed in 3m50.8126136s
	I0310 21:34:56.994521   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440 (4096 bytes)
	W0310 21:34:57.009312   16712 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:34:57.009312   16712 retry.go:31] will retry after 276.165072ms: ssh: rejected: connect failed (open failed)
	W0310 21:34:57.009312   16712 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0310 21:34:57.009312   16712 retry.go:31] will retry after 360.127272ms: ssh: rejected: connect failed (open failed)
	I0310 21:34:57.295681   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:34:57.378949   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:34:58.022448   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:34:58.068862   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:34:58.315862   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00153c260}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:59.658371   16712 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052: (1m56.7264545s)
	I0310 21:34:59.659011   16712 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800: (3m33.0445765s)
	I0310 21:34:59.659011   16712 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800': No such file or directory
	I0310 21:34:59.659283   16712 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056: (3m26.7626301s)
	I0310 21:34:59.659494   16712 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056': No such file or directory
	I0310 21:34:59.659494   16712 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 from cache
	I0310 21:34:59.659876   16712 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492
	I0310 21:34:59.659876   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056 (4096 bytes)
	I0310 21:34:59.659011   16712 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516: (3m33.2741672s)
	I0310 21:34:59.660295   16712 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516': No such file or directory
	I0310 21:34:59.659011   16712 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992: (3m26.7698993s)
	I0310 21:34:59.660536   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516 (4096 bytes)
	I0310 21:34:59.660661   16712 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992': No such file or directory
	I0310 21:34:59.661528   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992 (4096 bytes)
	I0310 21:34:59.659011   16712 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156: (3m26.8267999s)
	I0310 21:34:59.661528   16712 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156': No such file or directory
	I0310 21:34:59.658371   16712 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352: (3m33.0434875s)
	I0310 21:34:59.662458   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156 (4096 bytes)
	I0310 21:34:59.662458   16712 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352': No such file or directory
	I0310 21:34:59.659011   16712 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700: (3m33.0537546s)
	I0310 21:34:59.659011   16712 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736: (3m26.7643423s)
	I0310 21:34:59.659494   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800 (4096 bytes)
	I0310 21:34:59.663716   16712 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736': No such file or directory
	I0310 21:34:59.663716   16712 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700': No such file or directory
	I0310 21:34:59.664118   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736 (4096 bytes)
	I0310 21:34:59.664118   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700 (4096 bytes)
	I0310 21:34:59.664118   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352 (4096 bytes)
	I0310 21:34:59.687320   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000f02d00}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:34:59.723629   16712 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492
	I0310 21:34:59.776800   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:34:59.790895   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:34:59.791598   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:34:59.793139   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:34:59.797765   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:34:59.803397   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:34:59.804152   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:34:59.804949   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:34:59.806131   16712 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496
	I0310 21:35:00.656020   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:35:00.697186   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:35:00.715582   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:35:00.764019   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:35:00.766914   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:35:00.777867   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:35:00.791354   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:35:00.795333   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:35:00.809171   16712 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210310211603-6496: (1.0160341s)
	I0310 21:35:00.810722   16712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55193 SSHKeyPath:C:\Users\jenkins\.minikube\machines\calico-20210310211603-6496\id_rsa Username:docker}
	I0310 21:35:01.810377   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000ef5490}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:35:03.353692   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00171cc70}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:35:04.781258   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00153ca40}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	W0310 21:35:04.892076   16712 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0310 21:35:05.845129   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001edb510}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:35:06.946117   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0015e9800}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:35:08.036385   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0013cec70}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:35:09.074654   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting
:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0017de170}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:35:10.370488   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00109ad90}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:35:11.688618   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001a00630}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:35:12.863086   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001b53020}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:35:14.296913   16712 pod_ready.go:102] pod "coredns-74ff55c5b-56p29" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:33:32 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:06 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.6 PodIP: PodIPs:[] StartTime:2021-03-10 21:33:32 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0021527e0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0310 21:35:15.008463   16712 pod_ready.go:62] duration metric: took 4m4.0829248s to run WaitForPodReadyByLabel for pod with "kube-dns" label in "kube-system" namespace ...
	I0310 21:35:15.010333   16712 pod_ready.go:59] waiting 5m0s for pod with "etcd" label in "kube-system" namespace to be Ready ...
	I0310 21:35:15.190735   16712 pod_ready.go:97] pod "etcd-calico-20210310211603-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:46 +0000 GMT Reason: Message:}
	I0310 21:35:15.190735   16712 pod_ready.go:62] duration metric: took 180.4033ms to run WaitForPodReadyByLabel for pod with "etcd" label in "kube-system" namespace ...
	I0310 21:35:15.190735   16712 pod_ready.go:59] waiting 5m0s for pod with "kube-apiserver" label in "kube-system" namespace to be Ready ...
	I0310 21:35:15.585839   16712 pod_ready.go:97] pod "kube-apiserver-calico-20210310211603-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:30:48 +0000 GMT Reason: Message:}
	I0310 21:35:15.586102   16712 pod_ready.go:62] duration metric: took 395.3669ms to run WaitForPodReadyByLabel for pod with "kube-apiserver" label in "kube-system" namespace ...
	I0310 21:35:15.586102   16712 pod_ready.go:59] waiting 5m0s for pod with "kube-controller-manager" label in "kube-system" namespace to be Ready ...
	I0310 21:35:15.941727   16712 pod_ready.go:97] pod "kube-controller-manager-calico-20210310211603-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:56 +0000 GMT Reason: Message:}
	I0310 21:35:15.943786   16712 pod_ready.go:62] duration metric: took 357.6849ms to run WaitForPodReadyByLabel for pod with "kube-controller-manager" label in "kube-system" namespace ...
	I0310 21:35:15.943935   16712 pod_ready.go:59] waiting 5m0s for pod with "kube-proxy" label in "kube-system" namespace to be Ready ...
	I0310 21:35:16.088930   16712 pod_ready.go:97] pod "kube-proxy-bvfcr" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:51 +0000 GMT Reason: Message:}
	I0310 21:35:16.089413   16712 pod_ready.go:62] duration metric: took 144.8023ms to run WaitForPodReadyByLabel for pod with "kube-proxy" label in "kube-system" namespace ...
	I0310 21:35:16.089413   16712 pod_ready.go:59] waiting 5m0s for pod with "kube-scheduler" label in "kube-system" namespace to be Ready ...
	I0310 21:35:16.285124   16712 pod_ready.go:97] pod "kube-scheduler-calico-20210310211603-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:31:47 +0000 GMT Reason: Message:}
	I0310 21:35:16.285124   16712 pod_ready.go:62] duration metric: took 195.7107ms to run WaitForPodReadyByLabel for pod with "kube-scheduler" label in "kube-system" namespace ...
	I0310 21:35:16.285287   16712 pod_ready.go:39] duration metric: took 4m5.3597511s for extra waiting for kube-system core pods to be Ready ...
	I0310 21:35:16.296751   16712 out.go:129] 
	W0310 21:35:16.296751   16712 out.go:191] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: "kube-dns": "wait pod Ready: timed out waiting for the condition"
	X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: "kube-dns": "wait pod Ready: timed out waiting for the condition"
	W0310 21:35:16.297429   16712 out.go:191] * 
	* 
	W0310 21:35:16.306649   16712 out.go:191] * If the above advice does not help, please let us know: 
	* If the above advice does not help, please let us know: 
	W0310 21:35:16.306649   16712 out.go:191]   - https://github.com/kubernetes/minikube/issues/new/choose
	  - https://github.com/kubernetes/minikube/issues/new/choose
	I0310 21:35:16.308685   16712 out.go:129] 

** /stderr **
net_test.go:82: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (1153.81s)
TestStartStop/group/no-preload/serial/SecondStart (265.09s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-20210310204947-6496 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.20.5-rc.0

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:196: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p no-preload-20210310204947-6496 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.20.5-rc.0: exit status 1 (3m13.452715s)

-- stdout --
	* [no-preload-20210310204947-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - MINIKUBE_LOCATION=10722
	* Using the docker driver based on existing profile
	* Starting control plane node no-preload-20210310204947-6496 in cluster no-preload-20210310204947-6496
	* Restarting existing docker container for "no-preload-20210310204947-6496" ...
	* Preparing Kubernetes v1.20.5-rc.0 on Docker 20.10.3 ...

-- /stdout --
** stderr ** 
	I0310 21:16:34.629998    8732 out.go:239] Setting OutFile to fd 2796 ...
	I0310 21:16:34.639079    8732 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 21:16:34.639079    8732 out.go:252] Setting ErrFile to fd 2716...
	I0310 21:16:34.639079    8732 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 21:16:34.670474    8732 out.go:246] Setting JSON to false
	I0310 21:16:34.673686    8732 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":36460,"bootTime":1615374534,"procs":116,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0310 21:16:34.673686    8732 start.go:116] gopshost.Virtualization returned error: not implemented yet
	I0310 21:16:34.684664    8732 out.go:129] * [no-preload-20210310204947-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0310 21:16:34.688713    8732 out.go:129]   - MINIKUBE_LOCATION=10722
	I0310 21:16:34.691736    8732 driver.go:323] Setting default libvirt URI to qemu:///system
	I0310 21:16:35.335343    8732 docker.go:119] docker version: linux-20.10.2
	I0310 21:16:35.342766    8732 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 21:16:36.447271    8732 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.1045069s)
	I0310 21:16:36.449233    8732 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:9 ContainersRunning:8 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:96 OomKillDisable:true NGoroutines:73 SystemTime:2021-03-10 21:16:35.9512391 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 21:16:36.454277    8732 out.go:129] * Using the docker driver based on existing profile
	I0310 21:16:36.454693    8732 start.go:276] selected driver: docker
	I0310 21:16:36.455016    8732 start.go:718] validating driver "docker" against &{Name:no-preload-20210310204947-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.5-rc.0 ClusterName:no-preload-20210310204947-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.7 Port:8443 KubernetesVersion:v1.20.5-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 21:16:36.455554    8732 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0310 21:16:37.582610    8732 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 21:16:38.599163    8732 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0164175s)
	I0310 21:16:38.599481    8732 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:9 ContainersRunning:8 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:96 OomKillDisable:true NGoroutines:73 SystemTime:2021-03-10 21:16:38.1503189 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 21:16:38.599696    8732 start_flags.go:717] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0310 21:16:38.600314    8732 start_flags.go:398] config:
	{Name:no-preload-20210310204947-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.5-rc.0 ClusterName:no-preload-20210310204947-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.7 Port:8443 KubernetesVersion:v1.20.5-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 21:16:38.610707    8732 out.go:129] * Starting control plane node no-preload-20210310204947-6496 in cluster no-preload-20210310204947-6496
	I0310 21:16:39.326361    8732 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	I0310 21:16:39.326361    8732 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	I0310 21:16:39.326624    8732 preload.go:97] Checking if preload exists for k8s version v1.20.5-rc.0 and runtime docker
	I0310 21:16:39.327083    8732 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper:v1.0.4 -> C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.4
	I0310 21:16:39.327083    8732 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\config.json ...
	I0310 21:16:39.327358    8732 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\coredns:1.7.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\coredns_1.7.0
	I0310 21:16:39.327960    8732 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\pause:3.2 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\pause_3.2
	I0310 21:16:39.328552    8732 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner:v4 -> C:\Users\jenkins\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v4
	I0310 21:16:39.328972    8732 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-scheduler:v1.20.5-rc.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.20.5-rc.0
	I0310 21:16:39.328972    8732 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-controller-manager:v1.20.5-rc.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.20.5-rc.0
	I0310 21:16:39.328972    8732 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\dashboard:v2.1.0 -> C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.1.0
	I0310 21:16:39.328972    8732 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\etcd:3.4.13-0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\etcd_3.4.13-0
	I0310 21:16:39.329289    8732 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-apiserver:v1.20.5-rc.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.20.5-rc.0
	I0310 21:16:39.329426    8732 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-proxy:v1.20.5-rc.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.20.5-rc.0
	I0310 21:16:39.359546    8732 cache.go:185] Successfully downloaded all kic artifacts
	I0310 21:16:39.359546    8732 start.go:313] acquiring machines lock for no-preload-20210310204947-6496: {Name:mk5ccb5ca2d8ac74aacc5a5439e34ebf8c484f4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:16:39.360549    8732 start.go:317] acquired machines lock for "no-preload-20210310204947-6496" in 1.0034ms
	I0310 21:16:39.361059    8732 start.go:93] Skipping create...Using existing machine configuration
	I0310 21:16:39.361059    8732 fix.go:55] fixHost starting: 
	I0310 21:16:39.559145    8732 cli_runner.go:115] Run: docker container inspect no-preload-20210310204947-6496 --format={{.State.Status}}
	I0310 21:16:39.777868    8732 cache.go:93] acquiring lock: {Name:mk4f17964ab104a7a51fdfe4d0d8adcb99a8f701 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:16:39.779091    8732 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.20.5-rc.0 exists
	I0310 21:16:39.779459    8732 cache.go:82] cache image "k8s.gcr.io/kube-proxy:v1.20.5-rc.0" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\k8s.gcr.io\\kube-proxy_v1.20.5-rc.0" took 450.0335ms
	I0310 21:16:39.779658    8732 cache.go:66] save to tar file k8s.gcr.io/kube-proxy:v1.20.5-rc.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.20.5-rc.0 succeeded
	I0310 21:16:39.792243    8732 cache.go:93] acquiring lock: {Name:mk808ab2b8e2f585b88e9b77052dedca3569e605 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:16:39.792918    8732 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\k8s.gcr.io\coredns_1.7.0 exists
	I0310 21:16:39.792918    8732 cache.go:82] cache image "k8s.gcr.io/coredns:1.7.0" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\k8s.gcr.io\\coredns_1.7.0" took 465.5608ms
	I0310 21:16:39.792918    8732 cache.go:66] save to tar file k8s.gcr.io/coredns:1.7.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\coredns_1.7.0 succeeded
	I0310 21:16:39.806022    8732 cache.go:93] acquiring lock: {Name:mk1bbd52b1d425b987a80d1b42ea65a1daa62351 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:16:39.807260    8732 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\k8s.gcr.io\pause_3.2 exists
	I0310 21:16:39.807585    8732 cache.go:93] acquiring lock: {Name:mk1cd59bbb5d30900e0d5b8983f100ccfb4e941e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:16:39.807768    8732 cache.go:82] cache image "k8s.gcr.io/pause:3.2" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\k8s.gcr.io\\pause_3.2" took 479.6256ms
	I0310 21:16:39.807768    8732 cache.go:66] save to tar file k8s.gcr.io/pause:3.2 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\pause_3.2 succeeded
	I0310 21:16:39.808108    8732 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.20.5-rc.0 exists
	I0310 21:16:39.808287    8732 cache.go:82] cache image "k8s.gcr.io/kube-apiserver:v1.20.5-rc.0" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\k8s.gcr.io\\kube-apiserver_v1.20.5-rc.0" took 478.8622ms
	I0310 21:16:39.809506    8732 cache.go:66] save to tar file k8s.gcr.io/kube-apiserver:v1.20.5-rc.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.20.5-rc.0 succeeded
	I0310 21:16:39.843496    8732 cache.go:93] acquiring lock: {Name:mk7dad12c4700ffd6e4a91c1377bd452302d3517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:16:39.843782    8732 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.20.5-rc.0 exists
	I0310 21:16:39.844642    8732 cache.go:82] cache image "k8s.gcr.io/kube-controller-manager:v1.20.5-rc.0" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\k8s.gcr.io\\kube-controller-manager_v1.20.5-rc.0" took 515.6705ms
	I0310 21:16:39.844642    8732 cache.go:66] save to tar file k8s.gcr.io/kube-controller-manager:v1.20.5-rc.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.20.5-rc.0 succeeded
	I0310 21:16:39.846112    8732 cache.go:93] acquiring lock: {Name:mk95277aa1d8baa6ce693324ce93a259561b3b0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:16:39.846112    8732 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.4 exists
	I0310 21:16:39.847112    8732 cache.go:82] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\docker.io\\kubernetesui\\metrics-scraper_v1.0.4" took 520.0303ms
	I0310 21:16:39.847112    8732 cache.go:66] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\metrics-scraper_v1.0.4 succeeded
	I0310 21:16:39.866856    8732 cache.go:93] acquiring lock: {Name:mkf95068147fb9802daffb44f03793cdfc94af80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:16:39.867306    8732 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v4 exists
	I0310 21:16:39.868042    8732 cache.go:82] cache image "gcr.io/k8s-minikube/storage-provisioner:v4" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\gcr.io\\k8s-minikube\\storage-provisioner_v4" took 539.4903ms
	I0310 21:16:39.868192    8732 cache.go:66] save to tar file gcr.io/k8s-minikube/storage-provisioner:v4 -> C:\Users\jenkins\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v4 succeeded
	I0310 21:16:39.868192    8732 cache.go:93] acquiring lock: {Name:mk1b99eb2e55fdc5ddc042a4b3db75d12b25fe0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:16:39.868803    8732 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.20.5-rc.0 exists
	I0310 21:16:39.868979    8732 cache.go:93] acquiring lock: {Name:mk7d69590a92a29aed7b81b57dbd7aa08bae9b7e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:16:39.868979    8732 cache.go:82] cache image "k8s.gcr.io/kube-scheduler:v1.20.5-rc.0" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\k8s.gcr.io\\kube-scheduler_v1.20.5-rc.0" took 540.0075ms
	I0310 21:16:39.868979    8732 cache.go:66] save to tar file k8s.gcr.io/kube-scheduler:v1.20.5-rc.0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.20.5-rc.0 succeeded
	I0310 21:16:39.868979    8732 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\k8s.gcr.io\etcd_3.4.13-0 exists
	I0310 21:16:39.868979    8732 cache.go:82] cache image "k8s.gcr.io/etcd:3.4.13-0" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\k8s.gcr.io\\etcd_3.4.13-0" took 539.5537ms
	I0310 21:16:39.868979    8732 cache.go:66] save to tar file k8s.gcr.io/etcd:3.4.13-0 -> C:\Users\jenkins\.minikube\cache\images\k8s.gcr.io\etcd_3.4.13-0 succeeded
	I0310 21:16:39.871389    8732 cache.go:93] acquiring lock: {Name:mk33908c5692f6fbcea93524c073786bb1491be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:16:39.871967    8732 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.1.0 exists
	I0310 21:16:39.872140    8732 cache.go:82] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\docker.io\\kubernetesui\\dashboard_v2.1.0" took 542.851ms
	I0310 21:16:39.872441    8732 cache.go:66] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> C:\Users\jenkins\.minikube\cache\images\docker.io\kubernetesui\dashboard_v2.1.0 succeeded
	I0310 21:16:39.872571    8732 cache.go:73] Successfully saved all images to host disk.
	I0310 21:16:40.303920    8732 fix.go:108] recreateIfNeeded on no-preload-20210310204947-6496: state=Stopped err=<nil>
	W0310 21:16:40.304496    8732 fix.go:134] unexpected machine state, will restart: <nil>
	I0310 21:16:40.314753    8732 out.go:129] * Restarting existing docker container for "no-preload-20210310204947-6496" ...
	I0310 21:16:40.334499    8732 cli_runner.go:115] Run: docker start no-preload-20210310204947-6496
	I0310 21:16:43.717005    8732 cli_runner.go:168] Completed: docker start no-preload-20210310204947-6496: (3.3825109s)
	I0310 21:16:43.725593    8732 cli_runner.go:115] Run: docker container inspect no-preload-20210310204947-6496 --format={{.State.Status}}
	I0310 21:16:44.337922    8732 kic.go:410] container "no-preload-20210310204947-6496" state is running.
	I0310 21:16:44.350433    8732 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210310204947-6496
	I0310 21:16:45.028858    8732 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\config.json ...
	I0310 21:16:45.033643    8732 machine.go:88] provisioning docker machine ...
	I0310 21:16:45.034260    8732 ubuntu.go:169] provisioning hostname "no-preload-20210310204947-6496"
	I0310 21:16:45.044892    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	I0310 21:16:45.693970    8732 main.go:121] libmachine: Using SSH client type: native
	I0310 21:16:45.694402    8732 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55198 <nil> <nil>}
	I0310 21:16:45.694402    8732 main.go:121] libmachine: About to run SSH command:
	sudo hostname no-preload-20210310204947-6496 && echo "no-preload-20210310204947-6496" | sudo tee /etc/hostname
	I0310 21:16:45.705001    8732 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0310 21:16:48.727564    8732 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0310 21:16:52.806209    8732 main.go:121] libmachine: SSH cmd err, output: <nil>: no-preload-20210310204947-6496
	
	I0310 21:16:52.814403    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	I0310 21:16:53.441942    8732 main.go:121] libmachine: Using SSH client type: native
	I0310 21:16:53.442200    8732 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55198 <nil> <nil>}
	I0310 21:16:53.442200    8732 main.go:121] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20210310204947-6496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20210310204947-6496/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20210310204947-6496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0310 21:16:54.673131    8732 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	I0310 21:16:54.673131    8732 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	I0310 21:16:54.673131    8732 ubuntu.go:177] setting up certificates
	I0310 21:16:54.673131    8732 provision.go:83] configureAuth start
	I0310 21:16:54.680882    8732 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210310204947-6496
	I0310 21:16:55.311936    8732 provision.go:137] copyHostCerts
	I0310 21:16:55.312404    8732 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	I0310 21:16:55.312404    8732 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	I0310 21:16:55.312404    8732 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	I0310 21:16:55.327767    8732 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	I0310 21:16:55.327953    8732 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	I0310 21:16:55.328258    8732 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	I0310 21:16:55.332034    8732 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	I0310 21:16:55.332034    8732 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	I0310 21:16:55.332621    8732 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	I0310 21:16:55.335371    8732 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.no-preload-20210310204947-6496 san=[172.17.0.7 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20210310204947-6496]
	I0310 21:16:55.577282    8732 provision.go:165] copyRemoteCerts
	I0310 21:16:55.589849    8732 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0310 21:16:55.595491    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	I0310 21:16:56.213951    8732 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55198 SSHKeyPath:C:\Users\jenkins\.minikube\machines\no-preload-20210310204947-6496\id_rsa Username:docker}
	I0310 21:16:56.837304    8732 ssh_runner.go:189] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.2474564s)
	I0310 21:16:56.837877    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0310 21:16:57.139185    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0310 21:16:57.464497    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1265 bytes)
	I0310 21:16:57.785260    8732 provision.go:86] duration metric: configureAuth took 3.1121331s
	I0310 21:16:57.785793    8732 ubuntu.go:193] setting minikube options for container-runtime
	I0310 21:16:57.801250    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	I0310 21:16:58.390912    8732 main.go:121] libmachine: Using SSH client type: native
	I0310 21:16:58.390912    8732 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55198 <nil> <nil>}
	I0310 21:16:58.390912    8732 main.go:121] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0310 21:16:59.102294    8732 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0310 21:16:59.102294    8732 ubuntu.go:71] root file system type: overlay
	I0310 21:16:59.102777    8732 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	I0310 21:16:59.103141    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	I0310 21:16:59.718624    8732 main.go:121] libmachine: Using SSH client type: native
	I0310 21:16:59.719292    8732 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55198 <nil> <nil>}
	I0310 21:16:59.719509    8732 main.go:121] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0310 21:17:00.813707    8732 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0310 21:17:00.821329    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	I0310 21:17:01.402792    8732 main.go:121] libmachine: Using SSH client type: native
	I0310 21:17:01.403160    8732 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55198 <nil> <nil>}
	I0310 21:17:01.403693    8732 main.go:121] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0310 21:17:02.686125    8732 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	I0310 21:17:02.686125    8732 machine.go:91] provisioned docker machine in 17.6518901s
	I0310 21:17:02.686125    8732 start.go:267] post-start starting for "no-preload-20210310204947-6496" (driver="docker")
	I0310 21:17:02.686125    8732 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0310 21:17:02.706959    8732 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0310 21:17:02.712857    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	I0310 21:17:03.328656    8732 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55198 SSHKeyPath:C:\Users\jenkins\.minikube\machines\no-preload-20210310204947-6496\id_rsa Username:docker}
	I0310 21:17:04.113044    8732 ssh_runner.go:189] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.4060866s)
	I0310 21:17:04.126379    8732 ssh_runner.go:149] Run: cat /etc/os-release
	I0310 21:17:04.194515    8732 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0310 21:17:04.194656    8732 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0310 21:17:04.194656    8732 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0310 21:17:04.194656    8732 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0310 21:17:04.194903    8732 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	I0310 21:17:04.195509    8732 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	I0310 21:17:04.199360    8732 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	I0310 21:17:04.200809    8732 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	I0310 21:17:04.217254    8732 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	I0310 21:17:04.292934    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	I0310 21:17:04.913283    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	I0310 21:17:05.815667    8732 start.go:270] post-start completed in 3.1295462s
	I0310 21:17:05.840331    8732 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0310 21:17:05.848333    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	I0310 21:17:06.493213    8732 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55198 SSHKeyPath:C:\Users\jenkins\.minikube\machines\no-preload-20210310204947-6496\id_rsa Username:docker}
	I0310 21:17:07.240444    8732 ssh_runner.go:189] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.3991092s)
	I0310 21:17:07.240911    8732 fix.go:57] fixHost completed within 27.8798913s
	I0310 21:17:07.240911    8732 start.go:80] releasing machines lock for "no-preload-20210310204947-6496", held for 27.8804014s
	I0310 21:17:07.254996    8732 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20210310204947-6496
	I0310 21:17:07.846414    8732 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0310 21:17:07.856007    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	I0310 21:17:07.861061    8732 ssh_runner.go:149] Run: systemctl --version
	I0310 21:17:07.869292    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	I0310 21:17:08.484249    8732 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55198 SSHKeyPath:C:\Users\jenkins\.minikube\machines\no-preload-20210310204947-6496\id_rsa Username:docker}
	I0310 21:17:08.510889    8732 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55198 SSHKeyPath:C:\Users\jenkins\.minikube\machines\no-preload-20210310204947-6496\id_rsa Username:docker}
	I0310 21:17:09.793929    8732 ssh_runner.go:189] Completed: systemctl --version: (1.9328707s)
	I0310 21:17:09.805991    8732 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0310 21:17:11.041171    8732 ssh_runner.go:189] Completed: sudo systemctl is-active --quiet service containerd: (1.2351821s)
	I0310 21:17:11.045389    8732 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (3.1986057s)
	I0310 21:17:11.053744    8732 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 21:17:11.370173    8732 cruntime.go:206] skipping containerd shutdown because we are bound to it
	I0310 21:17:11.381746    8732 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0310 21:17:11.604303    8732 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0310 21:17:11.934999    8732 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 21:17:12.307919    8732 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0310 21:17:15.034724    8732 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (2.7266351s)
	I0310 21:17:15.049223    8732 ssh_runner.go:149] Run: sudo systemctl start docker
	I0310 21:17:15.284083    8732 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0310 21:17:17.018992    8732 ssh_runner.go:189] Completed: docker version --format {{.Server.Version}}: (1.7349115s)
	I0310 21:17:17.023322    8732 out.go:150] * Preparing Kubernetes v1.20.5-rc.0 on Docker 20.10.3 ...
	I0310 21:17:17.031071    8732 cli_runner.go:115] Run: docker exec -t no-preload-20210310204947-6496 dig +short host.docker.internal
	I0310 21:17:18.396533    8732 cli_runner.go:168] Completed: docker exec -t no-preload-20210310204947-6496 dig +short host.docker.internal: (1.3654639s)
	I0310 21:17:18.396670    8732 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	I0310 21:17:18.407852    8732 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0310 21:17:18.491188    8732 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0310 21:17:18.709478    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	I0310 21:17:19.378790    8732 preload.go:97] Checking if preload exists for k8s version v1.20.5-rc.0 and runtime docker
	I0310 21:17:19.386325    8732 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 21:17:20.447987    8732 ssh_runner.go:189] Completed: docker images --format {{.Repository}}:{{.Tag}}: (1.0603579s)
	I0310 21:17:20.448261    8732 docker.go:423] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.20.5-rc.0
	k8s.gcr.io/kube-controller-manager:v1.20.5-rc.0
	k8s.gcr.io/kube-proxy:v1.20.5-rc.0
	k8s.gcr.io/kube-scheduler:v1.20.5-rc.0
	kubernetesui/dashboard:v2.1.0
	gcr.io/k8s-minikube/storage-provisioner:v4
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0310 21:17:20.448261    8732 cache_images.go:73] Images are preloaded, skipping loading
	I0310 21:17:20.456855    8732 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0310 21:17:23.429911    8732 ssh_runner.go:189] Completed: docker info --format {{.CgroupDriver}}: (2.9730607s)
	I0310 21:17:23.429911    8732 cni.go:74] Creating CNI manager for ""
	I0310 21:17:23.430413    8732 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	I0310 21:17:23.430413    8732 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0310 21:17:23.430413    8732 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.7 APIServerPort:8443 KubernetesVersion:v1.20.5-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20210310204947-6496 NodeName:no-preload-20210310204947-6496 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0310 21:17:23.430755    8732 kubeadm.go:154] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.0.7
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "no-preload-20210310204947-6496"
	  kubeletExtraArgs:
	    node-ip: 172.17.0.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.0.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.5-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	
	I0310 21:17:23.430755    8732 kubeadm.go:919] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.5-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=no-preload-20210310204947-6496 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.5-rc.0 ClusterName:no-preload-20210310204947-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0310 21:17:23.443186    8732 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.5-rc.0
	I0310 21:17:23.624956    8732 binaries.go:44] Found k8s binaries, skipping transfer
	I0310 21:17:23.634672    8732 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0310 21:17:23.823344    8732 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0310 21:17:24.113718    8732 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0310 21:17:24.257875    8732 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1861 bytes)
	I0310 21:17:24.574354    8732 ssh_runner.go:149] Run: grep 172.17.0.7	control-plane.minikube.internal$ /etc/hosts
	I0310 21:17:24.668367    8732 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "172.17.0.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0310 21:17:24.855151    8732 certs.go:52] Setting up C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496 for IP: 172.17.0.7
	I0310 21:17:24.855151    8732 certs.go:171] skipping minikubeCA CA generation: C:\Users\jenkins\.minikube\ca.key
	I0310 21:17:24.855151    8732 certs.go:171] skipping proxyClientCA CA generation: C:\Users\jenkins\.minikube\proxy-client-ca.key
	I0310 21:17:24.856305    8732 certs.go:275] skipping minikube-user signed cert generation: C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\client.key
	I0310 21:17:24.856305    8732 certs.go:275] skipping minikube signed cert generation: C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\apiserver.key.d9a465bc
	I0310 21:17:24.857032    8732 certs.go:275] skipping aggregator signed cert generation: C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\proxy-client.key
	I0310 21:17:24.858417    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992.pem (1338 bytes)
	W0310 21:17:24.859032    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.859032    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140.pem (1338 bytes)
	W0310 21:17:24.859452    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.859452    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156.pem (1338 bytes)
	W0310 21:17:24.859452    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.859967    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056.pem (1338 bytes)
	W0310 21:17:24.859967    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.859967    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476.pem (1338 bytes)
	W0310 21:17:24.859967    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.859967    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728.pem (1338 bytes)
	W0310 21:17:24.861104    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.861264    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984.pem (1338 bytes)
	W0310 21:17:24.861471    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.861471    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232.pem (1338 bytes)
	W0310 21:17:24.862056    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.862056    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512.pem (1338 bytes)
	W0310 21:17:24.862505    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.862505    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056.pem (1338 bytes)
	W0310 21:17:24.863016    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.863016    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516.pem (1338 bytes)
	W0310 21:17:24.863016    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.863016    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352.pem (1338 bytes)
	W0310 21:17:24.863623    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.863623    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920.pem (1338 bytes)
	W0310 21:17:24.864065    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.864432    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052.pem (1338 bytes)
	W0310 21:17:24.865057    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.865057    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452.pem (1338 bytes)
	W0310 21:17:24.865716    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.865716    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588.pem (1338 bytes)
	W0310 21:17:24.865716    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.865716    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944.pem (1338 bytes)
	W0310 21:17:24.866744    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.866744    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040.pem (1338 bytes)
	W0310 21:17:24.867137    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.867496    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172.pem (1338 bytes)
	W0310 21:17:24.868015    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.868374    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372.pem (1338 bytes)
	W0310 21:17:24.869117    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.869506    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396.pem (1338 bytes)
	W0310 21:17:24.869900    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.870271    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700.pem (1338 bytes)
	W0310 21:17:24.870647    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.871044    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736.pem (1338 bytes)
	W0310 21:17:24.871429    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.871429    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368.pem (1338 bytes)
	W0310 21:17:24.871836    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.872233    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492.pem (1338 bytes)
	W0310 21:17:24.872641    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.873040    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496.pem (1338 bytes)
	W0310 21:17:24.873492    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.873492    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552.pem (1338 bytes)
	W0310 21:17:24.874296    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.874296    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692.pem (1338 bytes)
	W0310 21:17:24.875119    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.875119    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856.pem (1338 bytes)
	W0310 21:17:24.876115    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.876115    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024.pem (1338 bytes)
	W0310 21:17:24.876958    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.877370    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160.pem (1338 bytes)
	W0310 21:17:24.878107    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.878107    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432.pem (1338 bytes)
	W0310 21:17:24.878954    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.879353    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440.pem (1338 bytes)
	W0310 21:17:24.879775    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.880164    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452.pem (1338 bytes)
	W0310 21:17:24.880164    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.880164    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800.pem (1338 bytes)
	W0310 21:17:24.880836    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.881338    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464.pem (1338 bytes)
	W0310 21:17:24.881338    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.881338    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748.pem (1338 bytes)
	W0310 21:17:24.882308    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.882308    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088.pem (1338 bytes)
	W0310 21:17:24.882308    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.882308    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520.pem (1338 bytes)
	W0310 21:17:24.883213    8732 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520_empty.pem, impossibly tiny 0 bytes
	I0310 21:17:24.883213    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca-key.pem (1679 bytes)
	I0310 21:17:24.883213    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca.pem (1078 bytes)
	I0310 21:17:24.883213    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\cert.pem (1123 bytes)
	I0310 21:17:24.884288    8732 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\key.pem (1679 bytes)
	I0310 21:17:24.890178    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0310 21:17:25.463887    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0310 21:17:25.916187    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0310 21:17:26.596000    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\no-preload-20210310204947-6496\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0310 21:17:27.247565    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0310 21:17:27.727380    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0310 21:17:28.245596    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0310 21:17:28.695342    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0310 21:17:29.090555    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1156.pem --> /usr/share/ca-certificates/1156.pem (1338 bytes)
	I0310 21:17:29.538220    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\352.pem --> /usr/share/ca-certificates/352.pem (1338 bytes)
	I0310 21:17:30.081238    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6692.pem --> /usr/share/ca-certificates/6692.pem (1338 bytes)
	I0310 21:17:30.579419    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\10992.pem --> /usr/share/ca-certificates/10992.pem (1338 bytes)
	I0310 21:17:30.936103    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	I0310 21:17:31.704701    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6552.pem --> /usr/share/ca-certificates/6552.pem (1338 bytes)
	I0310 21:17:32.311306    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3056.pem --> /usr/share/ca-certificates/3056.pem (1338 bytes)
	I0310 21:17:32.691685    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7160.pem --> /usr/share/ca-certificates/7160.pem (1338 bytes)
	I0310 21:17:33.056412    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4944.pem --> /usr/share/ca-certificates/4944.pem (1338 bytes)
	I0310 21:17:33.574358    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5396.pem --> /usr/share/ca-certificates/5396.pem (1338 bytes)
	I0310 21:17:34.021596    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4452.pem --> /usr/share/ca-certificates/4452.pem (1338 bytes)
	I0310 21:17:34.655227    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6368.pem --> /usr/share/ca-certificates/6368.pem (1338 bytes)
	I0310 21:17:34.993674    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5372.pem --> /usr/share/ca-certificates/5372.pem (1338 bytes)
	I0310 21:17:35.374491    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7024.pem --> /usr/share/ca-certificates/7024.pem (1338 bytes)
	I0310 21:17:35.820231    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0310 21:17:37.441588    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1728.pem --> /usr/share/ca-certificates/1728.pem (1338 bytes)
	I0310 21:17:38.072462    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5736.pem --> /usr/share/ca-certificates/5736.pem (1338 bytes)
	I0310 21:17:38.553570    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1984.pem --> /usr/share/ca-certificates/1984.pem (1338 bytes)
	I0310 21:17:39.814231    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\232.pem --> /usr/share/ca-certificates/232.pem (1338 bytes)
	I0310 21:17:40.380385    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1476.pem --> /usr/share/ca-certificates/1476.pem (1338 bytes)
	I0310 21:17:40.942114    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3516.pem --> /usr/share/ca-certificates/3516.pem (1338 bytes)
	I0310 21:17:41.438423    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5040.pem --> /usr/share/ca-certificates/5040.pem (1338 bytes)
	I0310 21:17:41.844693    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6496.pem --> /usr/share/ca-certificates/6496.pem (1338 bytes)
	I0310 21:17:42.348345    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7440.pem --> /usr/share/ca-certificates/7440.pem (1338 bytes)
	I0310 21:17:42.689868    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6492.pem --> /usr/share/ca-certificates/6492.pem (1338 bytes)
	I0310 21:17:43.072124    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8748.pem --> /usr/share/ca-certificates/8748.pem (1338 bytes)
	I0310 21:17:43.564003    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5700.pem --> /usr/share/ca-certificates/5700.pem (1338 bytes)
	I0310 21:17:44.093957    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8464.pem --> /usr/share/ca-certificates/8464.pem (1338 bytes)
	I0310 21:17:44.612040    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4588.pem --> /usr/share/ca-certificates/4588.pem (1338 bytes)
	I0310 21:17:45.222841    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9088.pem --> /usr/share/ca-certificates/9088.pem (1338 bytes)
	I0310 21:17:45.487749    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\800.pem --> /usr/share/ca-certificates/800.pem (1338 bytes)
	I0310 21:17:45.715522    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9520.pem --> /usr/share/ca-certificates/9520.pem (1338 bytes)
	I0310 21:17:46.270444    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1140.pem --> /usr/share/ca-certificates/1140.pem (1338 bytes)
	I0310 21:17:46.794286    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3920.pem --> /usr/share/ca-certificates/3920.pem (1338 bytes)
	I0310 21:17:47.157428    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6856.pem --> /usr/share/ca-certificates/6856.pem (1338 bytes)
	I0310 21:17:47.425369    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7432.pem --> /usr/share/ca-certificates/7432.pem (1338 bytes)
	I0310 21:17:47.889262    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\2512.pem --> /usr/share/ca-certificates/2512.pem (1338 bytes)
	I0310 21:17:48.286577    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4052.pem --> /usr/share/ca-certificates/4052.pem (1338 bytes)
	I0310 21:17:48.646841    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\12056.pem --> /usr/share/ca-certificates/12056.pem (1338 bytes)
	I0310 21:17:49.198674    8732 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5172.pem --> /usr/share/ca-certificates/5172.pem (1338 bytes)
	I0310 21:17:49.564687    8732 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0310 21:17:49.799526    8732 ssh_runner.go:149] Run: openssl version
	I0310 21:17:49.878950    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6552.pem && ln -fs /usr/share/ca-certificates/6552.pem /etc/ssl/certs/6552.pem"
	I0310 21:17:50.008965    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6552.pem
	I0310 21:17:50.104033    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 19 22:08 /usr/share/ca-certificates/6552.pem
	I0310 21:17:50.111411    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6552.pem
	I0310 21:17:50.242562    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6552.pem /etc/ssl/certs/51391683.0"
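The cycle above (guarded symlink into /etc/ssl/certs, `ls -la`, then `openssl x509 -hash -noout` and a `HASH.0` link) repeats for every certificate and is how OpenSSL's hash-based lookup directory gets populated. A minimal self-contained sketch of that per-certificate step, using a throwaway self-signed cert in a temp directory (all paths and the `demo` name here are illustrative, not taken from the log):

```shell
#!/bin/sh
# Sketch of minikube's per-cert install step, run against a scratch dir
# instead of /usr/share/ca-certificates and /etc/ssl/certs.
set -eu
workdir=$(mktemp -d)
share="$workdir/share"; ssl="$workdir/etc"
mkdir -p "$share" "$ssl"

# Create a throwaway self-signed cert to stand in for e.g. 6552.pem.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo" \
  -keyout "$workdir/demo.key" -out "$share/demo.pem" 2>/dev/null

# 1. Guarded install: only link the cert if it is non-empty
#    (mirrors the `test -s ... && ln -fs ...` lines in the log;
#    zero-byte *_empty.pem files would be skipped here).
test -s "$share/demo.pem" && ln -fs "$share/demo.pem" "$ssl/demo.pem"

# 2. OpenSSL locates CAs by subject-name hash, so create the HASH.0
#    symlink (mirrors the `openssl x509 -hash -noout` + `test -L` lines).
hash=$(openssl x509 -hash -noout -in "$share/demo.pem")
test -L "$ssl/$hash.0" || ln -fs "$ssl/demo.pem" "$ssl/$hash.0"

ls -la "$ssl"
```

The `HASH.0` name is what `c_rehash` would produce; the `.0` suffix disambiguates distinct certificates whose subject names hash to the same value, which is why the log keeps re-pointing `51391683.0` as each new cert is processed.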
	I0310 21:17:50.425671    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3056.pem && ln -fs /usr/share/ca-certificates/3056.pem /etc/ssl/certs/3056.pem"
	I0310 21:17:50.569480    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3056.pem
	I0310 21:17:50.692987    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:11 /usr/share/ca-certificates/3056.pem
	I0310 21:17:50.706949    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3056.pem
	I0310 21:17:50.847290    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3056.pem /etc/ssl/certs/51391683.0"
	I0310 21:17:51.042047    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7160.pem && ln -fs /usr/share/ca-certificates/7160.pem /etc/ssl/certs/7160.pem"
	I0310 21:17:51.216864    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7160.pem
	I0310 21:17:51.288339    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 12 04:51 /usr/share/ca-certificates/7160.pem
	I0310 21:17:51.303418    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7160.pem
	I0310 21:17:51.379294    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7160.pem /etc/ssl/certs/51391683.0"
	I0310 21:17:51.489776    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4944.pem && ln -fs /usr/share/ca-certificates/4944.pem /etc/ssl/certs/4944.pem"
	I0310 21:17:51.598100    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4944.pem
	I0310 21:17:51.692288    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  9 23:40 /usr/share/ca-certificates/4944.pem
	I0310 21:17:51.704092    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4944.pem
	I0310 21:17:52.151770    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4944.pem /etc/ssl/certs/51391683.0"
	I0310 21:17:52.376465    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5396.pem && ln -fs /usr/share/ca-certificates/5396.pem /etc/ssl/certs/5396.pem"
	I0310 21:17:52.484748    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5396.pem
	I0310 21:17:52.552102    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  8 23:38 /usr/share/ca-certificates/5396.pem
	I0310 21:17:52.562772    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5396.pem
	I0310 21:17:52.653293    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5396.pem /etc/ssl/certs/51391683.0"
	I0310 21:17:52.841863    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7452.pem && ln -fs /usr/share/ca-certificates/7452.pem /etc/ssl/certs/7452.pem"
	I0310 21:17:53.112220    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7452.pem
	I0310 21:17:53.209195    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 20 00:41 /usr/share/ca-certificates/7452.pem
	I0310 21:17:53.219547    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7452.pem
	I0310 21:17:53.328774    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7452.pem /etc/ssl/certs/51391683.0"
	I0310 21:17:53.510092    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6368.pem && ln -fs /usr/share/ca-certificates/6368.pem /etc/ssl/certs/6368.pem"
	I0310 21:17:53.705501    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6368.pem
	I0310 21:17:53.813132    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 19:11 /usr/share/ca-certificates/6368.pem
	I0310 21:17:53.820609    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6368.pem
	I0310 21:17:53.911342    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6368.pem /etc/ssl/certs/51391683.0"
	I0310 21:17:54.084246    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5372.pem && ln -fs /usr/share/ca-certificates/5372.pem /etc/ssl/certs/5372.pem"
	I0310 21:17:54.251132    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5372.pem
	I0310 21:17:54.316279    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 23 00:40 /usr/share/ca-certificates/5372.pem
	I0310 21:17:54.326302    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5372.pem
	I0310 21:17:54.455904    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5372.pem /etc/ssl/certs/51391683.0"
	I0310 21:17:54.633441    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4452.pem && ln -fs /usr/share/ca-certificates/4452.pem /etc/ssl/certs/4452.pem"
	I0310 21:17:54.781583    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4452.pem
	I0310 21:17:54.845253    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 22 23:48 /usr/share/ca-certificates/4452.pem
	I0310 21:17:54.855009    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4452.pem
	I0310 21:17:54.972130    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4452.pem /etc/ssl/certs/51391683.0"
	I0310 21:17:55.066770    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0310 21:17:55.227347    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0310 21:17:55.301781    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Jan  5 23:33 /usr/share/ca-certificates/minikubeCA.pem
	I0310 21:17:55.312184    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0310 21:17:55.473934    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0310 21:17:55.785002    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1728.pem && ln -fs /usr/share/ca-certificates/1728.pem /etc/ssl/certs/1728.pem"
	I0310 21:17:55.970018    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1728.pem
	I0310 21:17:56.041144    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 20:57 /usr/share/ca-certificates/1728.pem
	I0310 21:17:56.051242    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1728.pem
	I0310 21:17:56.184413    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0"
	I0310 21:17:56.290498    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5736.pem && ln -fs /usr/share/ca-certificates/5736.pem /etc/ssl/certs/5736.pem"
	I0310 21:17:56.477984    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5736.pem
	I0310 21:17:56.566617    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 25 23:18 /usr/share/ca-certificates/5736.pem
	I0310 21:17:56.577831    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5736.pem
	I0310 21:17:56.661072    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5736.pem /etc/ssl/certs/51391683.0"
	I0310 21:17:56.772361    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1984.pem && ln -fs /usr/share/ca-certificates/1984.pem /etc/ssl/certs/1984.pem"
	I0310 21:17:57.062894    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1984.pem
	I0310 21:17:57.150185    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 21:55 /usr/share/ca-certificates/1984.pem
	I0310 21:17:57.160892    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1984.pem
	I0310 21:17:57.267435    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1984.pem /etc/ssl/certs/51391683.0"
	I0310 21:17:57.538013    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/232.pem && ln -fs /usr/share/ca-certificates/232.pem /etc/ssl/certs/232.pem"
	I0310 21:17:57.709472    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/232.pem
	I0310 21:17:57.768697    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 28 02:13 /usr/share/ca-certificates/232.pem
	I0310 21:17:57.777643    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/232.pem
	I0310 21:17:57.920234    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/232.pem /etc/ssl/certs/51391683.0"
	I0310 21:17:58.115435    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7024.pem && ln -fs /usr/share/ca-certificates/7024.pem /etc/ssl/certs/7024.pem"
	I0310 21:17:58.244679    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7024.pem
	I0310 21:17:58.311419    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 23:11 /usr/share/ca-certificates/7024.pem
	I0310 21:17:58.325867    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7024.pem
	I0310 21:17:58.422450    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7024.pem /etc/ssl/certs/51391683.0"
	I0310 21:17:58.544126    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3516.pem && ln -fs /usr/share/ca-certificates/3516.pem /etc/ssl/certs/3516.pem"
	I0310 21:17:58.707878    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3516.pem
	I0310 21:17:58.781076    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 19:10 /usr/share/ca-certificates/3516.pem
	I0310 21:17:58.794778    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3516.pem
	I0310 21:17:58.861929    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3516.pem /etc/ssl/certs/51391683.0"
	I0310 21:17:59.003895    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5040.pem && ln -fs /usr/share/ca-certificates/5040.pem /etc/ssl/certs/5040.pem"
	I0310 21:17:59.287644    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5040.pem
	I0310 21:17:59.387640    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 08:36 /usr/share/ca-certificates/5040.pem
	I0310 21:17:59.397534    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5040.pem
	I0310 21:17:59.512503    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5040.pem /etc/ssl/certs/51391683.0"
	I0310 21:17:59.740448    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1476.pem && ln -fs /usr/share/ca-certificates/1476.pem /etc/ssl/certs/1476.pem"
	I0310 21:17:59.927303    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1476.pem
	I0310 21:17:59.966882    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:50 /usr/share/ca-certificates/1476.pem
	I0310 21:17:59.983359    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1476.pem
	I0310 21:18:00.061908    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1476.pem /etc/ssl/certs/51391683.0"
	I0310 21:18:00.280790    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7440.pem && ln -fs /usr/share/ca-certificates/7440.pem /etc/ssl/certs/7440.pem"
	I0310 21:18:00.479027    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7440.pem
	I0310 21:18:00.530960    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 13 14:39 /usr/share/ca-certificates/7440.pem
	I0310 21:18:00.541985    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7440.pem
	I0310 21:18:00.654837    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7440.pem /etc/ssl/certs/51391683.0"
	I0310 21:18:00.813979    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6492.pem && ln -fs /usr/share/ca-certificates/6492.pem /etc/ssl/certs/6492.pem"
	I0310 21:18:01.042479    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6492.pem
	I0310 21:18:01.100422    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 01:11 /usr/share/ca-certificates/6492.pem
	I0310 21:18:01.112765    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6492.pem
	I0310 21:18:01.290395    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6492.pem /etc/ssl/certs/51391683.0"
	I0310 21:18:01.493949    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8748.pem && ln -fs /usr/share/ca-certificates/8748.pem /etc/ssl/certs/8748.pem"
	I0310 21:18:01.617085    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8748.pem
	I0310 21:18:01.713538    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 19:09 /usr/share/ca-certificates/8748.pem
	I0310 21:18:01.724159    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8748.pem
	I0310 21:18:01.872103    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8748.pem /etc/ssl/certs/51391683.0"
	I0310 21:18:01.989901    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5700.pem && ln -fs /usr/share/ca-certificates/5700.pem /etc/ssl/certs/5700.pem"
	I0310 21:18:02.171030    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5700.pem
	I0310 21:18:02.290900    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  1 19:58 /usr/share/ca-certificates/5700.pem
	I0310 21:18:02.302715    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5700.pem
	I0310 21:18:02.410133    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5700.pem /etc/ssl/certs/51391683.0"
	I0310 21:18:02.628781    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8464.pem && ln -fs /usr/share/ca-certificates/8464.pem /etc/ssl/certs/8464.pem"
	I0310 21:18:02.814583    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8464.pem
	I0310 21:18:02.877937    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:32 /usr/share/ca-certificates/8464.pem
	I0310 21:18:02.890695    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8464.pem
	I0310 21:18:03.065217    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8464.pem /etc/ssl/certs/51391683.0"
	I0310 21:18:03.303761    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4588.pem && ln -fs /usr/share/ca-certificates/4588.pem /etc/ssl/certs/4588.pem"
	I0310 21:18:03.518551    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4588.pem
	I0310 21:18:03.632581    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  3 21:41 /usr/share/ca-certificates/4588.pem
	I0310 21:18:03.639010    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4588.pem
	I0310 21:18:03.750214    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4588.pem /etc/ssl/certs/51391683.0"
	I0310 21:18:04.062806    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6496.pem && ln -fs /usr/share/ca-certificates/6496.pem /etc/ssl/certs/6496.pem"
	I0310 21:18:04.342314    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6496.pem
	I0310 21:18:04.418886    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 19:16 /usr/share/ca-certificates/6496.pem
	I0310 21:18:04.436546    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6496.pem
	I0310 21:18:04.661557    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6496.pem /etc/ssl/certs/51391683.0"
	I0310 21:18:04.821368    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/800.pem && ln -fs /usr/share/ca-certificates/800.pem /etc/ssl/certs/800.pem"
	I0310 21:18:05.141582    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/800.pem
	I0310 21:18:05.216180    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 24 01:48 /usr/share/ca-certificates/800.pem
	I0310 21:18:05.229390    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/800.pem
	I0310 21:18:05.330765    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/800.pem /etc/ssl/certs/51391683.0"
	I0310 21:18:05.421064    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9520.pem && ln -fs /usr/share/ca-certificates/9520.pem /etc/ssl/certs/9520.pem"
	I0310 21:18:05.689753    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9520.pem
	I0310 21:18:05.728861    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 14:54 /usr/share/ca-certificates/9520.pem
	I0310 21:18:05.741344    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9520.pem
	I0310 21:18:05.832075    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9520.pem /etc/ssl/certs/51391683.0"
	I0310 21:18:05.985696    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1140.pem && ln -fs /usr/share/ca-certificates/1140.pem /etc/ssl/certs/1140.pem"
	I0310 21:18:06.138820    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1140.pem
	I0310 21:18:06.179160    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 02:25 /usr/share/ca-certificates/1140.pem
	I0310 21:18:06.192551    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1140.pem
	I0310 21:18:06.263571    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1140.pem /etc/ssl/certs/51391683.0"
	I0310 21:18:06.454135    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3920.pem && ln -fs /usr/share/ca-certificates/3920.pem /etc/ssl/certs/3920.pem"
	I0310 21:18:06.601270    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3920.pem
	I0310 21:18:06.656614    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 22:06 /usr/share/ca-certificates/3920.pem
	I0310 21:18:06.676741    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3920.pem
	I0310 21:18:06.816492    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3920.pem /etc/ssl/certs/51391683.0"
	I0310 21:18:06.946813    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6856.pem && ln -fs /usr/share/ca-certificates/6856.pem /etc/ssl/certs/6856.pem"
	I0310 21:18:07.091158    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6856.pem
	I0310 21:18:07.144633    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 00:21 /usr/share/ca-certificates/6856.pem
	I0310 21:18:07.155767    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6856.pem
	I0310 21:18:07.282997    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6856.pem /etc/ssl/certs/51391683.0"
	I0310 21:18:07.566691    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7432.pem && ln -fs /usr/share/ca-certificates/7432.pem /etc/ssl/certs/7432.pem"
	I0310 21:18:07.844168    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7432.pem
	I0310 21:18:07.904456    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 17:58 /usr/share/ca-certificates/7432.pem
	I0310 21:18:07.917738    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7432.pem
	I0310 21:18:08.131298    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7432.pem /etc/ssl/certs/51391683.0"
	I0310 21:18:08.278010    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9088.pem && ln -fs /usr/share/ca-certificates/9088.pem /etc/ssl/certs/9088.pem"
	I0310 21:18:08.502342    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9088.pem
	I0310 21:18:08.624227    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 00:22 /usr/share/ca-certificates/9088.pem
	I0310 21:18:08.645199    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9088.pem
	I0310 21:18:08.925557    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9088.pem /etc/ssl/certs/51391683.0"
	I0310 21:18:09.081846    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4052.pem && ln -fs /usr/share/ca-certificates/4052.pem /etc/ssl/certs/4052.pem"
	I0310 21:18:09.193190    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4052.pem
	I0310 21:18:09.282968    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 18:40 /usr/share/ca-certificates/4052.pem
	I0310 21:18:09.290182    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4052.pem
	I0310 21:18:09.516745    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4052.pem /etc/ssl/certs/51391683.0"
	I0310 21:18:09.834587    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12056.pem && ln -fs /usr/share/ca-certificates/12056.pem /etc/ssl/certs/12056.pem"
	I0310 21:18:10.002129    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/12056.pem
	I0310 21:18:10.117049    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  6 07:21 /usr/share/ca-certificates/12056.pem
	I0310 21:18:10.121052    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12056.pem
	I0310 21:18:10.205943    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12056.pem /etc/ssl/certs/51391683.0"
	I0310 21:18:10.371454    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5172.pem && ln -fs /usr/share/ca-certificates/5172.pem /etc/ssl/certs/5172.pem"
	I0310 21:18:10.530599    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5172.pem
	I0310 21:18:10.591304    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 26 21:25 /usr/share/ca-certificates/5172.pem
	I0310 21:18:10.600921    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5172.pem
	I0310 21:18:10.768468    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5172.pem /etc/ssl/certs/51391683.0"
	I0310 21:18:11.036933    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2512.pem && ln -fs /usr/share/ca-certificates/2512.pem /etc/ssl/certs/2512.pem"
	I0310 21:18:11.221494    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2512.pem
	I0310 21:18:11.314385    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:32 /usr/share/ca-certificates/2512.pem
	I0310 21:18:11.322949    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2512.pem
	I0310 21:18:11.481690    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2512.pem /etc/ssl/certs/51391683.0"
	I0310 21:18:11.717194    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/352.pem && ln -fs /usr/share/ca-certificates/352.pem /etc/ssl/certs/352.pem"
	I0310 21:18:11.949359    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/352.pem
	I0310 21:18:12.081537    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 12 14:51 /usr/share/ca-certificates/352.pem
	I0310 21:18:12.095427    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/352.pem
	I0310 21:18:12.231820    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/352.pem /etc/ssl/certs/51391683.0"
	I0310 21:18:12.372967    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6692.pem && ln -fs /usr/share/ca-certificates/6692.pem /etc/ssl/certs/6692.pem"
	I0310 21:18:12.542156    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6692.pem
	I0310 21:18:12.610908    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 20:42 /usr/share/ca-certificates/6692.pem
	I0310 21:18:12.620507    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6692.pem
	I0310 21:18:12.689030    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6692.pem /etc/ssl/certs/51391683.0"
	I0310 21:18:12.827957    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10992.pem && ln -fs /usr/share/ca-certificates/10992.pem /etc/ssl/certs/10992.pem"
	I0310 21:18:12.971893    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/10992.pem
	I0310 21:18:13.064924    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 21:44 /usr/share/ca-certificates/10992.pem
	I0310 21:18:13.081842    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10992.pem
	I0310 21:18:13.271099    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10992.pem /etc/ssl/certs/51391683.0"
	I0310 21:18:13.458684    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1156.pem && ln -fs /usr/share/ca-certificates/1156.pem /etc/ssl/certs/1156.pem"
	I0310 21:18:13.665559    8732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1156.pem
	I0310 21:18:13.738200    8732 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 00:26 /usr/share/ca-certificates/1156.pem
	I0310 21:18:13.738200    8732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1156.pem
	I0310 21:18:13.869034    8732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1156.pem /etc/ssl/certs/51391683.0"
	I0310 21:18:13.990128    8732 kubeadm.go:385] StartCluster: {Name:no-preload-20210310204947-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.5-rc.0 ClusterName:no-preload-20210310204947-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.7 Port:8443 KubernetesVersion:v1.20.5-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 21:18:14.006290    8732 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0310 21:18:15.382126    8732 ssh_runner.go:189] Completed: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}: (1.3754034s)
	I0310 21:18:15.394627    8732 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0310 21:18:15.568029    8732 kubeadm.go:396] found existing configuration files, will attempt cluster restart
	I0310 21:18:15.568166    8732 kubeadm.go:594] restartCluster start
	I0310 21:18:15.582546    8732 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0310 21:18:15.664083    8732 kubeadm.go:125] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0310 21:18:15.673439    8732 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20210310204947-6496
	I0310 21:18:16.360732    8732 kubeconfig.go:117] verify returned: extract IP: "no-preload-20210310204947-6496" does not appear in C:\Users\jenkins/.kube/config
	I0310 21:18:16.362443    8732 kubeconfig.go:128] "no-preload-20210310204947-6496" context is missing from C:\Users\jenkins/.kube/config - will repair!
	I0310 21:18:16.366049    8732 lock.go:36] WriteFile acquiring C:\Users\jenkins/.kube/config: {Name:mk815881434fcb75cb1a8bc7f2c24cf067660bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:18:16.430378    8732 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0310 21:18:16.575352    8732 api_server.go:146] Checking apiserver status ...
	I0310 21:18:16.586676    8732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0310 21:18:16.871151    8732 api_server.go:150] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0310 21:18:16.871317    8732 kubeadm.go:573] needs reconfigure: apiserver in state Stopped
	I0310 21:18:16.871813    8732 kubeadm.go:1042] stopping kube-system containers ...
	I0310 21:18:16.873966    8732 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0310 21:18:18.011651    8732 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}: (1.1376863s)
	I0310 21:18:18.011651    8732 docker.go:261] Stopping containers: [f4f5dad286f7 e63ae4a86183 5e2289334650 3c5021469e90 75bbb8211a3e ba5aace99e81 3e2455bc2954 81a39b1bd4f1 920fc93981c0]
	I0310 21:18:18.015822    8732 ssh_runner.go:149] Run: docker stop f4f5dad286f7 e63ae4a86183 5e2289334650 3c5021469e90 75bbb8211a3e ba5aace99e81 3e2455bc2954 81a39b1bd4f1 920fc93981c0
	I0310 21:18:18.816664    8732 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0310 21:18:19.284354    8732 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0310 21:18:19.592163    8732 kubeadm.go:153] found existing configuration files:
	-rw------- 1 root root 5611 Mar 10 21:07 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5630 Mar 10 21:07 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5763 Mar 10 21:07 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5578 Mar 10 21:07 /etc/kubernetes/scheduler.conf
	
	I0310 21:18:19.601745    8732 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0310 21:18:19.738385    8732 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0310 21:18:19.901348    8732 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0310 21:18:20.075659    8732 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0310 21:18:20.085927    8732 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0310 21:18:20.283137    8732 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0310 21:18:20.509954    8732 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0310 21:18:20.522107    8732 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0310 21:18:20.672365    8732 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0310 21:18:20.794925    8732 kubeadm.go:670] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0310 21:18:20.795305    8732 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0310 21:18:26.488531    8732 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml": (5.693234s)
	I0310 21:18:26.488830    8732 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0310 21:18:43.367921    8732 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (16.8789734s)
	I0310 21:18:43.370172    8732 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0310 21:18:50.758750    8732 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml": (7.3884189s)
	I0310 21:18:50.759678    8732 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0310 21:19:00.517642    8732 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml": (9.7579772s)
	I0310 21:19:00.517642    8732 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0310 21:19:06.097925    8732 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.5-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml": (5.5802907s)
	I0310 21:19:06.098239    8732 kubeadm.go:687] waiting for restarted kubelet to initialise ...
	I0310 21:19:06.117744    8732 retry.go:31] will retry after 276.165072ms: Get "https://127.0.0.1:55195/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:19:06.402460    8732 retry.go:31] will retry after 540.190908ms: Get "https://127.0.0.1:55195/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:19:06.950707    8732 retry.go:31] will retry after 655.06503ms: Get "https://127.0.0.1:55195/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:19:07.607896    8732 retry.go:31] will retry after 791.196345ms: Get "https://127.0.0.1:55195/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:19:08.406301    8732 retry.go:31] will retry after 1.170244332s: Get "https://127.0.0.1:55195/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:19:09.584519    8732 retry.go:31] will retry after 2.253109428s: Get "https://127.0.0.1:55195/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:19:11.845424    8732 retry.go:31] will retry after 1.610739793s: Get "https://127.0.0.1:55195/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:19:13.470920    8732 retry.go:31] will retry after 2.804311738s: Get "https://127.0.0.1:55195/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:19:16.288415    8732 retry.go:31] will retry after 3.824918958s: Get "https://127.0.0.1:55195/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:19:20.121056    8732 retry.go:31] will retry after 7.69743562s: Get "https://127.0.0.1:55195/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:19:27.835487    8732 retry.go:31] will retry after 14.635568968s: Get "https://127.0.0.1:55195/api/v1/namespaces/kube-system/pods": EOF
	I0310 21:19:42.486090    8732 retry.go:31] will retry after 28.406662371s: Get "https://127.0.0.1:55195/api/v1/namespaces/kube-system/pods": EOF

                                                
                                                
** /stderr **
start_stop_delete_test.go:199: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p no-preload-20210310204947-6496 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.20.5-rc.0": exit status 1
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:226: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:227: (dbg) Run:  docker inspect no-preload-20210310204947-6496
helpers_test.go:231: (dbg) docker inspect no-preload-20210310204947-6496:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ba4c8f2ea9992daf32a90064d8be793a04b4c15ede3861cbe7cc5b1d10bd3966",
	        "Created": "2021-03-10T20:50:09.5134495Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 307110,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-03-10T21:16:43.6370912Z",
	            "FinishedAt": "2021-03-10T21:16:28.8920811Z"
	        },
	        "Image": "sha256:a776c544501ab7f8d55c0f9d8df39bc284df5e744ef1ab4fa59bbd753c98d5f6",
	        "ResolvConfPath": "/var/lib/docker/containers/ba4c8f2ea9992daf32a90064d8be793a04b4c15ede3861cbe7cc5b1d10bd3966/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ba4c8f2ea9992daf32a90064d8be793a04b4c15ede3861cbe7cc5b1d10bd3966/hostname",
	        "HostsPath": "/var/lib/docker/containers/ba4c8f2ea9992daf32a90064d8be793a04b4c15ede3861cbe7cc5b1d10bd3966/hosts",
	        "LogPath": "/var/lib/docker/containers/ba4c8f2ea9992daf32a90064d8be793a04b4c15ede3861cbe7cc5b1d10bd3966/ba4c8f2ea9992daf32a90064d8be793a04b4c15ede3861cbe7cc5b1d10bd3966-json.log",
	        "Name": "/no-preload-20210310204947-6496",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20210310204947-6496:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/db9786265e9f068e04d70e18087b62c096e075ee52427c1e4a3908dea5608887-init/diff:/var/lib/docker/overlay2/e5312d15a8a915e93a04215cc746ab0f3ee777399eec34d39c62e1159ded6475/diff:/var/lib/docker/overlay2/7a0a882052451678339ecb9d7440ec6d5af2189761df37cbde268f01c77df460/diff:/var/lib/docker/overlay2/746bde091f748e661ba6c95c88447941058b3ed556ae8093cbbb4e946b8cc8fc/diff:/var/lib/docker/overlay2/fc816087ea38e2594891ac87d2dada91401ed5005ffde006568049c9be7a9cb3/diff:/var/lib/docker/overlay2/c938fe5d14accfe7fb1cfd038f5f0c36e7c4e36df3f270be6928a759dfc0318c/diff:/var/lib/docker/overlay2/79f1d61f3af3e525b3f9a25c7bdaddec8275b7d0abab23c045e084119ae755dc/diff:/var/lib/docker/overlay2/90cf1530750ebebdcb04a136dc1eff0ae9c5f7d0f826d2942e3bc67cc0d42206/diff:/var/lib/docker/overlay2/c4edb0a62bb52174a4a17530aa58139aaca1b2f67bef36b9ac59af93c93e2c94/diff:/var/lib/docker/overlay2/684c18bb05910b09352e8708238b2fa6512de91e323e59bd78eacb12954baeb1/diff:/var/lib/docker/overlay2/2e3b944d36fea0e106f5f8885b981d040914c224a656b6480e4ccf00c624527c/diff:/var/lib/docker/overlay2/87489103e2d5dffa2fc32cae802418fe57adcbe87d86e2828e51be8620ef72d6/diff:/var/lib/docker/overlay2/1ceb557df677600f55a42e739f85c2aaa6ce2a1311fb93d5cc655d3e5c25ad62/diff:/var/lib/docker/overlay2/a49e38adf04466e822fa1b2fef7a5de41bb43b706f64d6d440f7e9f95f86634d/diff:/var/lib/docker/overlay2/9dc51530b3628075c33d53a96d8db831d206f65190ca62a4730d525c9d2433bc/diff:/var/lib/docker/overlay2/53f43d443cc953bcbb3b9fe305be45d40de2233dd6a1e94a6e0120c143df01b9/diff:/var/lib/docker/overlay2/5449e801ebacbe613538db4de658f2419f742a0327f7c0f5079ee525f64c2f45/diff:/var/lib/docker/overlay2/ea06a62429770eade2e9df8b04495201586fe6a867b2d3f827db06031f237171/diff:/var/lib/docker/overlay2/d6c5e6e0bb04112b08a45e5a42f364eaa6a1d4d0384b8833c6f268a32b51f9fd/diff:/var/lib/docker/overlay2/6ccb699e5aa7954f17536e35273361a64a06f22e3951b5adad1acd3645302d27/diff:/var/lib/docker/overlay2/004fc7885cd3ddf4f532a6047a393b87947996f738f36ee3e048130b82676264/diff:/var/lib/docker/overlay2/28ff0102771d8fb3c2957c482cc5dde1a6d041144f1be78d385fd4a305b89048/diff:/var/lib/docker/overlay2/7cc8b00a8717390d3cf217bc23ebf0a54386c4dda751f8efe67a113c5f724000/diff:/var/lib/docker/overlay2/4f20f5888dc26fc4b177fad4ffbb2c9e5094bbe36dffc15530b13ee98b5ee0de/diff:/var/lib/docker/overlay2/5070ba0e8a3df0c0cb4c34bb528c4e1aa4cf13d689f2bbbb0a4033d021c3be55/diff:/var/lib/docker/overlay2/85fb29f9d3603bf7f2eb144a096a9fa432e57ecf575b0c26cc8e06a79e791202/diff:/var/lib/docker/overlay2/27786674974d44dc21a18719c8c334da44a78f825923f5e86233f5acb0fb9a6b/diff:/var/lib/docker/overlay2/6161fd89f27732c46f547c0e38ff9f16ab0dded81cef66938df3935f21afad40/diff:/var/lib/docker/overlay2/222f85b8439a41d62b9939a4060303543930a89e81bc2fbb3f5211b3bcc73501/diff:/var/lib/docker/overlay2/0ab769dc143c949b9367c48721351c8571d87199847b23acedce0134a8cf4d0d/diff:/var/lib/docker/overlay2/a3d1b5f3c1ccf55415e08e306d69739f2d6a4346a4b20ff9273ac1417c146a72/diff:/var/lib/docker/overlay2/a6111408a4deb25c6259bdabf49b0b77a4ae38a1c9c76c9dd238ab00e4377edd/diff:/var/lib/docker/overlay2/beb22abaee6a1cc2ce6935aa31a8ea6aa14ba428d35b5c2423d1d9e4b5b1e88e/diff:/var/lib/docker/overlay2/0c228b1d0cd4cc1d3df10a7397af2a91846bfbad745d623822dc200ff7ddd4f7/diff:/var/lib/docker/overlay2/3c7f265116e8b14ac4c171583da8e68ad42c63e9faa735c10d5b01c2889c03d8/diff:/var/lib/docker/overlay2/85c9b77072d5575bcf4bc334d0e85cccec247e2ff21bc8a181ee7c1411481a92/diff:/var/lib/docker/overlay2/1b4797f63f1bf0acad8cd18715c737239cbce844810baf1378b65224c01957ff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/db9786265e9f068e04d70e18087b62c096e075ee52427c1e4a3908dea5608887/merged",
	                "UpperDir": "/var/lib/docker/overlay2/db9786265e9f068e04d70e18087b62c096e075ee52427c1e4a3908dea5608887/diff",
	                "WorkDir": "/var/lib/docker/overlay2/db9786265e9f068e04d70e18087b62c096e075ee52427c1e4a3908dea5608887/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20210310204947-6496",
	                "Source": "/var/lib/docker/volumes/no-preload-20210310204947-6496/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20210310204947-6496",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20210310204947-6496",
	                "name.minikube.sigs.k8s.io": "no-preload-20210310204947-6496",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9645af2bbdbf56503ff416503acd0fa48e836e2cffc1de9211ba48a0ee1263f4",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55198"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55197"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55194"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55196"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55195"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/9645af2bbdbf",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "60bfd05cdb686b44adfde7c8048b7a70d326970d87388e1784962bee9eb0ffc8",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.7",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:07",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "08a9fabe5ecafdd8ee96dd0a14d1906037e82623ac7365c62552213da5f59cf7",
	                    "EndpointID": "60bfd05cdb686b44adfde7c8048b7a70d326970d87388e1784962bee9eb0ffc8",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.7",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:07",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20210310204947-6496 -n no-preload-20210310204947-6496
helpers_test.go:235: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20210310204947-6496 -n no-preload-20210310204947-6496: exit status 2 (4.0897614s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:235: status error: exit status 2 (may be ok)
helpers_test.go:240: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:241: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-20210310204947-6496 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
helpers_test.go:243: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p no-preload-20210310204947-6496 logs -n 25: exit status 110 (1m4.7879534s)

                                                
                                                
-- stdout --
	* ==> Docker <==
	* -- Logs begin at Wed 2021-03-10 21:16:49 UTC, end at Wed 2021-03-10 21:20:06 UTC. --
	* Mar 10 21:16:50 no-preload-20210310204947-6496 systemd[1]: Starting Docker Application Container Engine...
	* Mar 10 21:16:51 no-preload-20210310204947-6496 dockerd[214]: time="2021-03-10T21:16:51.587018800Z" level=info msg="Starting up"
	* Mar 10 21:16:51 no-preload-20210310204947-6496 dockerd[214]: time="2021-03-10T21:16:51.674948700Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Mar 10 21:16:51 no-preload-20210310204947-6496 dockerd[214]: time="2021-03-10T21:16:51.675735300Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Mar 10 21:16:51 no-preload-20210310204947-6496 dockerd[214]: time="2021-03-10T21:16:51.675949200Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	* Mar 10 21:16:51 no-preload-20210310204947-6496 dockerd[214]: time="2021-03-10T21:16:51.676890700Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Mar 10 21:16:51 no-preload-20210310204947-6496 dockerd[214]: time="2021-03-10T21:16:51.724815300Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Mar 10 21:16:51 no-preload-20210310204947-6496 dockerd[214]: time="2021-03-10T21:16:51.725793600Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Mar 10 21:16:51 no-preload-20210310204947-6496 dockerd[214]: time="2021-03-10T21:16:51.731635700Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	* Mar 10 21:16:51 no-preload-20210310204947-6496 dockerd[214]: time="2021-03-10T21:16:51.733146800Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Mar 10 21:16:51 no-preload-20210310204947-6496 dockerd[214]: time="2021-03-10T21:16:51.961879600Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	* Mar 10 21:16:52 no-preload-20210310204947-6496 dockerd[214]: time="2021-03-10T21:16:52.786950600Z" level=info msg="Loading containers: start."
	* Mar 10 21:16:59 no-preload-20210310204947-6496 dockerd[214]: time="2021-03-10T21:16:59.865740200Z" level=info msg="Removing stale sandbox 845b397b2a61e93810d928c29df754523298f82e567be3b661848f50fb8d7489 (3c5021469e90f11a9f007f20106695b1c3f3c6120bbe4712b3b017120a795fc8)"
	* Mar 10 21:16:59 no-preload-20210310204947-6496 dockerd[214]: time="2021-03-10T21:16:59.916150500Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint ad3c0ab40647c2d01246cab86ddb2953fa6552d6e9bfc2d5bab85bb1a5792b7a da994f3a731d0710b10f7c9a022e26c4f95c0257912b4e61c7582acf247e6259], retrying...."
	* Mar 10 21:17:02 no-preload-20210310204947-6496 dockerd[214]: time="2021-03-10T21:17:02.930054000Z" level=info msg="Removing stale sandbox ac6dc25f9afea961582ae6d9a375ec6300e6598437b608132f97157367ca58e4 (920fc93981c03745943c68047ebd567378a4fb71db133ca0b7a67ee583a2bb4a)"
	* Mar 10 21:17:03 no-preload-20210310204947-6496 dockerd[214]: time="2021-03-10T21:17:03.050443400Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint ad3c0ab40647c2d01246cab86ddb2953fa6552d6e9bfc2d5bab85bb1a5792b7a 115efac2f96e984927c7259848623dda51a32fff3f3b54611aa035fd4fc273ab], retrying...."
	* Mar 10 21:17:05 no-preload-20210310204947-6496 dockerd[214]: time="2021-03-10T21:17:05.502846300Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	* Mar 10 21:17:08 no-preload-20210310204947-6496 dockerd[214]: time="2021-03-10T21:17:08.149558400Z" level=info msg="Loading containers: done."
	* Mar 10 21:17:08 no-preload-20210310204947-6496 dockerd[214]: time="2021-03-10T21:17:08.815173100Z" level=info msg="Docker daemon" commit=46229ca graphdriver(s)=overlay2 version=20.10.3
	* Mar 10 21:17:08 no-preload-20210310204947-6496 dockerd[214]: time="2021-03-10T21:17:08.818055000Z" level=info msg="Daemon has completed initialization"
	* Mar 10 21:17:09 no-preload-20210310204947-6496 systemd[1]: Started Docker Application Container Engine.
	* Mar 10 21:17:10 no-preload-20210310204947-6496 dockerd[214]: time="2021-03-10T21:17:10.017900400Z" level=info msg="API listen on [::]:2376"
	* Mar 10 21:17:10 no-preload-20210310204947-6496 dockerd[214]: time="2021-03-10T21:17:10.985946600Z" level=info msg="API listen on /var/run/docker.sock"
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	* f4f5dad286f7d       a95b4e4b41d89       7 minutes ago       Exited              kube-controller-manager   1                   75bbb8211a3e3
	* e63ae4a86183c       4968524da7559       10 minutes ago      Exited              kube-scheduler            0                   3c5021469e90f
	* ba5aace99e81d       17a1e6e90a9b4       11 minutes ago      Exited              kube-apiserver            0                   3e2455bc2954e
	* 81a39b1bd4f1b       0369cf4303ffd       11 minutes ago      Exited              etcd                      0                   920fc93981c03
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.000006]  __hrtimer_run_queues+0x117/0x1c4
	* [  +0.000004]  ? ktime_get_update_offsets_now+0x36/0x95
	* [  +0.000002]  hrtimer_interrupt+0x92/0x165
	* [  +0.000004]  hv_stimer0_isr+0x20/0x2d
	* [  +0.000008]  hv_stimer0_vector_handler+0x3b/0x57
	* [  +0.000010]  hv_stimer0_callback_vector+0xf/0x20
	* [  +0.000001]  </IRQ>
	* [  +0.000002] RIP: 0010:native_safe_halt+0x7/0x8
	* [  +0.000002] Code: 60 02 df f0 83 44 24 fc 00 48 8b 00 a8 08 74 0b 65 81 25 dd ce 6f 71 ff ff ff 7f c3 e8 ce e6 72 ff f4 c3 e8 c7 e6 72 ff fb f4 <c3> 0f 1f 44 00 00 53 e8 69 0e 82 ff 65 8b 35 83 64 6f 71 31 ff e8
	* [  +0.000001] RSP: 0018:ffffffff8f203eb0 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff12
	* [  +0.000002] RAX: ffffffff8e918b30 RBX: 0000000000000000 RCX: ffffffff8f253150
	* [  +0.000001] RDX: 000000000012167e RSI: 0000000000000000 RDI: 0000000000000001
	* [  +0.000001] RBP: 0000000000000000 R08: 00000066a1710248 R09: 0000006be2541d3e
	* [  +0.000001] R10: ffff9130ad802288 R11: 0000000000000000 R12: 0000000000000000
	* [  +0.000001] R13: ffffffff8f215780 R14: 00000000f6d76244 R15: 0000000000000000
	* [  +0.000002]  ? __sched_text_end+0x1/0x1
	* [  +0.000011]  default_idle+0x1b/0x2c
	* [  +0.000001]  do_idle+0xe5/0x216
	* [  +0.000003]  cpu_startup_entry+0x6f/0x71
	* [  +0.000003]  start_kernel+0x4f6/0x514
	* [  +0.000006]  secondary_startup_64+0xa4/0xb0
	* [  +0.000006] ---[ end trace 8aa9ce4b885e8e86 ]---
	* [ +25.977799] hrtimer: interrupt took 3356400 ns
	* [Mar10 19:08] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	* [Mar10 19:49] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	* 
	* ==> etcd [81a39b1bd4f1] <==
	* 2021-03-10 21:15:27.621286 W | etcdserver: request "header:<ID:3266086224885917320 > lease_revoke:<id:2d53781df81f262b>" with result "size:29" took too long (407.6128ms) to execute
	* 2021-03-10 21:15:27.824289 W | etcdserver: request "header:<ID:3266086224885917321 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-public/default\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-public/default\" value_size:114 >> failure:<>>" with result "size:16" took too long (202.7968ms) to execute
	* 2021-03-10 21:15:28.286353 W | etcdserver: request "header:<ID:3266086224885917322 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/configmaps/kube-public/kube-root-ca.crt\" mod_revision:0 > success:<request_put:<key:\"/registry/configmaps/kube-public/kube-root-ca.crt\" value_size:1342 >> failure:<>>" with result "size:16" took too long (426.935ms) to execute
	* 2021-03-10 21:15:28.557381 W | etcdserver: read-only range request "key:\"/registry/minions/no-preload-20210310204947-6496\" " with result "range_response_count:1 size:3720" took too long (657.8913ms) to execute
	* 2021-03-10 21:15:28.557907 W | etcdserver: read-only range request "key:\"/registry/minions/no-preload-20210310204947-6496\" " with result "range_response_count:1 size:3720" took too long (744.5362ms) to execute
	* 2021-03-10 21:15:28.568825 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-public/default\" " with result "range_response_count:1 size:181" took too long (672.2014ms) to execute
	* 2021-03-10 21:15:28.595051 W | etcdserver: read-only range request "key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true " with result "range_response_count:0 size:5" took too long (1.6140439s) to execute
	* 2021-03-10 21:15:28.617305 W | etcdserver: read-only range request "key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" " with result "range_response_count:1 size:129" took too long (1.5935203s) to execute
	* 2021-03-10 21:15:28.665194 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-no-preload-20210310204947-6496\" " with result "range_response_count:1 size:6935" took too long (321.5325ms) to execute
	* 2021-03-10 21:15:28.668346 W | etcdserver: read-only range request "key:\"/registry/resourcequotas/kube-node-lease/\" range_end:\"/registry/resourcequotas/kube-node-lease0\" " with result "range_response_count:0 size:5" took too long (486.0067ms) to execute
	* 2021-03-10 21:15:28.669300 W | etcdserver: read-only range request "key:\"/registry/minions/no-preload-20210310204947-6496\" " with result "range_response_count:1 size:3720" took too long (845.9906ms) to execute
	* 2021-03-10 21:15:28.683163 W | etcdserver: read-only range request "key:\"/registry/minions/no-preload-20210310204947-6496\" " with result "range_response_count:1 size:3720" took too long (874.7711ms) to execute
	* 2021-03-10 21:15:29.984775 W | etcdserver: request "header:<ID:3266086224885917355 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/default/default\" mod_revision:410 > success:<request_put:<key:\"/registry/serviceaccounts/default/default\" value_size:145 >> failure:<request_range:<key:\"/registry/serviceaccounts/default/default\" > >>" with result "size:16" took too long (158.2429ms) to execute
	* 2021-03-10 21:15:30.033162 W | etcdserver: read-only range request "key:\"/registry/endpointslices/default/kubernetes\" " with result "range_response_count:1 size:482" took too long (178.1235ms) to execute
	* 2021-03-10 21:15:35.045282 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:15:37.172967 W | etcdserver: read-only range request "key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true " with result "range_response_count:0 size:7" took too long (182.5411ms) to execute
	* 2021-03-10 21:15:44.936041 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:15:52.443286 W | etcdserver: read-only range request "key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" " with result "range_response_count:1 size:129" took too long (176.0083ms) to execute
	* 2021-03-10 21:15:52.820874 W | etcdserver: read-only range request "key:\"/registry/endpointslices/default/kubernetes\" " with result "range_response_count:1 size:482" took too long (175.5426ms) to execute
	* 2021-03-10 21:15:54.265463 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:16:04.269658 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:16:09.880682 W | etcdserver: read-only range request "key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" " with result "range_response_count:1 size:129" took too long (115.4901ms) to execute
	* 2021-03-10 21:16:10.587363 N | pkg/osutil: received terminated signal, shutting down...
	* WARNING: 2021/03/10 21:16:11 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* 2021-03-10 21:16:11.356001 I | etcdserver: skipped leadership transfer for single voting member cluster
	* 
	* ==> kernel <==
	*  21:20:30 up  2:20,  0 users,  load average: 119.13, 135.54, 142.17
	* Linux no-preload-20210310204947-6496 4.19.121-linuxkit #1 SMP Tue Dec 1 17:50:32 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [ba5aace99e81] <==
	* W0310 21:16:20.349925       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 21:16:20.415878       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 21:16:20.416058       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 21:16:20.418352       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 21:16:20.431745       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 21:16:20.436560       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 21:16:20.493010       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 21:16:20.496184       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 21:16:20.546504       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 21:16:20.548629       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 21:16:20.571651       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 21:16:20.572052       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 21:16:20.616284       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 21:16:20.635030       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 21:16:20.680917       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 21:16:20.682797       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 21:16:20.683335       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 21:16:20.709512       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 21:16:20.711649       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 21:16:20.733403       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 21:16:20.786673       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 21:16:20.815275       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 21:16:20.815695       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 21:16:20.825455       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W0310 21:16:20.830791       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* 
	* ==> kube-controller-manager [f4f5dad286f7] <==
	* I0310 21:15:17.447124       1 shared_informer.go:247] Caches are synced for PVC protection 
	* I0310 21:15:17.447176       1 shared_informer.go:247] Caches are synced for ReplicationController 
	* I0310 21:15:17.447199       1 shared_informer.go:247] Caches are synced for GC 
	* I0310 21:15:17.447297       1 shared_informer.go:247] Caches are synced for stateful set 
	* I0310 21:15:17.447323       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	* I0310 21:15:17.463828       1 shared_informer.go:247] Caches are synced for attach detach 
	* I0310 21:15:17.517053       1 shared_informer.go:247] Caches are synced for taint 
	* I0310 21:15:17.517285       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	* I0310 21:15:17.517892       1 shared_informer.go:247] Caches are synced for persistent volume 
	* I0310 21:15:17.518738       1 shared_informer.go:247] Caches are synced for disruption 
	* I0310 21:15:17.518756       1 disruption.go:339] Sending events to api server.
	* I0310 21:15:17.519019       1 taint_manager.go:187] Starting NoExecuteTaintManager
	* W0310 21:15:17.528200       1 node_lifecycle_controller.go:1044] Missing timestamp for Node no-preload-20210310204947-6496. Assuming now as a timestamp.
	* I0310 21:15:17.529115       1 node_lifecycle_controller.go:1245] Controller detected that zone  is now in state Normal.
	* I0310 21:15:17.587838       1 shared_informer.go:247] Caches are synced for deployment 
	* I0310 21:15:17.673666       1 event.go:291] "Event occurred" object="no-preload-20210310204947-6496" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node no-preload-20210310204947-6496 event: Registered Node no-preload-20210310204947-6496 in Controller"
	* I0310 21:15:18.692164       1 range_allocator.go:373] Set node no-preload-20210310204947-6496 PodCIDR to [10.244.0.0/24]
	* I0310 21:15:19.175725       1 shared_informer.go:247] Caches are synced for resource quota 
	* I0310 21:15:19.205851       1 shared_informer.go:247] Caches are synced for resource quota 
	* I0310 21:15:20.409290       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	* I0310 21:15:24.322318       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I0310 21:15:24.322698       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* I0310 21:15:24.343963       1 shared_informer.go:247] Caches are synced for garbage collector 
	* E0310 21:15:25.356854       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	* E0310 21:15:26.981263       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	* 
	* ==> kube-scheduler [e63ae4a86183] <==
	* E0310 21:11:16.661021       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E0310 21:11:16.706185       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E0310 21:11:16.768181       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E0310 21:11:17.131019       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E0310 21:11:17.292236       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E0310 21:11:17.450817       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E0310 21:11:17.467726       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E0310 21:11:17.592191       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E0310 21:11:17.988487       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E0310 21:11:18.195960       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E0310 21:11:18.684693       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E0310 21:11:18.779792       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E0310 21:11:25.479154       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E0310 21:11:26.281788       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E0310 21:11:26.336509       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E0310 21:11:26.427694       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E0310 21:11:28.270090       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E0310 21:11:28.290220       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E0310 21:11:28.290603       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E0310 21:11:28.313088       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E0310 21:11:30.102979       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E0310 21:11:43.947264       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* I0310 21:12:24.055158       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* http2: server: error reading preface from client 127.0.0.1:34868: read tcp 127.0.0.1:10259->127.0.0.1:34868: read: connection reset by peer
	* W0310 21:16:15.515882       1 reflector.go:436] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: watch of *v1.ConfigMap ended with: very short watch: k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Unexpected watch close - watch lasted less than a second and no items received
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-03-10 21:16:49 UTC, end at Wed 2021-03-10 21:20:47 UTC. --
	* Mar 10 21:20:43 no-preload-20210310204947-6496 kubelet[1367]: I0310 21:20:42.701727    1367 kubelet.go:449] kubelet nodes not sync
	* Mar 10 21:20:43 no-preload-20210310204947-6496 kubelet[1367]: I0310 21:20:43.201449    1367 kubelet.go:449] kubelet nodes not sync
	* Mar 10 21:20:43 no-preload-20210310204947-6496 kubelet[1367]: I0310 21:20:43.245695    1367 kubelet.go:449] kubelet nodes not sync
	* Mar 10 21:20:43 no-preload-20210310204947-6496 kubelet[1367]: I0310 21:20:43.245733    1367 kubelet.go:449] kubelet nodes not sync
	* Mar 10 21:20:43 no-preload-20210310204947-6496 kubelet[1367]: I0310 21:20:43.152072    1367 kubelet.go:449] kubelet nodes not sync
	* Mar 10 21:20:43 no-preload-20210310204947-6496 kubelet[1367]: I0310 21:20:43.683535    1367 kubelet.go:449] kubelet nodes not sync
	* Mar 10 21:20:43 no-preload-20210310204947-6496 kubelet[1367]: I0310 21:20:43.850044    1367 kubelet.go:449] kubelet nodes not sync
	* Mar 10 21:20:43 no-preload-20210310204947-6496 kubelet[1367]: I0310 21:20:43.850098    1367 kubelet.go:449] kubelet nodes not sync
	* Mar 10 21:20:44 no-preload-20210310204947-6496 kubelet[1367]: I0310 21:20:44.150243    1367 kubelet.go:449] kubelet nodes not sync
	* Mar 10 21:20:44 no-preload-20210310204947-6496 kubelet[1367]: I0310 21:20:44.150305    1367 kubelet.go:449] kubelet nodes not sync
	* Mar 10 21:20:46 no-preload-20210310204947-6496 kubelet[1367]: I0310 21:20:44.680823    1367 kubelet.go:449] kubelet nodes not sync
	* Mar 10 21:20:46 no-preload-20210310204947-6496 kubelet[1367]: E0310 21:20:44.981679    1367 kubelet_node_status.go:93] Unable to register node "no-preload-20210310204947-6496" with API server: Post "https://control-plane.minikube.internal:8443/api/v1/nodes": read tcp 172.17.0.7:37122->172.17.0.7:8443: read: connection reset by peer
	* Mar 10 21:20:46 no-preload-20210310204947-6496 kubelet[1367]: I0310 21:20:45.052685    1367 kubelet.go:449] kubelet nodes not sync
	* Mar 10 21:20:46 no-preload-20210310204947-6496 kubelet[1367]: E0310 21:20:45.057644    1367 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"no-preload-20210310204947-6496.166b17e0e68665cc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"no-preload-20210310204947-6496", UID:"no-preload-20210310204947-6496", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"no-preload-20210310204947-6496"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc00a6bd282fd01cc, ext:14985552701, loc:(*time.Location)(0x70e90a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc00a6bd282fd01cc, ext:14985552701, loc:(*time.Location)(0x70e90a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 172.17.0.7:8443: connect: connection refused'(may retry after sleeping)
	* Mar 10 21:20:46 no-preload-20210310204947-6496 kubelet[1367]: I0310 21:20:45.173700    1367 kubelet.go:449] kubelet nodes not sync
	* Mar 10 21:20:46 no-preload-20210310204947-6496 kubelet[1367]: I0310 21:20:45.173739    1367 kubelet.go:449] kubelet nodes not sync
	* Mar 10 21:20:46 no-preload-20210310204947-6496 kubelet[1367]: I0310 21:20:45.185863    1367 kubelet.go:449] kubelet nodes not sync
	* Mar 10 21:20:46 no-preload-20210310204947-6496 kubelet[1367]: I0310 21:20:45.185891    1367 kubelet.go:449] kubelet nodes not sync
	* Mar 10 21:20:46 no-preload-20210310204947-6496 kubelet[1367]: I0310 21:20:45.680693    1367 kubelet.go:449] kubelet nodes not sync
	* Mar 10 21:20:46 no-preload-20210310204947-6496 kubelet[1367]: I0310 21:20:45.884881    1367 kubelet.go:449] kubelet nodes not sync
	* Mar 10 21:20:46 no-preload-20210310204947-6496 kubelet[1367]: E0310 21:20:45.984240    1367 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/no-preload-20210310204947-6496?timeout=10s": dial tcp 172.17.0.7:8443: connect: connection refused
	* Mar 10 21:20:46 no-preload-20210310204947-6496 kubelet[1367]: I0310 21:20:46.681736    1367 kubelet.go:449] kubelet nodes not sync
	* Mar 10 21:20:46 no-preload-20210310204947-6496 kubelet[1367]: I0310 21:20:46.882966    1367 kubelet.go:449] kubelet nodes not sync
	* Mar 10 21:20:47 no-preload-20210310204947-6496 kubelet[1367]: I0310 21:20:47.839953    1367 kubelet.go:449] kubelet nodes not sync
	* Mar 10 21:20:47 no-preload-20210310204947-6496 kubelet[1367]: I0310 21:20:47.939652    1367 kubelet.go:449] kubelet nodes not sync
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------|------------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| Command |                      Args                      |                    Profile                     |          User           | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------|------------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| delete  | -p nospam-20210310201637-6496                  | nospam-20210310201637-6496                     | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:44:37 GMT | Wed, 10 Mar 2021 20:44:59 GMT |
	| -p      | docker-flags-20210310201637-6496               | docker-flags-20210310201637-6496               | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:47:18 GMT | Wed, 10 Mar 2021 20:49:03 GMT |
	|         | logs -n 25                                     |                                                |                         |         |                               |                               |
	| delete  | -p                                             | docker-flags-20210310201637-6496               | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:49:21 GMT | Wed, 10 Mar 2021 20:49:47 GMT |
	|         | docker-flags-20210310201637-6496               |                                                |                         |         |                               |                               |
	| delete  | -p                                             | force-systemd-env-20210310201637-6496          | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:49:41 GMT | Wed, 10 Mar 2021 20:50:17 GMT |
	|         | force-systemd-env-20210310201637-6496          |                                                |                         |         |                               |                               |
	| -p      | cert-options-20210310203249-6496               | cert-options-20210310203249-6496               | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:50:36 GMT | Wed, 10 Mar 2021 20:50:43 GMT |
	|         | ssh openssl x509 -text -noout -in              |                                                |                         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt          |                                                |                         |         |                               |                               |
	| delete  | -p                                             | cert-options-20210310203249-6496               | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:51:10 GMT | Wed, 10 Mar 2021 20:51:56 GMT |
	|         | cert-options-20210310203249-6496               |                                                |                         |         |                               |                               |
	| delete  | -p                                             | disable-driver-mounts-20210310205156-6496      | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:51:57 GMT | Wed, 10 Mar 2021 20:52:02 GMT |
	|         | disable-driver-mounts-20210310205156-6496      |                                                |                         |         |                               |                               |
	| -p      | force-systemd-flag-20210310203447-6496         | force-systemd-flag-20210310203447-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:53:03 GMT | Wed, 10 Mar 2021 20:53:44 GMT |
	|         | ssh docker info --format                       |                                                |                         |         |                               |                               |
	|         |                               |                                                |                         |         |                               |                               |
	| delete  | -p                                             | force-systemd-flag-20210310203447-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:54:07 GMT | Wed, 10 Mar 2021 20:54:36 GMT |
	|         | force-systemd-flag-20210310203447-6496         |                                                |                         |         |                               |                               |
	| stop    | -p                                             | old-k8s-version-20210310204459-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:02:19 GMT | Wed, 10 Mar 2021 21:02:40 GMT |
	|         | old-k8s-version-20210310204459-6496            |                                                |                         |         |                               |                               |
	|         | --alsologtostderr -v=3                         |                                                |                         |         |                               |                               |
	| addons  | enable dashboard -p                            | old-k8s-version-20210310204459-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:02:42 GMT | Wed, 10 Mar 2021 21:02:42 GMT |
	|         | old-k8s-version-20210310204459-6496            |                                                |                         |         |                               |                               |
	| -p      | embed-certs-20210310205017-6496                | embed-certs-20210310205017-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:07:05 GMT | Wed, 10 Mar 2021 21:08:33 GMT |
	|         | logs -n 25                                     |                                                |                         |         |                               |                               |
	| start   | -p                                             | stopped-upgrade-20210310201637-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:52:21 GMT | Wed, 10 Mar 2021 21:09:23 GMT |
	|         | stopped-upgrade-20210310201637-6496            |                                                |                         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                |                                                |                         |         |                               |                               |
	|         | -v=1 --driver=docker                           |                                                |                         |         |                               |                               |
	| logs    | -p                                             | stopped-upgrade-20210310201637-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:09:23 GMT | Wed, 10 Mar 2021 21:10:51 GMT |
	|         | stopped-upgrade-20210310201637-6496            |                                                |                         |         |                               |                               |
	| delete  | -p                                             | stopped-upgrade-20210310201637-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:10:52 GMT | Wed, 10 Mar 2021 21:11:13 GMT |
	|         | stopped-upgrade-20210310201637-6496            |                                                |                         |         |                               |                               |
	| delete  | -p                                             | running-upgrade-20210310201637-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:11:45 GMT | Wed, 10 Mar 2021 21:12:11 GMT |
	|         | running-upgrade-20210310201637-6496            |                                                |                         |         |                               |                               |
	| stop    | -p                                             | embed-certs-20210310205017-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:12:03 GMT | Wed, 10 Mar 2021 21:12:38 GMT |
	|         | embed-certs-20210310205017-6496                |                                                |                         |         |                               |                               |
	|         | --alsologtostderr -v=3                         |                                                |                         |         |                               |                               |
	| addons  | enable dashboard -p                            | embed-certs-20210310205017-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:12:40 GMT | Wed, 10 Mar 2021 21:12:41 GMT |
	|         | embed-certs-20210310205017-6496                |                                                |                         |         |                               |                               |
	| -p      | kubernetes-upgrade-20210310201637-6496         | kubernetes-upgrade-20210310201637-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:11:50 GMT | Wed, 10 Mar 2021 21:15:02 GMT |
	|         | logs -n 25                                     |                                                |                         |         |                               |                               |
	| delete  | -p                                             | kubernetes-upgrade-20210310201637-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:15:15 GMT | Wed, 10 Mar 2021 21:15:46 GMT |
	|         | kubernetes-upgrade-20210310201637-6496         |                                                |                         |         |                               |                               |
	| delete  | -p                                             | missing-upgrade-20210310201637-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:15:38 GMT | Wed, 10 Mar 2021 21:16:03 GMT |
	|         | missing-upgrade-20210310201637-6496            |                                                |                         |         |                               |                               |
	| -p      | default-k8s-different-port-20210310205202-6496 | default-k8s-different-port-20210310205202-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:12:03 GMT | Wed, 10 Mar 2021 21:16:15 GMT |
	|         | logs -n 25                                     |                                                |                         |         |                               |                               |
	| stop    | -p                                             | no-preload-20210310204947-6496                 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:15:57 GMT | Wed, 10 Mar 2021 21:16:31 GMT |
	|         | no-preload-20210310204947-6496                 |                                                |                         |         |                               |                               |
	|         | --alsologtostderr -v=3                         |                                                |                         |         |                               |                               |
	| addons  | enable dashboard -p                            | no-preload-20210310204947-6496                 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:16:33 GMT | Wed, 10 Mar 2021 21:16:34 GMT |
	|         | no-preload-20210310204947-6496                 |                                                |                         |         |                               |                               |
	| delete  | -p                                             | old-k8s-version-20210310204459-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:18:53 GMT | Wed, 10 Mar 2021 21:19:16 GMT |
	|         | old-k8s-version-20210310204459-6496            |                                                |                         |         |                               |                               |
	|---------|------------------------------------------------|------------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/03/10 21:19:17
	* Running on machine: windows-server-1
	* Binary: Built with gc go1.16 for windows/amd64
	* Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	* I0310 21:19:17.185426   13364 out.go:239] Setting OutFile to fd 3016 ...
	* I0310 21:19:17.186416   13364 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 21:19:17.186416   13364 out.go:252] Setting ErrFile to fd 2968...
	* I0310 21:19:17.186416   13364 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 21:19:17.201428   13364 out.go:246] Setting JSON to false
	* I0310 21:19:17.208505   13364 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":36623,"bootTime":1615374534,"procs":114,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	* W0310 21:19:17.209348   13364 start.go:116] gopshost.Virtualization returned error: not implemented yet
	* I0310 21:19:17.611459   13364 out.go:129] * [custom-weave-20210310211916-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	* I0310 21:19:16.288415    8732 retry.go:31] will retry after 3.824918958s: Get "https://127.0.0.1:55195/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:19:17.666878   13364 out.go:129]   - MINIKUBE_LOCATION=10722
	* I0310 21:19:17.672236   13364 driver.go:323] Setting default libvirt URI to qemu:///system
	* I0310 21:19:18.452811   13364 docker.go:119] docker version: linux-20.10.2
	* I0310 21:19:18.466511   13364 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 21:19:19.515678   13364 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0489584s)
	* I0310 21:19:19.516849   13364 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:73 SystemTime:2021-03-10 21:19:19.0331326 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 21:19:19.503369   18444 pod_ready.go:89] error listing pods in "kube-system" namespace, will retry: Get "https://127.0.0.1:55180/api/v1/namespaces/kube-system/pods": net/http: TLS handshake timeout
	* I0310 21:19:19.748915   13364 out.go:129] * Using the docker driver based on user configuration
	* I0310 21:19:19.749572   13364 start.go:276] selected driver: docker
	* I0310 21:19:19.749892   13364 start.go:718] validating driver "docker" against <nil>
	* I0310 21:19:19.749892   13364 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	* I0310 21:19:21.690035   13364 out.go:129] 
	* W0310 21:19:21.690910   13364 out.go:191] X Docker Desktop only has 20001MiB available, you may encounter application deployment failures.
	* W0310 21:19:21.691851   13364 out.go:191] * Suggestion: 
	* 
	*     1. Open the "Docker Desktop" menu by clicking the Docker icon in the system tray
	*     2. Click "Settings"
	*     3. Click "Resources"
	*     4. Increase "Memory" slider bar to 2.25 GB or higher
	*     5. Click "Apply & Restart"
	* W0310 21:19:21.691851   13364 out.go:191] * Documentation: https://docs.docker.com/docker-for-windows/#resources
	* I0310 21:19:21.694252   13364 out.go:129] 
	* I0310 21:19:21.718817   13364 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 21:19:22.642994   13364 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:73 SystemTime:2021-03-10 21:19:22.2236182 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 21:19:22.642994   13364 start_flags.go:253] no existing cluster config was found, will generate one from the flags 
	* I0310 21:19:22.644019   13364 start_flags.go:717] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	* I0310 21:19:22.644019   13364 cni.go:74] Creating CNI manager for "testdata\\weavenet.yaml"
	* I0310 21:19:22.644019   13364 start_flags.go:393] Found "testdata\\weavenet.yaml" CNI - setting NetworkPlugin=cni
	* I0310 21:19:22.644019   13364 start_flags.go:398] config:
	* {Name:custom-weave-20210310211916-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:custom-weave-20210310211916-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker C
RISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 21:19:22.648329   13364 out.go:129] * Starting control plane node custom-weave-20210310211916-6496 in cluster custom-weave-20210310211916-6496
	* I0310 21:19:23.284628   13364 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	* I0310 21:19:23.284628   13364 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	* I0310 21:19:23.286039   13364 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 21:19:23.286039   13364 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 21:19:23.286039   13364 cache.go:54] Caching tarball of preloaded images
	* I0310 21:19:23.286039   13364 preload.go:131] Found C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	* I0310 21:19:23.286772   13364 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on docker
	* I0310 21:19:23.286772   13364 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\config.json ...
	* I0310 21:19:23.286772   13364 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\config.json: {Name:mkc476d656886ec8725c6298ca7e5a7f8fe30c95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 21:19:23.301273   13364 cache.go:185] Successfully downloaded all kic artifacts
	* I0310 21:19:23.301273   13364 start.go:313] acquiring machines lock for custom-weave-20210310211916-6496: {Name:mk446b3e268c75dc4305737c572edd1081e0a5b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:19:23.301273   13364 start.go:317] acquired machines lock for "custom-weave-20210310211916-6496" in 0s
	* I0310 21:19:23.301273   13364 start.go:89] Provisioning new machine with config: &{Name:custom-weave-20210310211916-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:custom-weave-20210310211916-6496 Namespace:default APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
	* I0310 21:19:23.309815   13364 start.go:126] createHost starting for "" (driver="docker")
	* I0310 21:19:20.121056    8732 retry.go:31] will retry after 7.69743562s: Get "https://127.0.0.1:55195/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:19:23.324305   13364 out.go:150] * Creating docker container (CPUs=2, Memory=1800MB) ...
	* I0310 21:19:23.326940   13364 start.go:160] libmachine.API.Create for "custom-weave-20210310211916-6496" (driver="docker")
	* I0310 21:19:23.326940   13364 client.go:168] LocalClient.Create starting
	* I0310 21:19:23.326940   13364 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\ca.pem
	* I0310 21:19:23.326940   13364 main.go:121] libmachine: Decoding PEM data...
	* I0310 21:19:23.326940   13364 main.go:121] libmachine: Parsing certificate...
	* I0310 21:19:23.326940   13364 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\cert.pem
	* I0310 21:19:23.326940   13364 main.go:121] libmachine: Decoding PEM data...
	* I0310 21:19:23.326940   13364 main.go:121] libmachine: Parsing certificate...
	* I0310 21:19:23.355621   13364 cli_runner.go:115] Run: docker network inspect custom-weave-20210310211916-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	* W0310 21:19:23.945979   13364 cli_runner.go:162] docker network inspect custom-weave-20210310211916-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	* I0310 21:19:23.963064   13364 network_create.go:240] running [docker network inspect custom-weave-20210310211916-6496] to gather additional debugging logs...
	* I0310 21:19:23.963064   13364 cli_runner.go:115] Run: docker network inspect custom-weave-20210310211916-6496
	* W0310 21:19:24.531820   13364 cli_runner.go:162] docker network inspect custom-weave-20210310211916-6496 returned with exit code 1
	* I0310 21:19:24.531820   13364 network_create.go:243] error running [docker network inspect custom-weave-20210310211916-6496]: docker network inspect custom-weave-20210310211916-6496: exit status 1
	* stdout:
	* []
	* 
	* stderr:
	* Error: No such network: custom-weave-20210310211916-6496
	* I0310 21:19:24.531820   13364 network_create.go:245] output of [docker network inspect custom-weave-20210310211916-6496]: -- stdout --
	* []
	* 
	* -- /stdout --
	* ** stderr ** 
	* Error: No such network: custom-weave-20210310211916-6496
	* 
	* ** /stderr **
	* I0310 21:19:24.545208   13364 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	* I0310 21:19:25.236662   13364 network.go:193] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	* I0310 21:19:25.237110   13364 network_create.go:91] attempt to create network 192.168.49.0/24 with subnet: custom-weave-20210310211916-6496 and gateway 192.168.49.1 and MTU of 1500 ...
	* I0310 21:19:25.243983   13364 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20210310211916-6496
	* W0310 21:19:25.780916   13364 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20210310211916-6496 returned with exit code 1
	* W0310 21:19:25.782158   13364 out.go:191] ! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts
	* I0310 21:19:25.801237   13364 cli_runner.go:115] Run: docker ps -a --format 
	* I0310 21:19:26.390354   13364 cli_runner.go:115] Run: docker volume create custom-weave-20210310211916-6496 --label name.minikube.sigs.k8s.io=custom-weave-20210310211916-6496 --label created_by.minikube.sigs.k8s.io=true
	* I0310 21:19:27.835487    8732 retry.go:31] will retry after 14.635568968s: Get "https://127.0.0.1:55195/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:19:27.003490   13364 oci.go:102] Successfully created a docker volume custom-weave-20210310211916-6496
	* I0310 21:19:27.015739   13364 cli_runner.go:115] Run: docker run --rm --name custom-weave-20210310211916-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20210310211916-6496 --entrypoint /usr/bin/test -v custom-weave-20210310211916-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib
	* I0310 21:19:35.104840    7648 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (50.4180654s)
	* I0310 21:19:35.104840    7648 ssh_runner.go:100] rm: /preloaded.tar.lz4
	* I0310 21:19:34.628575   18444 pod_ready.go:97] pod "coredns-74ff55c5b-4w6mn" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:07:05 +0000 GMT Reason: Message:}
	* I0310 21:19:34.628575   18444 pod_ready.go:62] duration metric: took 2m44.1338103s to run WaitForPodReadyByLabel for pod with "kube-dns" label in "kube-system" namespace ...
	* I0310 21:19:34.628575   18444 pod_ready.go:59] waiting 4m0s for pod with "etcd" label in "kube-system" namespace to be Ready ...
	* I0310 21:19:34.831294   18444 pod_ready.go:97] pod "etcd-embed-certs-20210310205017-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:03:32 +0000 GMT Reason: Message:}
	* I0310 21:19:34.831697   18444 pod_ready.go:62] duration metric: took 203.1218ms to run WaitForPodReadyByLabel for pod with "etcd" label in "kube-system" namespace ...
	* I0310 21:19:34.831697   18444 pod_ready.go:59] waiting 4m0s for pod with "kube-apiserver" label in "kube-system" namespace to be Ready ...
	* I0310 21:19:35.216363   18444 pod_ready.go:97] pod "kube-apiserver-embed-certs-20210310205017-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:01:39 +0000 GMT Reason: Message:}
	* I0310 21:19:35.216363   18444 pod_ready.go:62] duration metric: took 384.6666ms to run WaitForPodReadyByLabel for pod with "kube-apiserver" label in "kube-system" namespace ...
	* I0310 21:19:35.216363   18444 pod_ready.go:59] waiting 4m0s for pod with "kube-controller-manager" label in "kube-system" namespace to be Ready ...
	* I0310 21:19:36.282262   18444 pod_ready.go:97] pod "kube-controller-manager-embed-certs-20210310205017-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:02:21 +0000 GMT Reason: Message:}
	* I0310 21:19:36.282262   18444 pod_ready.go:62] duration metric: took 1.0659003s to run WaitForPodReadyByLabel for pod with "kube-controller-manager" label in "kube-system" namespace ...
	* I0310 21:19:36.282262   18444 pod_ready.go:59] waiting 4m0s for pod with "kube-proxy" label in "kube-system" namespace to be Ready ...
	* I0310 21:19:31.907812   13364 cli_runner.go:168] Completed: docker run --rm --name custom-weave-20210310211916-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20210310211916-6496 --entrypoint /usr/bin/test -v custom-weave-20210310211916-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib: (4.8919644s)
	* I0310 21:19:31.907920   13364 oci.go:106] Successfully prepared a docker volume custom-weave-20210310211916-6496
	* I0310 21:19:31.908325   13364 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 21:19:31.909054   13364 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 21:19:31.909189   13364 kic.go:175] Starting extracting preloaded images to volume ...
	* I0310 21:19:31.919197   13364 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 21:19:31.919975   13364 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20210310211916-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir
	* W0310 21:19:32.633517   13364 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20210310211916-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	* I0310 21:19:32.633517   13364 kic.go:182] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20210310211916-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	* stdout:
	* 
	* stderr:
	* docker: Error response from daemon: status code not OK but 500: System.Exception: The notification platform is unavailable.
	* 
	* The notification platform is unavailable.
	* 
	*    at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)
	*    at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.WPF\PromptShareDirectory.cs:line 53
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__8.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 95
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 55
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.HttpApi\Controllers\FilesharingController.cs:line 21
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()
	* CreateToastNotifier
	* Windows.UI, Version=255.255.255.255, Culture=neutral, PublicKeyToken=null, ContentType=WindowsRuntime
	* Windows.UI.Notifications.ToastNotificationManager
	* Windows.UI.Notifications.ToastNotifier CreateToastNotifier(System.String)
	* RestrictedDescription: The notification platform is unavailable.
	* See 'docker run --help'.
	* I0310 21:19:32.931561   13364 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0123655s)
	* I0310 21:19:32.931891   13364 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:73 SystemTime:2021-03-10 21:19:32.4721638 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://i
ndex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:
[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 21:19:32.941872   13364 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	* I0310 21:19:33.895863   13364 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20210310211916-6496 --name custom-weave-20210310211916-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20210310211916-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20210310211916-6496 --volume custom-weave-20210310211916-6496:/var --security-opt apparmor=unconfined --memory=1800mb --memory-swap=1800mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
	* I0310 21:19:36.735853    7648 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	* I0310 21:19:36.794093    7648 ssh_runner.go:316] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3125 bytes)
	* I0310 21:19:37.058988    7648 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	* I0310 21:19:38.142432    7648 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.0834458s)
	* I0310 21:19:38.153657    7648 ssh_runner.go:149] Run: sudo systemctl restart docker
	* I0310 21:19:36.934987   18444 pod_ready.go:97] pod "kube-proxy-p6jnj" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:05:51 +0000 GMT Reason: Message:}
	* I0310 21:19:36.935269   18444 pod_ready.go:62] duration metric: took 653.0075ms to run WaitForPodReadyByLabel for pod with "kube-proxy" label in "kube-system" namespace ...
	* I0310 21:19:36.935269   18444 pod_ready.go:59] waiting 4m0s for pod with "kube-scheduler" label in "kube-system" namespace to be Ready ...
	* I0310 21:19:37.212454   18444 pod_ready.go:97] pod "kube-scheduler-embed-certs-20210310205017-6496" in "kube-system" namespace is Ready: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:01:35 +0000 GMT Reason: Message:}
	* I0310 21:19:37.212454   18444 pod_ready.go:62] duration metric: took 277.1856ms to run WaitForPodReadyByLabel for pod with "kube-scheduler" label in "kube-system" namespace ...
	* I0310 21:19:37.212454   18444 pod_ready.go:39] duration metric: took 2m46.718004s for extra waiting for kube-system core pods to be Ready ...
	* I0310 21:19:37.212454   18444 api_server.go:48] waiting for apiserver process to appear ...
	* I0310 21:19:37.223334   18444 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	* I0310 21:19:39.773679   18444 ssh_runner.go:189] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.5503486s)
	* I0310 21:19:39.773679   18444 api_server.go:68] duration metric: took 2.5612286s to wait for apiserver process to appear ...
	* I0310 21:19:39.773679   18444 api_server.go:84] waiting for apiserver healthz status ...
	* I0310 21:19:39.773679   18444 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55180/healthz ...
	* I0310 21:19:40.429243   18444 api_server.go:241] https://127.0.0.1:55180/healthz returned 500:
	* [+]ping ok
	* [+]log ok
	* [+]etcd ok
	* [+]poststarthook/start-kube-apiserver-admission-initializer ok
	* [+]poststarthook/generic-apiserver-start-informers ok
	* [+]poststarthook/priority-and-fairness-config-consumer ok
	* [+]poststarthook/priority-and-fairness-filter ok
	* [+]poststarthook/start-apiextensions-informers ok
	* [+]poststarthook/start-apiextensions-controllers ok
	* [+]poststarthook/crd-informer-synced ok
	* [+]poststarthook/bootstrap-controller ok
	* [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	* [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	* [+]poststarthook/priority-and-fairness-config-producer ok
	* [+]poststarthook/start-cluster-authentication-info-controller ok
	* [+]poststarthook/aggregator-reload-proxy-client-cert ok
	* [+]poststarthook/start-kube-aggregator-informers ok
	* [+]poststarthook/apiservice-registration-controller ok
	* [+]poststarthook/apiservice-status-available-controller ok
	* [+]poststarthook/kube-apiserver-autoregistration ok
	* [+]autoregister-completion ok
	* [+]poststarthook/apiservice-openapi-controller ok
	* healthz check failed
	* W0310 21:19:40.430008   18444 api_server.go:99] status: https://127.0.0.1:55180/healthz returned error 500:
	* [+]ping ok
	* [+]log ok
	* [+]etcd ok
	* [+]poststarthook/start-kube-apiserver-admission-initializer ok
	* [+]poststarthook/generic-apiserver-start-informers ok
	* [+]poststarthook/priority-and-fairness-config-consumer ok
	* [+]poststarthook/priority-and-fairness-filter ok
	* [+]poststarthook/start-apiextensions-informers ok
	* [+]poststarthook/start-apiextensions-controllers ok
	* [+]poststarthook/crd-informer-synced ok
	* [+]poststarthook/bootstrap-controller ok
	* [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	* [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	* [+]poststarthook/priority-and-fairness-config-producer ok
	* [+]poststarthook/start-cluster-authentication-info-controller ok
	* [+]poststarthook/aggregator-reload-proxy-client-cert ok
	* [+]poststarthook/start-kube-aggregator-informers ok
	* [+]poststarthook/apiservice-registration-controller ok
	* [+]poststarthook/apiservice-status-available-controller ok
	* [+]poststarthook/kube-apiserver-autoregistration ok
	* [+]autoregister-completion ok
	* [+]poststarthook/apiservice-openapi-controller ok
	* healthz check failed
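	* [editor note: the verbose /healthz responses above are line-oriented — "[+]" marks a passing check, "[-]" a failing one. A minimal sketch of extracting the failing checks from such output; the helper name is illustrative and not part of minikube:]

```python
import re

def failing_checks(healthz_output: str) -> list[str]:
    """Return the names of checks marked '[-]' in a verbose /healthz response."""
    failures = []
    for line in healthz_output.splitlines():
        # A failing check looks like: "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld"
        m = re.match(r"\[-\](\S+)", line.strip())
        if m:
            failures.append(m.group(1))
    return failures
```

	* [in the 500 responses logged above, this would report only `poststarthook/rbac/bootstrap-roles`, which matches the "healthz check failed" summary line.]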
	* I0310 21:19:40.930915   18444 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55180/healthz ...
	* I0310 21:19:38.229689   13364 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20210310211916-6496 --name custom-weave-20210310211916-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20210310211916-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20210310211916-6496 --volume custom-weave-20210310211916-6496:/var --security-opt apparmor=unconfined --memory=1800mb --memory-swap=1800mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e: (4.3335497s)
	* I0310 21:19:38.241967   13364 cli_runner.go:115] Run: docker container inspect custom-weave-20210310211916-6496 --format=
	* I0310 21:19:38.858868   13364 cli_runner.go:115] Run: docker container inspect custom-weave-20210310211916-6496 --format=
	* I0310 21:19:39.421271   13364 cli_runner.go:115] Run: docker exec custom-weave-20210310211916-6496 stat /var/lib/dpkg/alternatives/iptables
	* I0310 21:19:40.485437   13364 cli_runner.go:168] Completed: docker exec custom-weave-20210310211916-6496 stat /var/lib/dpkg/alternatives/iptables: (1.064168s)
	* I0310 21:19:40.485437   13364 oci.go:278] the created container "custom-weave-20210310211916-6496" has a running status.
	* I0310 21:19:40.486283   13364 kic.go:206] Creating ssh key for kic: C:\Users\jenkins\.minikube\machines\custom-weave-20210310211916-6496\id_rsa...
	* I0310 21:19:40.889920   13364 kic_runner.go:188] docker (temp): C:\Users\jenkins\.minikube\machines\custom-weave-20210310211916-6496\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	* I0310 21:19:42.043920   16712 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (41.3322272s)
	* I0310 21:19:42.043920   16712 ssh_runner.go:100] rm: /preloaded.tar.lz4
	* I0310 21:19:42.486090    8732 retry.go:31] will retry after 28.406662371s: Get "https://127.0.0.1:55195/api/v1/namespaces/kube-system/pods": EOF
	* I0310 21:19:44.903376    7648 ssh_runner.go:189] Completed: sudo systemctl restart docker: (6.7497275s)
	* I0310 21:19:44.913905    7648 ssh_runner.go:149] Run: docker images --format :
	* I0310 21:19:45.911762    7648 docker.go:423] Got preloaded images: -- stdout --
	* k8s.gcr.io/kube-proxy:v1.20.2
	* k8s.gcr.io/kube-controller-manager:v1.20.2
	* k8s.gcr.io/kube-apiserver:v1.20.2
	* k8s.gcr.io/kube-scheduler:v1.20.2
	* kubernetesui/dashboard:v2.1.0
	* gcr.io/k8s-minikube/storage-provisioner:v4
	* k8s.gcr.io/etcd:3.4.13-0
	* k8s.gcr.io/coredns:1.7.0
	* kubernetesui/metrics-scraper:v1.0.4
	* k8s.gcr.io/pause:3.2
	* 
	* -- /stdout --
	* I0310 21:19:45.911965    7648 cache_images.go:73] Images are preloaded, skipping loading
	* I0310 21:19:45.919309    7648 ssh_runner.go:149] Run: docker info --format 
	* I0310 21:19:41.627200   18444 api_server.go:241] https://127.0.0.1:55180/healthz returned 500:
	* [+]ping ok
	* [+]log ok
	* [+]etcd ok
	* [+]poststarthook/start-kube-apiserver-admission-initializer ok
	* [+]poststarthook/generic-apiserver-start-informers ok
	* [+]poststarthook/priority-and-fairness-config-consumer ok
	* [+]poststarthook/priority-and-fairness-filter ok
	* [+]poststarthook/start-apiextensions-informers ok
	* [+]poststarthook/start-apiextensions-controllers ok
	* [+]poststarthook/crd-informer-synced ok
	* [+]poststarthook/bootstrap-controller ok
	* [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	* [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	* [+]poststarthook/priority-and-fairness-config-producer ok
	* [+]poststarthook/start-cluster-authentication-info-controller ok
	* [+]poststarthook/aggregator-reload-proxy-client-cert ok
	* [+]poststarthook/start-kube-aggregator-informers ok
	* [+]poststarthook/apiservice-registration-controller ok
	* [+]poststarthook/apiservice-status-available-controller ok
	* [+]poststarthook/kube-apiserver-autoregistration ok
	* [+]autoregister-completion ok
	* [+]poststarthook/apiservice-openapi-controller ok
	* healthz check failed
	* W0310 21:19:41.627321   18444 api_server.go:99] status: https://127.0.0.1:55180/healthz returned error 500:
	* [+]ping ok
	* [+]log ok
	* [+]etcd ok
	* [+]poststarthook/start-kube-apiserver-admission-initializer ok
	* [+]poststarthook/generic-apiserver-start-informers ok
	* [+]poststarthook/priority-and-fairness-config-consumer ok
	* [+]poststarthook/priority-and-fairness-filter ok
	* [+]poststarthook/start-apiextensions-informers ok
	* [+]poststarthook/start-apiextensions-controllers ok
	* [+]poststarthook/crd-informer-synced ok
	* [+]poststarthook/bootstrap-controller ok
	* [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	* [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	* [+]poststarthook/priority-and-fairness-config-producer ok
	* [+]poststarthook/start-cluster-authentication-info-controller ok
	* [+]poststarthook/aggregator-reload-proxy-client-cert ok
	* [+]poststarthook/start-kube-aggregator-informers ok
	* [+]poststarthook/apiservice-registration-controller ok
	* [+]poststarthook/apiservice-status-available-controller ok
	* [+]poststarthook/kube-apiserver-autoregistration ok
	* [+]autoregister-completion ok
	* [+]poststarthook/apiservice-openapi-controller ok
	* healthz check failed
	* I0310 21:19:41.931040   18444 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55180/healthz ...
	* I0310 21:19:42.022519   18444 api_server.go:241] https://127.0.0.1:55180/healthz returned 500:
	* [+]ping ok
	* [+]log ok
	* [+]etcd ok
	* [+]poststarthook/start-kube-apiserver-admission-initializer ok
	* [+]poststarthook/generic-apiserver-start-informers ok
	* [+]poststarthook/priority-and-fairness-config-consumer ok
	* [+]poststarthook/priority-and-fairness-filter ok
	* [+]poststarthook/start-apiextensions-informers ok
	* [+]poststarthook/start-apiextensions-controllers ok
	* [+]poststarthook/crd-informer-synced ok
	* [+]poststarthook/bootstrap-controller ok
	* [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	* [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	* [+]poststarthook/priority-and-fairness-config-producer ok
	* [+]poststarthook/start-cluster-authentication-info-controller ok
	* [+]poststarthook/aggregator-reload-proxy-client-cert ok
	* [+]poststarthook/start-kube-aggregator-informers ok
	* [+]poststarthook/apiservice-registration-controller ok
	* [+]poststarthook/apiservice-status-available-controller ok
	* [+]poststarthook/kube-apiserver-autoregistration ok
	* [+]autoregister-completion ok
	* [+]poststarthook/apiservice-openapi-controller ok
	* healthz check failed
	* W0310 21:19:42.022519   18444 api_server.go:99] status: https://127.0.0.1:55180/healthz returned error 500:
	* [+]ping ok
	* [+]log ok
	* [+]etcd ok
	* [+]poststarthook/start-kube-apiserver-admission-initializer ok
	* [+]poststarthook/generic-apiserver-start-informers ok
	* [+]poststarthook/priority-and-fairness-config-consumer ok
	* [+]poststarthook/priority-and-fairness-filter ok
	* [+]poststarthook/start-apiextensions-informers ok
	* [+]poststarthook/start-apiextensions-controllers ok
	* [+]poststarthook/crd-informer-synced ok
	* [+]poststarthook/bootstrap-controller ok
	* [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	* [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	* [+]poststarthook/priority-and-fairness-config-producer ok
	* [+]poststarthook/start-cluster-authentication-info-controller ok
	* [+]poststarthook/aggregator-reload-proxy-client-cert ok
	* [+]poststarthook/start-kube-aggregator-informers ok
	* [+]poststarthook/apiservice-registration-controller ok
	* [+]poststarthook/apiservice-status-available-controller ok
	* [+]poststarthook/kube-apiserver-autoregistration ok
	* [+]autoregister-completion ok
	* [+]poststarthook/apiservice-openapi-controller ok
	* healthz check failed
	* I0310 21:19:42.430808   18444 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55180/healthz ...
	* I0310 21:19:42.946260   18444 api_server.go:241] https://127.0.0.1:55180/healthz returned 500:
	* [+]ping ok
	* [+]log ok
	* [+]etcd ok
	* [+]poststarthook/start-kube-apiserver-admission-initializer ok
	* [+]poststarthook/generic-apiserver-start-informers ok
	* [+]poststarthook/priority-and-fairness-config-consumer ok
	* [+]poststarthook/priority-and-fairness-filter ok
	* [+]poststarthook/start-apiextensions-informers ok
	* [+]poststarthook/start-apiextensions-controllers ok
	* [+]poststarthook/crd-informer-synced ok
	* [+]poststarthook/bootstrap-controller ok
	* [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	* [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	* [+]poststarthook/priority-and-fairness-config-producer ok
	* [+]poststarthook/start-cluster-authentication-info-controller ok
	* [+]poststarthook/aggregator-reload-proxy-client-cert ok
	* [+]poststarthook/start-kube-aggregator-informers ok
	* [+]poststarthook/apiservice-registration-controller ok
	* [+]poststarthook/apiservice-status-available-controller ok
	* [+]poststarthook/kube-apiserver-autoregistration ok
	* [+]autoregister-completion ok
	* [+]poststarthook/apiservice-openapi-controller ok
	* healthz check failed
	* W0310 21:19:42.947201   18444 api_server.go:99] status: https://127.0.0.1:55180/healthz returned error 500:
	* [+]ping ok
	* [+]log ok
	* [+]etcd ok
	* [+]poststarthook/start-kube-apiserver-admission-initializer ok
	* [+]poststarthook/generic-apiserver-start-informers ok
	* [+]poststarthook/priority-and-fairness-config-consumer ok
	* [+]poststarthook/priority-and-fairness-filter ok
	* [+]poststarthook/start-apiextensions-informers ok
	* [+]poststarthook/start-apiextensions-controllers ok
	* [+]poststarthook/crd-informer-synced ok
	* [+]poststarthook/bootstrap-controller ok
	* [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	* [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	* [+]poststarthook/priority-and-fairness-config-producer ok
	* [+]poststarthook/start-cluster-authentication-info-controller ok
	* [+]poststarthook/aggregator-reload-proxy-client-cert ok
	* [+]poststarthook/start-kube-aggregator-informers ok
	* [+]poststarthook/apiservice-registration-controller ok
	* [+]poststarthook/apiservice-status-available-controller ok
	* [+]poststarthook/kube-apiserver-autoregistration ok
	* [+]autoregister-completion ok
	* [+]poststarthook/apiservice-openapi-controller ok
	* healthz check failed
	* I0310 21:19:43.430619   18444 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55180/healthz ...
	* I0310 21:19:44.610312   18444 api_server.go:241] https://127.0.0.1:55180/healthz returned 500:
	* [+]ping ok
	* [+]log ok
	* [+]etcd ok
	* [+]poststarthook/start-kube-apiserver-admission-initializer ok
	* [+]poststarthook/generic-apiserver-start-informers ok
	* [+]poststarthook/priority-and-fairness-config-consumer ok
	* [+]poststarthook/priority-and-fairness-filter ok
	* [+]poststarthook/start-apiextensions-informers ok
	* [+]poststarthook/start-apiextensions-controllers ok
	* [+]poststarthook/crd-informer-synced ok
	* [+]poststarthook/bootstrap-controller ok
	* [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	* [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	* [+]poststarthook/priority-and-fairness-config-producer ok
	* [+]poststarthook/start-cluster-authentication-info-controller ok
	* [+]poststarthook/aggregator-reload-proxy-client-cert ok
	* [+]poststarthook/start-kube-aggregator-informers ok
	* [+]poststarthook/apiservice-registration-controller ok
	* [+]poststarthook/apiservice-status-available-controller ok
	* [+]poststarthook/kube-apiserver-autoregistration ok
	* [+]autoregister-completion ok
	* [+]poststarthook/apiservice-openapi-controller ok
	* healthz check failed
	* W0310 21:19:44.610312   18444 api_server.go:99] status: https://127.0.0.1:55180/healthz returned error 500:
	* [+]ping ok
	* [+]log ok
	* [+]etcd ok
	* [+]poststarthook/start-kube-apiserver-admission-initializer ok
	* [+]poststarthook/generic-apiserver-start-informers ok
	* [+]poststarthook/priority-and-fairness-config-consumer ok
	* [+]poststarthook/priority-and-fairness-filter ok
	* [+]poststarthook/start-apiextensions-informers ok
	* [+]poststarthook/start-apiextensions-controllers ok
	* [+]poststarthook/crd-informer-synced ok
	* [+]poststarthook/bootstrap-controller ok
	* [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	* [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	* [+]poststarthook/priority-and-fairness-config-producer ok
	* [+]poststarthook/start-cluster-authentication-info-controller ok
	* [+]poststarthook/aggregator-reload-proxy-client-cert ok
	* [+]poststarthook/start-kube-aggregator-informers ok
	* [+]poststarthook/apiservice-registration-controller ok
	* [+]poststarthook/apiservice-status-available-controller ok
	* [+]poststarthook/kube-apiserver-autoregistration ok
	* [+]autoregister-completion ok
	* [+]poststarthook/apiservice-openapi-controller ok
	* healthz check failed
	* I0310 21:19:44.931360   18444 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55180/healthz ...
	* I0310 21:19:45.263335   18444 api_server.go:241] https://127.0.0.1:55180/healthz returned 500:
	* [+]ping ok
	* [+]log ok
	* [+]etcd ok
	* [+]poststarthook/start-kube-apiserver-admission-initializer ok
	* [+]poststarthook/generic-apiserver-start-informers ok
	* [+]poststarthook/priority-and-fairness-config-consumer ok
	* [+]poststarthook/priority-and-fairness-filter ok
	* [+]poststarthook/start-apiextensions-informers ok
	* [+]poststarthook/start-apiextensions-controllers ok
	* [+]poststarthook/crd-informer-synced ok
	* [+]poststarthook/bootstrap-controller ok
	* [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	* [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	* [+]poststarthook/priority-and-fairness-config-producer ok
	* [+]poststarthook/start-cluster-authentication-info-controller ok
	* [+]poststarthook/aggregator-reload-proxy-client-cert ok
	* [+]poststarthook/start-kube-aggregator-informers ok
	* [+]poststarthook/apiservice-registration-controller ok
	* [+]poststarthook/apiservice-status-available-controller ok
	* [+]poststarthook/kube-apiserver-autoregistration ok
	* [+]autoregister-completion ok
	* [+]poststarthook/apiservice-openapi-controller ok
	* healthz check failed
	* W0310 21:19:45.263668   18444 api_server.go:99] status: https://127.0.0.1:55180/healthz returned error 500:
	* [+]ping ok
	* [+]log ok
	* [+]etcd ok
	* [+]poststarthook/start-kube-apiserver-admission-initializer ok
	* [+]poststarthook/generic-apiserver-start-informers ok
	* [+]poststarthook/priority-and-fairness-config-consumer ok
	* [+]poststarthook/priority-and-fairness-filter ok
	* [+]poststarthook/start-apiextensions-informers ok
	* [+]poststarthook/start-apiextensions-controllers ok
	* [+]poststarthook/crd-informer-synced ok
	* [+]poststarthook/bootstrap-controller ok
	* [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	* [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	* [+]poststarthook/priority-and-fairness-config-producer ok
	* [+]poststarthook/start-cluster-authentication-info-controller ok
	* [+]poststarthook/aggregator-reload-proxy-client-cert ok
	* [+]poststarthook/start-kube-aggregator-informers ok
	* [+]poststarthook/apiservice-registration-controller ok
	* [+]poststarthook/apiservice-status-available-controller ok
	* [+]poststarthook/kube-apiserver-autoregistration ok
	* [+]autoregister-completion ok
	* [+]poststarthook/apiservice-openapi-controller ok
	* healthz check failed
	* I0310 21:19:45.431546   18444 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55180/healthz ...
	* I0310 21:19:46.048370   18444 api_server.go:241] https://127.0.0.1:55180/healthz returned 500:
	* [+]ping ok
	* [+]log ok
	* [+]etcd ok
	* [+]poststarthook/start-kube-apiserver-admission-initializer ok
	* [+]poststarthook/generic-apiserver-start-informers ok
	* [+]poststarthook/priority-and-fairness-config-consumer ok
	* [+]poststarthook/priority-and-fairness-filter ok
	* [+]poststarthook/start-apiextensions-informers ok
	* [+]poststarthook/start-apiextensions-controllers ok
	* [+]poststarthook/crd-informer-synced ok
	* [+]poststarthook/bootstrap-controller ok
	* [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	* [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	* [+]poststarthook/priority-and-fairness-config-producer ok
	* [+]poststarthook/start-cluster-authentication-info-controller ok
	* [+]poststarthook/aggregator-reload-proxy-client-cert ok
	* [+]poststarthook/start-kube-aggregator-informers ok
	* [+]poststarthook/apiservice-registration-controller ok
	* [+]poststarthook/apiservice-status-available-controller ok
	* [+]poststarthook/kube-apiserver-autoregistration ok
	* [+]autoregister-completion ok
	* [+]poststarthook/apiservice-openapi-controller ok
	* healthz check failed
	* W0310 21:19:46.048370   18444 api_server.go:99] status: https://127.0.0.1:55180/healthz returned error 500:
	* [+]ping ok
	* [+]log ok
	* [+]etcd ok
	* [+]poststarthook/start-kube-apiserver-admission-initializer ok
	* [+]poststarthook/generic-apiserver-start-informers ok
	* [+]poststarthook/priority-and-fairness-config-consumer ok
	* [+]poststarthook/priority-and-fairness-filter ok
	* [+]poststarthook/start-apiextensions-informers ok
	* [+]poststarthook/start-apiextensions-controllers ok
	* [+]poststarthook/crd-informer-synced ok
	* [+]poststarthook/bootstrap-controller ok
	* [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	* [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	* [+]poststarthook/priority-and-fairness-config-producer ok
	* [+]poststarthook/start-cluster-authentication-info-controller ok
	* [+]poststarthook/aggregator-reload-proxy-client-cert ok
	* [+]poststarthook/start-kube-aggregator-informers ok
	* [+]poststarthook/apiservice-registration-controller ok
	* [+]poststarthook/apiservice-status-available-controller ok
	* [+]poststarthook/kube-apiserver-autoregistration ok
	* [+]autoregister-completion ok
	* [+]poststarthook/apiservice-openapi-controller ok
	* healthz check failed
	* I0310 21:19:46.430636   18444 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55180/healthz ...
	* I0310 21:19:42.581972   13364 cli_runner.go:115] Run: docker container inspect custom-weave-20210310211916-6496 --format=
	* I0310 21:19:43.189145   13364 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	* I0310 21:19:43.189252   13364 kic_runner.go:115] Args: [docker exec --privileged custom-weave-20210310211916-6496 chown docker:docker /home/docker/.ssh/authorized_keys]
	* I0310 21:19:44.841461   13364 kic_runner.go:124] Done: [docker exec --privileged custom-weave-20210310211916-6496 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.6522103s)
	* I0310 21:19:44.841461   13364 kic.go:240] ensuring only current user has permissions to key file located at : C:\Users\jenkins\.minikube\machines\custom-weave-20210310211916-6496\id_rsa...
	* I0310 21:19:45.703061   13364 cli_runner.go:115] Run: docker container inspect custom-weave-20210310211916-6496 --format=
	* I0310 21:19:46.275015   13364 machine.go:88] provisioning docker machine ...
	* I0310 21:19:46.275015   13364 ubuntu.go:169] provisioning hostname "custom-weave-20210310211916-6496"
	* I0310 21:19:46.284451   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	* I0310 21:19:46.600720   18752 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856: (44.0680779s)
	* I0310 21:19:46.600720   18752 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 from cache
	* I0310 21:19:46.600955   18752 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748
	* I0310 21:19:46.616637   18752 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748
	* I0310 21:19:43.937424   16712 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	* I0310 21:19:43.993603   16712 ssh_runner.go:316] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3125 bytes)
	* I0310 21:19:44.143940   16712 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	* I0310 21:19:45.194715   16712 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.0507764s)
	* I0310 21:19:45.207579   16712 ssh_runner.go:149] Run: sudo systemctl restart docker
	* I0310 21:19:47.858497    7648 ssh_runner.go:189] Completed: docker info --format : (1.9391903s)
	* I0310 21:19:47.858497    7648 cni.go:74] Creating CNI manager for "cilium"
	* I0310 21:19:47.858497    7648 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	* I0310 21:19:47.858497    7648 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.2 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cilium-20210310211546-6496 NodeName:cilium-20210310211546-6496 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	* I0310 21:19:47.858497    7648 kubeadm.go:154] kubeadm config:
	* apiVersion: kubeadm.k8s.io/v1beta2
	* kind: InitConfiguration
	* localAPIEndpoint:
	*   advertiseAddress: 172.17.0.2
	*   bindPort: 8443
	* bootstrapTokens:
	*   - groups:
	*       - system:bootstrappers:kubeadm:default-node-token
	*     ttl: 24h0m0s
	*     usages:
	*       - signing
	*       - authentication
	* nodeRegistration:
	*   criSocket: /var/run/dockershim.sock
	*   name: "cilium-20210310211546-6496"
	*   kubeletExtraArgs:
	*     node-ip: 172.17.0.2
	*   taints: []
	* ---
	* apiVersion: kubeadm.k8s.io/v1beta2
	* kind: ClusterConfiguration
	* apiServer:
	*   certSANs: ["127.0.0.1", "localhost", "172.17.0.2"]
	*   extraArgs:
	*     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	* controllerManager:
	*   extraArgs:
	*     allocate-node-cidrs: "true"
	*     leader-elect: "false"
	* scheduler:
	*   extraArgs:
	*     leader-elect: "false"
	* certificatesDir: /var/lib/minikube/certs
	* clusterName: mk
	* controlPlaneEndpoint: control-plane.minikube.internal:8443
	* dns:
	*   type: CoreDNS
	* etcd:
	*   local:
	*     dataDir: /var/lib/minikube/etcd
	*     extraArgs:
	*       proxy-refresh-interval: "70000"
	* kubernetesVersion: v1.20.2
	* networking:
	*   dnsDomain: cluster.local
	*   podSubnet: "10.244.0.0/16"
	*   serviceSubnet: 10.96.0.0/12
	* ---
	* apiVersion: kubelet.config.k8s.io/v1beta1
	* kind: KubeletConfiguration
	* authentication:
	*   x509:
	*     clientCAFile: /var/lib/minikube/certs/ca.crt
	* cgroupDriver: cgroupfs
	* clusterDomain: "cluster.local"
	* # disable disk resource management by default
	* imageGCHighThresholdPercent: 100
	* evictionHard:
	*   nodefs.available: "0%"
	*   nodefs.inodesFree: "0%"
	*   imagefs.available: "0%"
	* failSwapOn: false
	* staticPodPath: /etc/kubernetes/manifests
	* ---
	* apiVersion: kubeproxy.config.k8s.io/v1alpha1
	* kind: KubeProxyConfiguration
	* clusterCIDR: "10.244.0.0/16"
	* metricsBindAddress: 0.0.0.0:10249
	* 
	* I0310 21:19:47.858497    7648 kubeadm.go:919] kubelet [Unit]
	* Wants=docker.socket
	* 
	* [Service]
	* ExecStart=
	* ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=cilium-20210310211546-6496 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=172.17.0.2
	* 
	* [Install]
	*  config:
	* {KubernetesVersion:v1.20.2 ClusterName:cilium-20210310211546-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:}
	* I0310 21:19:47.877206    7648 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
	* I0310 21:19:48.018730    7648 binaries.go:44] Found k8s binaries, skipping transfer
	* I0310 21:19:48.031463    7648 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	* I0310 21:19:48.143059    7648 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	* I0310 21:19:48.404411    7648 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	* I0310 21:19:48.698866    7648 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1852 bytes)
	* I0310 21:19:49.196109    7648 ssh_runner.go:149] Run: grep 172.17.0.2	control-plane.minikube.internal$ /etc/hosts
	* I0310 21:19:49.244902    7648 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "172.17.0.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	* I0310 21:19:49.470157    7648 certs.go:52] Setting up C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496 for IP: 172.17.0.2
	* I0310 21:19:49.471061    7648 certs.go:171] skipping minikubeCA CA generation: C:\Users\jenkins\.minikube\ca.key
	* I0310 21:19:49.471453    7648 certs.go:171] skipping proxyClientCA CA generation: C:\Users\jenkins\.minikube\proxy-client-ca.key
	* I0310 21:19:49.472272    7648 certs.go:275] skipping minikube-user signed cert generation: C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\client.key
	* I0310 21:19:49.472489    7648 certs.go:279] generating minikube signed cert: C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\apiserver.key.7b749c5f
	* I0310 21:19:49.472489    7648 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\apiserver.crt.7b749c5f with IP's: [172.17.0.2 10.96.0.1 127.0.0.1 10.0.0.1]
	* I0310 21:19:49.919364    7648 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\apiserver.crt.7b749c5f ...
	* I0310 21:19:49.919690    7648 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\apiserver.crt.7b749c5f: {Name:mk97460ca42861b0d6a09ba14b19b51fe8ab3377 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 21:19:49.939818    7648 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\apiserver.key.7b749c5f ...
	* I0310 21:19:49.940908    7648 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\apiserver.key.7b749c5f: {Name:mk94cc14d9783b08a03c5b374844574766387a4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 21:19:49.956255    7648 certs.go:290] copying C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\apiserver.crt.7b749c5f -> C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\apiserver.crt
	* I0310 21:19:49.961134    7648 certs.go:294] copying C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\apiserver.key.7b749c5f -> C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\apiserver.key
	* I0310 21:19:49.964364    7648 certs.go:279] generating aggregator signed cert: C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\proxy-client.key
	* I0310 21:19:49.964364    7648 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\proxy-client.crt with IP's: []
	* I0310 21:19:50.187942    7648 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\proxy-client.crt ...
	* I0310 21:19:50.187942    7648 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\proxy-client.crt: {Name:mkb3448c74da2f7fe3a703692cc6730ccf6431f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 21:19:50.203759    7648 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\proxy-client.key ...
	* I0310 21:19:50.203759    7648 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\proxy-client.key: {Name:mkf4cd66eceb549aaf0c768a7cee083da4c92e21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 21:19:50.226890    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992.pem (1338 bytes)
	* W0310 21:19:50.227541    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.227747    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140.pem (1338 bytes)
	* W0310 21:19:50.227747    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.227747    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156.pem (1338 bytes)
	* W0310 21:19:50.227747    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.227747    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056.pem (1338 bytes)
	* W0310 21:19:50.229146    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.229621    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476.pem (1338 bytes)
	* W0310 21:19:50.230246    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.230545    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728.pem (1338 bytes)
	* W0310 21:19:50.231328    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.231911    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984.pem (1338 bytes)
	* W0310 21:19:50.232799    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.233388    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232.pem (1338 bytes)
	* W0310 21:19:50.233966    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.235163    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512.pem (1338 bytes)
	* W0310 21:19:50.235163    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.235900    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056.pem (1338 bytes)
	* W0310 21:19:50.235900    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.235900    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516.pem (1338 bytes)
	* W0310 21:19:50.236997    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.236997    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352.pem (1338 bytes)
	* W0310 21:19:50.236997    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.237758    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920.pem (1338 bytes)
	* W0310 21:19:50.237758    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.237758    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052.pem (1338 bytes)
	* W0310 21:19:50.238350    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.238626    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452.pem (1338 bytes)
	* W0310 21:19:50.238626    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.238626    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588.pem (1338 bytes)
	* W0310 21:19:50.239487    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.239487    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944.pem (1338 bytes)
	* W0310 21:19:50.239487    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.239487    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040.pem (1338 bytes)
	* W0310 21:19:50.240474    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.240474    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172.pem (1338 bytes)
	* W0310 21:19:50.240474    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.240474    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372.pem (1338 bytes)
	* W0310 21:19:50.241390    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.241390    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396.pem (1338 bytes)
	* W0310 21:19:50.241390    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.241390    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700.pem (1338 bytes)
	* W0310 21:19:50.242247    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.242247    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736.pem (1338 bytes)
	* W0310 21:19:50.242247    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.242247    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368.pem (1338 bytes)
	* W0310 21:19:50.242247    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.243245    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492.pem (1338 bytes)
	* W0310 21:19:50.243245    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.243245    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496.pem (1338 bytes)
	* W0310 21:19:50.248272    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.248272    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552.pem (1338 bytes)
	* W0310 21:19:50.249247    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.249247    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692.pem (1338 bytes)
	* W0310 21:19:50.249247    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.249247    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856.pem (1338 bytes)
	* W0310 21:19:50.249247    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.250254    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024.pem (1338 bytes)
	* W0310 21:19:50.250254    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.250254    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160.pem (1338 bytes)
	* W0310 21:19:50.250254    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.250254    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432.pem (1338 bytes)
	* W0310 21:19:50.251230    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.251230    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440.pem (1338 bytes)
	* W0310 21:19:50.251230    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.251230    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452.pem (1338 bytes)
	* W0310 21:19:50.251230    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.251230    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800.pem (1338 bytes)
	* W0310 21:19:50.252246    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.252246    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464.pem (1338 bytes)
	* W0310 21:19:50.252246    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.252246    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748.pem (1338 bytes)
	* W0310 21:19:50.252246    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.253272    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088.pem (1338 bytes)
	* W0310 21:19:50.253272    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.253272    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520.pem (1338 bytes)
	* W0310 21:19:50.253272    7648 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:50.253272    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca-key.pem (1679 bytes)
	* I0310 21:19:50.254262    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca.pem (1078 bytes)
	* I0310 21:19:50.254262    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\cert.pem (1123 bytes)
	* I0310 21:19:50.254262    7648 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\key.pem (1679 bytes)
	* I0310 21:19:50.261236    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	* I0310 21:19:50.455721    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	* I0310 21:19:50.822265    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	* I0310 21:19:51.263035    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\cilium-20210310211546-6496\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
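The cert scan above (certs.go) skips the zero-byte `*_empty.pem` placeholders ("impossibly tiny 0 bytes") and records every non-empty `.pem` before copying the profile certs to the node. A minimal sketch of that partitioning step, using a hypothetical `scan_certs` helper rather than minikube's actual implementation:

```python
import os

def scan_certs(cert_dir):
    """Split the .pem files in cert_dir into usable certs and empty placeholders.

    Returns (found, ignored): found is a list of (name, size) for non-empty
    certs, ignored is a list of zero-byte file names, mirroring the
    "found cert" / "ignoring ... impossibly tiny 0 bytes" log lines above.
    """
    found, ignored = [], []
    for name in sorted(os.listdir(cert_dir)):
        if not name.endswith(".pem"):
            continue
        path = os.path.join(cert_dir, name)
        if os.path.getsize(path) == 0:
            ignored.append(name)          # e.g. 3056_empty.pem
        else:
            found.append((name, os.path.getsize(path)))
    return found, ignored
```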
	* I0310 21:19:47.461568   18444 api_server.go:241] https://127.0.0.1:55180/healthz returned 500:
	* [+]ping ok
	* [+]log ok
	* [+]etcd ok
	* [+]poststarthook/start-kube-apiserver-admission-initializer ok
	* [+]poststarthook/generic-apiserver-start-informers ok
	* [+]poststarthook/priority-and-fairness-config-consumer ok
	* [+]poststarthook/priority-and-fairness-filter ok
	* [+]poststarthook/start-apiextensions-informers ok
	* [+]poststarthook/start-apiextensions-controllers ok
	* [+]poststarthook/crd-informer-synced ok
	* [+]poststarthook/bootstrap-controller ok
	* [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	* [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	* [+]poststarthook/priority-and-fairness-config-producer ok
	* [+]poststarthook/start-cluster-authentication-info-controller ok
	* [+]poststarthook/aggregator-reload-proxy-client-cert ok
	* [+]poststarthook/start-kube-aggregator-informers ok
	* [+]poststarthook/apiservice-registration-controller ok
	* [+]poststarthook/apiservice-status-available-controller ok
	* [+]poststarthook/kube-apiserver-autoregistration ok
	* [+]autoregister-completion ok
	* [+]poststarthook/apiservice-openapi-controller ok
	* healthz check failed
	* W0310 21:19:47.461952   18444 api_server.go:99] status: https://127.0.0.1:55180/healthz returned error 500:
	* [+]ping ok
	* [+]log ok
	* [+]etcd ok
	* [+]poststarthook/start-kube-apiserver-admission-initializer ok
	* [+]poststarthook/generic-apiserver-start-informers ok
	* [+]poststarthook/priority-and-fairness-config-consumer ok
	* [+]poststarthook/priority-and-fairness-filter ok
	* [+]poststarthook/start-apiextensions-informers ok
	* [+]poststarthook/start-apiextensions-controllers ok
	* [+]poststarthook/crd-informer-synced ok
	* [+]poststarthook/bootstrap-controller ok
	* [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	* [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	* [+]poststarthook/priority-and-fairness-config-producer ok
	* [+]poststarthook/start-cluster-authentication-info-controller ok
	* [+]poststarthook/aggregator-reload-proxy-client-cert ok
	* [+]poststarthook/start-kube-aggregator-informers ok
	* [+]poststarthook/apiservice-registration-controller ok
	* [+]poststarthook/apiservice-status-available-controller ok
	* [+]poststarthook/kube-apiserver-autoregistration ok
	* [+]autoregister-completion ok
	* [+]poststarthook/apiservice-openapi-controller ok
	* healthz check failed
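In the block above, minikube polls the apiserver's `/healthz` roughly twice a second, logging the full check list each time a 500 comes back; the only failing check is `[-]poststarthook/rbac/bootstrap-roles`. A rough sketch of such a poll loop, assuming a caller-supplied `fetch_healthz()` callable (e.g. a wrapper around an HTTPS GET of `https://127.0.0.1:55180/healthz`) rather than any real minikube API:

```python
import time

def wait_for_healthz(fetch_healthz, timeout=60.0, interval=0.5):
    """Poll until the healthz endpoint returns HTTP 200, or give up.

    fetch_healthz() -> (status_code, body). A 500 body lists each check
    as "[+]name ok" or "[-]name failed: ...", as in the log above.
    Returns True on success, False if the deadline passes first.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status, body = fetch_healthz()
        if status == 200:
            return True
        # Surface only the failing checks, e.g. poststarthook/rbac/bootstrap-roles.
        failing = [line for line in body.splitlines() if line.startswith("[-]")]
        print(f"healthz returned {status}, failing checks: {failing}")
        time.sleep(interval)
    return False
```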
	* I0310 21:19:47.931190   18444 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55180/healthz ...
	* I0310 21:19:48.606053   18444 api_server.go:241] https://127.0.0.1:55180/healthz returned 500:
	* [+]ping ok
	* [+]log ok
	* [+]etcd ok
	* [+]poststarthook/start-kube-apiserver-admission-initializer ok
	* [+]poststarthook/generic-apiserver-start-informers ok
	* [+]poststarthook/priority-and-fairness-config-consumer ok
	* [+]poststarthook/priority-and-fairness-filter ok
	* [+]poststarthook/start-apiextensions-informers ok
	* [+]poststarthook/start-apiextensions-controllers ok
	* [+]poststarthook/crd-informer-synced ok
	* [+]poststarthook/bootstrap-controller ok
	* [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	* [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	* [+]poststarthook/priority-and-fairness-config-producer ok
	* [+]poststarthook/start-cluster-authentication-info-controller ok
	* [+]poststarthook/aggregator-reload-proxy-client-cert ok
	* [+]poststarthook/start-kube-aggregator-informers ok
	* [+]poststarthook/apiservice-registration-controller ok
	* [+]poststarthook/apiservice-status-available-controller ok
	* [+]poststarthook/kube-apiserver-autoregistration ok
	* [+]autoregister-completion ok
	* [+]poststarthook/apiservice-openapi-controller ok
	* healthz check failed
	* W0310 21:19:48.606267   18444 api_server.go:99] status: https://127.0.0.1:55180/healthz returned error 500:
	* [+]ping ok
	* [+]log ok
	* [+]etcd ok
	* [+]poststarthook/start-kube-apiserver-admission-initializer ok
	* [+]poststarthook/generic-apiserver-start-informers ok
	* [+]poststarthook/priority-and-fairness-config-consumer ok
	* [+]poststarthook/priority-and-fairness-filter ok
	* [+]poststarthook/start-apiextensions-informers ok
	* [+]poststarthook/start-apiextensions-controllers ok
	* [+]poststarthook/crd-informer-synced ok
	* [+]poststarthook/bootstrap-controller ok
	* [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	* [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	* [+]poststarthook/priority-and-fairness-config-producer ok
	* [+]poststarthook/start-cluster-authentication-info-controller ok
	* [+]poststarthook/aggregator-reload-proxy-client-cert ok
	* [+]poststarthook/start-kube-aggregator-informers ok
	* [+]poststarthook/apiservice-registration-controller ok
	* [+]poststarthook/apiservice-status-available-controller ok
	* [+]poststarthook/kube-apiserver-autoregistration ok
	* [+]autoregister-completion ok
	* [+]poststarthook/apiservice-openapi-controller ok
	* healthz check failed
	* I0310 21:19:48.930898   18444 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55180/healthz ...
	* I0310 21:19:49.854335   18444 api_server.go:241] https://127.0.0.1:55180/healthz returned 500:
	* [+]ping ok
	* [+]log ok
	* [+]etcd ok
	* [+]poststarthook/start-kube-apiserver-admission-initializer ok
	* [+]poststarthook/generic-apiserver-start-informers ok
	* [+]poststarthook/priority-and-fairness-config-consumer ok
	* [+]poststarthook/priority-and-fairness-filter ok
	* [+]poststarthook/start-apiextensions-informers ok
	* [+]poststarthook/start-apiextensions-controllers ok
	* [+]poststarthook/crd-informer-synced ok
	* [+]poststarthook/bootstrap-controller ok
	* [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	* [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	* [+]poststarthook/priority-and-fairness-config-producer ok
	* [+]poststarthook/start-cluster-authentication-info-controller ok
	* [+]poststarthook/aggregator-reload-proxy-client-cert ok
	* [+]poststarthook/start-kube-aggregator-informers ok
	* [+]poststarthook/apiservice-registration-controller ok
	* [+]poststarthook/apiservice-status-available-controller ok
	* [+]poststarthook/kube-apiserver-autoregistration ok
	* [+]autoregister-completion ok
	* [+]poststarthook/apiservice-openapi-controller ok
	* healthz check failed
	* W0310 21:19:49.854335   18444 api_server.go:99] status: https://127.0.0.1:55180/healthz returned error 500:
	* [+]ping ok
	* [+]log ok
	* [+]etcd ok
	* [+]poststarthook/start-kube-apiserver-admission-initializer ok
	* [+]poststarthook/generic-apiserver-start-informers ok
	* [+]poststarthook/priority-and-fairness-config-consumer ok
	* [+]poststarthook/priority-and-fairness-filter ok
	* [+]poststarthook/start-apiextensions-informers ok
	* [+]poststarthook/start-apiextensions-controllers ok
	* [+]poststarthook/crd-informer-synced ok
	* [+]poststarthook/bootstrap-controller ok
	* [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	* [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	* [+]poststarthook/priority-and-fairness-config-producer ok
	* [+]poststarthook/start-cluster-authentication-info-controller ok
	* [+]poststarthook/aggregator-reload-proxy-client-cert ok
	* [+]poststarthook/start-kube-aggregator-informers ok
	* [+]poststarthook/apiservice-registration-controller ok
	* [+]poststarthook/apiservice-status-available-controller ok
	* [+]poststarthook/kube-apiserver-autoregistration ok
	* [+]autoregister-completion ok
	* [+]poststarthook/apiservice-openapi-controller ok
	* healthz check failed
	* I0310 21:19:49.931570   18444 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55180/healthz ...
	* I0310 21:19:50.490410   18444 api_server.go:241] https://127.0.0.1:55180/healthz returned 500:
	* [+]ping ok
	* [+]log ok
	* [+]etcd ok
	* [+]poststarthook/start-kube-apiserver-admission-initializer ok
	* [+]poststarthook/generic-apiserver-start-informers ok
	* [+]poststarthook/priority-and-fairness-config-consumer ok
	* [+]poststarthook/priority-and-fairness-filter ok
	* [+]poststarthook/start-apiextensions-informers ok
	* [+]poststarthook/start-apiextensions-controllers ok
	* [+]poststarthook/crd-informer-synced ok
	* [+]poststarthook/bootstrap-controller ok
	* [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	* [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	* [+]poststarthook/priority-and-fairness-config-producer ok
	* [+]poststarthook/start-cluster-authentication-info-controller ok
	* [+]poststarthook/aggregator-reload-proxy-client-cert ok
	* [+]poststarthook/start-kube-aggregator-informers ok
	* [+]poststarthook/apiservice-registration-controller ok
	* [+]poststarthook/apiservice-status-available-controller ok
	* [+]poststarthook/kube-apiserver-autoregistration ok
	* [+]autoregister-completion ok
	* [+]poststarthook/apiservice-openapi-controller ok
	* healthz check failed
	* W0310 21:19:50.490410   18444 api_server.go:99] status: https://127.0.0.1:55180/healthz returned error 500:
	* [+]ping ok
	* [+]log ok
	* [+]etcd ok
	* [+]poststarthook/start-kube-apiserver-admission-initializer ok
	* [+]poststarthook/generic-apiserver-start-informers ok
	* [+]poststarthook/priority-and-fairness-config-consumer ok
	* [+]poststarthook/priority-and-fairness-filter ok
	* [+]poststarthook/start-apiextensions-informers ok
	* [+]poststarthook/start-apiextensions-controllers ok
	* [+]poststarthook/crd-informer-synced ok
	* [+]poststarthook/bootstrap-controller ok
	* [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	* [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	* [+]poststarthook/priority-and-fairness-config-producer ok
	* [+]poststarthook/start-cluster-authentication-info-controller ok
	* [+]poststarthook/aggregator-reload-proxy-client-cert ok
	* [+]poststarthook/start-kube-aggregator-informers ok
	* [+]poststarthook/apiservice-registration-controller ok
	* [+]poststarthook/apiservice-status-available-controller ok
	* [+]poststarthook/kube-apiserver-autoregistration ok
	* [+]autoregister-completion ok
	* [+]poststarthook/apiservice-openapi-controller ok
	* healthz check failed
	* I0310 21:19:50.931009   18444 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55180/healthz ...
	* I0310 21:19:51.332886   18444 api_server.go:241] https://127.0.0.1:55180/healthz returned 500:
	* [+]ping ok
	* [+]log ok
	* [+]etcd ok
	* [+]poststarthook/start-kube-apiserver-admission-initializer ok
	* [+]poststarthook/generic-apiserver-start-informers ok
	* [+]poststarthook/priority-and-fairness-config-consumer ok
	* [+]poststarthook/priority-and-fairness-filter ok
	* [+]poststarthook/start-apiextensions-informers ok
	* [+]poststarthook/start-apiextensions-controllers ok
	* [+]poststarthook/crd-informer-synced ok
	* [+]poststarthook/bootstrap-controller ok
	* [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	* [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	* [+]poststarthook/priority-and-fairness-config-producer ok
	* [+]poststarthook/start-cluster-authentication-info-controller ok
	* [+]poststarthook/aggregator-reload-proxy-client-cert ok
	* [+]poststarthook/start-kube-aggregator-informers ok
	* [+]poststarthook/apiservice-registration-controller ok
	* [+]poststarthook/apiservice-status-available-controller ok
	* [+]poststarthook/kube-apiserver-autoregistration ok
	* [+]autoregister-completion ok
	* [+]poststarthook/apiservice-openapi-controller ok
	* healthz check failed
	* W0310 21:19:51.332886   18444 api_server.go:99] status: https://127.0.0.1:55180/healthz returned error 500:
	* [+]ping ok
	* [+]log ok
	* [+]etcd ok
	* [+]poststarthook/start-kube-apiserver-admission-initializer ok
	* [+]poststarthook/generic-apiserver-start-informers ok
	* [+]poststarthook/priority-and-fairness-config-consumer ok
	* [+]poststarthook/priority-and-fairness-filter ok
	* [+]poststarthook/start-apiextensions-informers ok
	* [+]poststarthook/start-apiextensions-controllers ok
	* [+]poststarthook/crd-informer-synced ok
	* [+]poststarthook/bootstrap-controller ok
	* [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	* [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	* [+]poststarthook/priority-and-fairness-config-producer ok
	* [+]poststarthook/start-cluster-authentication-info-controller ok
	* [+]poststarthook/aggregator-reload-proxy-client-cert ok
	* [+]poststarthook/start-kube-aggregator-informers ok
	* [+]poststarthook/apiservice-registration-controller ok
	* [+]poststarthook/apiservice-status-available-controller ok
	* [+]poststarthook/kube-apiserver-autoregistration ok
	* [+]autoregister-completion ok
	* [+]poststarthook/apiservice-openapi-controller ok
	* healthz check failed
	* I0310 21:19:51.430742   18444 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55180/healthz ...
	* I0310 21:19:46.882800   13364 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:19:46.883929   13364 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55203 <nil> <nil>}
	* I0310 21:19:46.883929   13364 main.go:121] libmachine: About to run SSH command:
	* sudo hostname custom-weave-20210310211916-6496 && echo "custom-weave-20210310211916-6496" | sudo tee /etc/hostname
	* I0310 21:19:47.997393   13364 main.go:121] libmachine: SSH cmd err, output: <nil>: custom-weave-20210310211916-6496
	* 
	* I0310 21:19:48.009317   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	* I0310 21:19:48.742115   13364 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:19:48.743072   13364 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55203 <nil> <nil>}
	* I0310 21:19:48.743072   13364 main.go:121] libmachine: About to run SSH command:
	* 
	* 		if ! grep -xq '.*\scustom-weave-20210310211916-6496' /etc/hosts; then
	* 			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	* 				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-weave-20210310211916-6496/g' /etc/hosts;
	* 			else 
	* 				echo '127.0.1.1 custom-weave-20210310211916-6496' | sudo tee -a /etc/hosts; 
	* 			fi
	* 		fi
	* I0310 21:19:49.600213   13364 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	* I0310 21:19:49.600213   13364 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	* I0310 21:19:49.600213   13364 ubuntu.go:177] setting up certificates
	* I0310 21:19:49.600213   13364 provision.go:83] configureAuth start
	* I0310 21:19:49.615307   13364 cli_runner.go:115] Run: docker container inspect -f "" custom-weave-20210310211916-6496
	* I0310 21:19:50.312485   13364 provision.go:137] copyHostCerts
	* I0310 21:19:50.313336   13364 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	* I0310 21:19:50.313518   13364 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	* I0310 21:19:50.313955   13364 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	* I0310 21:19:50.317952   13364 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	* I0310 21:19:50.318097   13364 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	* I0310 21:19:50.318897   13364 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	* I0310 21:19:50.324526   13364 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	* I0310 21:19:50.324526   13364 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	* I0310 21:19:50.324869   13364 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	* I0310 21:19:50.326695   13364 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.custom-weave-20210310211916-6496 san=[172.17.0.3 127.0.0.1 localhost 127.0.0.1 minikube custom-weave-20210310211916-6496]
	* I0310 21:19:50.473545   13364 provision.go:165] copyRemoteCerts
	* I0310 21:19:50.491383   13364 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	* I0310 21:19:50.498028   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	* I0310 21:19:51.157275   13364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55203 SSHKeyPath:C:\Users\jenkins\.minikube\machines\custom-weave-20210310211916-6496\id_rsa Username:docker}
	* I0310 21:19:51.810069   13364 ssh_runner.go:189] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.3186869s)
	* I0310 21:19:51.810747   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1269 bytes)
	* I0310 21:19:51.117553   16712 ssh_runner.go:189] Completed: sudo systemctl restart docker: (5.9092396s)
	* I0310 21:19:51.128643   16712 ssh_runner.go:149] Run: docker images --format :
	* I0310 21:19:52.330660   16712 ssh_runner.go:189] Completed: docker images --format :: (1.2018301s)
	* I0310 21:19:52.330660   16712 docker.go:423] Got preloaded images: -- stdout --
	* k8s.gcr.io/kube-proxy:v1.20.2
	* k8s.gcr.io/kube-controller-manager:v1.20.2
	* k8s.gcr.io/kube-apiserver:v1.20.2
	* k8s.gcr.io/kube-scheduler:v1.20.2
	* kubernetesui/dashboard:v2.1.0
	* gcr.io/k8s-minikube/storage-provisioner:v4
	* k8s.gcr.io/etcd:3.4.13-0
	* k8s.gcr.io/coredns:1.7.0
	* kubernetesui/metrics-scraper:v1.0.4
	* k8s.gcr.io/pause:3.2
	* 
	* -- /stdout --
	* I0310 21:19:52.330660   16712 cache_images.go:73] Images are preloaded, skipping loading
	* I0310 21:19:52.337877   16712 ssh_runner.go:149] Run: docker info --format 
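The "Images are preloaded, skipping loading" decision above comes from comparing the images already in the container runtime against the expected set for this Kubernetes version. A simplified sketch of that comparison (the required list here is copied from the log output, but the function itself is illustrative, not minikube's cache_images.go logic):

```python
# Subset of the preloaded-image list seen in the log above.
REQUIRED = [
    "k8s.gcr.io/kube-apiserver:v1.20.2",
    "k8s.gcr.io/kube-proxy:v1.20.2",
    "k8s.gcr.io/etcd:3.4.13-0",
    "k8s.gcr.io/coredns:1.7.0",
    "k8s.gcr.io/pause:3.2",
]

def images_preloaded(present, required=REQUIRED):
    """True when every required repository:tag is already in the runtime.

    `present` is the list of images reported by the runtime; loading can
    be skipped only when nothing from `required` is missing.
    """
    missing = set(required) - set(present)
    return not missing
```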
	* I0310 21:19:51.492590    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	* I0310 21:19:51.866394    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	* I0310 21:19:52.120932    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	* I0310 21:19:52.398978    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	* I0310 21:19:52.665642    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4944.pem --> /usr/share/ca-certificates/4944.pem (1338 bytes)
	* I0310 21:19:52.883350    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5372.pem --> /usr/share/ca-certificates/5372.pem (1338 bytes)
	* I0310 21:19:53.332245    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6856.pem --> /usr/share/ca-certificates/6856.pem (1338 bytes)
	* I0310 21:19:53.565653    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4052.pem --> /usr/share/ca-certificates/4052.pem (1338 bytes)
	* I0310 21:19:53.899134    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8748.pem --> /usr/share/ca-certificates/8748.pem (1338 bytes)
	* I0310 21:19:54.193431    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6692.pem --> /usr/share/ca-certificates/6692.pem (1338 bytes)
	* I0310 21:19:54.648821    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8464.pem --> /usr/share/ca-certificates/8464.pem (1338 bytes)
	* I0310 21:19:55.076049    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6368.pem --> /usr/share/ca-certificates/6368.pem (1338 bytes)
	* I0310 21:19:55.446863    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1156.pem --> /usr/share/ca-certificates/1156.pem (1338 bytes)
	* I0310 21:19:55.847422    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\2512.pem --> /usr/share/ca-certificates/2512.pem (1338 bytes)
	* I0310 21:19:56.224809    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\352.pem --> /usr/share/ca-certificates/352.pem (1338 bytes)
	* I0310 21:19:51.954994   18444 api_server.go:241] https://127.0.0.1:55180/healthz returned 500:
	* [+]ping ok
	* [+]log ok
	* [+]etcd ok
	* [+]poststarthook/start-kube-apiserver-admission-initializer ok
	* [+]poststarthook/generic-apiserver-start-informers ok
	* [+]poststarthook/priority-and-fairness-config-consumer ok
	* [+]poststarthook/priority-and-fairness-filter ok
	* [+]poststarthook/start-apiextensions-informers ok
	* [+]poststarthook/start-apiextensions-controllers ok
	* [+]poststarthook/crd-informer-synced ok
	* [+]poststarthook/bootstrap-controller ok
	* [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	* [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	* [+]poststarthook/priority-and-fairness-config-producer ok
	* [+]poststarthook/start-cluster-authentication-info-controller ok
	* [+]poststarthook/aggregator-reload-proxy-client-cert ok
	* [+]poststarthook/start-kube-aggregator-informers ok
	* [+]poststarthook/apiservice-registration-controller ok
	* [+]poststarthook/apiservice-status-available-controller ok
	* [+]poststarthook/kube-apiserver-autoregistration ok
	* [+]autoregister-completion ok
	* [+]poststarthook/apiservice-openapi-controller ok
	* healthz check failed
	* W0310 21:19:51.955684   18444 api_server.go:99] status: https://127.0.0.1:55180/healthz returned error 500:
	* [+]ping ok
	* [+]log ok
	* [+]etcd ok
	* [+]poststarthook/start-kube-apiserver-admission-initializer ok
	* [+]poststarthook/generic-apiserver-start-informers ok
	* [+]poststarthook/priority-and-fairness-config-consumer ok
	* [+]poststarthook/priority-and-fairness-filter ok
	* [+]poststarthook/start-apiextensions-informers ok
	* [+]poststarthook/start-apiextensions-controllers ok
	* [+]poststarthook/crd-informer-synced ok
	* [+]poststarthook/bootstrap-controller ok
	* [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	* [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	* [+]poststarthook/priority-and-fairness-config-producer ok
	* [+]poststarthook/start-cluster-authentication-info-controller ok
	* [+]poststarthook/aggregator-reload-proxy-client-cert ok
	* [+]poststarthook/start-kube-aggregator-informers ok
	* [+]poststarthook/apiservice-registration-controller ok
	* [+]poststarthook/apiservice-status-available-controller ok
	* [+]poststarthook/kube-apiserver-autoregistration ok
	* [+]autoregister-completion ok
	* [+]poststarthook/apiservice-openapi-controller ok
	* healthz check failed
	* I0310 21:19:52.441912   18444 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55180/healthz ...
	* I0310 21:19:53.025872   18444 api_server.go:241] https://127.0.0.1:55180/healthz returned 500:
	* [+]ping ok
	* [+]log ok
	* [+]etcd ok
	* [+]poststarthook/start-kube-apiserver-admission-initializer ok
	* [+]poststarthook/generic-apiserver-start-informers ok
	* [+]poststarthook/priority-and-fairness-config-consumer ok
	* [+]poststarthook/priority-and-fairness-filter ok
	* [+]poststarthook/start-apiextensions-informers ok
	* [+]poststarthook/start-apiextensions-controllers ok
	* [+]poststarthook/crd-informer-synced ok
	* [+]poststarthook/bootstrap-controller ok
	* [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	* [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	* [+]poststarthook/priority-and-fairness-config-producer ok
	* [+]poststarthook/start-cluster-authentication-info-controller ok
	* [+]poststarthook/aggregator-reload-proxy-client-cert ok
	* [+]poststarthook/start-kube-aggregator-informers ok
	* [+]poststarthook/apiservice-registration-controller ok
	* [+]poststarthook/apiservice-status-available-controller ok
	* [+]poststarthook/kube-apiserver-autoregistration ok
	* [+]autoregister-completion ok
	* [+]poststarthook/apiservice-openapi-controller ok
	* healthz check failed
	* W0310 21:19:53.026908   18444 api_server.go:99] status: https://127.0.0.1:55180/healthz returned error 500:
	* [+]ping ok
	* [+]log ok
	* [+]etcd ok
	* [+]poststarthook/start-kube-apiserver-admission-initializer ok
	* [+]poststarthook/generic-apiserver-start-informers ok
	* [+]poststarthook/priority-and-fairness-config-consumer ok
	* [+]poststarthook/priority-and-fairness-filter ok
	* [+]poststarthook/start-apiextensions-informers ok
	* [+]poststarthook/start-apiextensions-controllers ok
	* [+]poststarthook/crd-informer-synced ok
	* [+]poststarthook/bootstrap-controller ok
	* [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	* [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	* [+]poststarthook/priority-and-fairness-config-producer ok
	* [+]poststarthook/start-cluster-authentication-info-controller ok
	* [+]poststarthook/aggregator-reload-proxy-client-cert ok
	* [+]poststarthook/start-kube-aggregator-informers ok
	* [+]poststarthook/apiservice-registration-controller ok
	* [+]poststarthook/apiservice-status-available-controller ok
	* [+]poststarthook/kube-apiserver-autoregistration ok
	* [+]autoregister-completion ok
	* [+]poststarthook/apiservice-openapi-controller ok
	* healthz check failed
	* I0310 21:19:53.431176   18444 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55180/healthz ...
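The repeated blocks above come from minikube polling the apiserver's `/healthz` endpoint until it stops returning 500 (each 500 response lists which post-start hooks are still pending). A minimal sketch of that retry pattern, with a toy in-process HTTP server standing in for the apiserver (all names here are illustrative, not minikube's actual code):

```python
# Sketch of a bounded /healthz retry loop, as seen in the log above.
# The toy server returns 500 twice, then 200, mimicking an apiserver
# whose post-start hooks finish after a couple of probes.
import threading
import urllib.request
import urllib.error
from http.server import BaseHTTPRequestHandler, HTTPServer

hits = {"n": 0}

class Healthz(BaseHTTPRequestHandler):
    def do_GET(self):
        hits["n"] += 1
        code = 500 if hits["n"] < 3 else 200   # fail twice, then recover
        body = b"ok" if code == 200 else b"healthz check failed"
        self.send_response(code)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):              # silence per-request logging
        pass

srv = HTTPServer(("127.0.0.1", 0), Healthz)
threading.Thread(target=srv.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/healthz" % srv.server_port

status = None
for attempt in range(10):                      # bounded retries, like the log
    try:
        with urllib.request.urlopen(url) as resp:
            status = resp.status
    except urllib.error.HTTPError as err:      # urllib raises on HTTP 500
        status = err.code
    if status == 200:
        break

srv.shutdown()
print(status)
```

Once the endpoint returns 200 the loop exits, which corresponds to the point in a passing run where minikube proceeds past the apiserver health check.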
	* I0310 21:19:52.426896   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	* I0310 21:19:52.758745   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	* I0310 21:19:53.246446   13364 provision.go:86] duration metric: configureAuth took 3.6462374s
	* I0310 21:19:53.246968   13364 ubuntu.go:193] setting minikube options for container-runtime
	* I0310 21:19:53.259429   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	* I0310 21:19:53.877314   13364 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:19:53.877988   13364 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55203 <nil> <nil>}
	* I0310 21:19:53.878264   13364 main.go:121] libmachine: About to run SSH command:
	* df --output=fstype / | tail -n 1
	* I0310 21:19:54.493154   13364 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	* 
	* I0310 21:19:54.493154   13364 ubuntu.go:71] root file system type: overlay
	* I0310 21:19:54.493392   13364 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	* I0310 21:19:54.503924   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	* I0310 21:19:55.120891   13364 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:19:55.120891   13364 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55203 <nil> <nil>}
	* I0310 21:19:55.120891   13364 main.go:121] libmachine: About to run SSH command:
	* sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP \$MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* " | sudo tee /lib/systemd/system/docker.service.new
	* I0310 21:19:55.747601   13364 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP $MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* 
	* I0310 21:19:55.757611   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	* I0310 21:19:56.382177   13364 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:19:56.383184   13364 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55203 <nil> <nil>}
	* I0310 21:19:56.383184   13364 main.go:121] libmachine: About to run SSH command:
	* sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
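The `diff ... || { mv ...; systemctl ...; }` command above is an idempotent-update pattern: the freshly rendered `docker.service.new` is compared against the live unit, and the move plus `daemon-reload`/`restart` only happen when the content actually differs. A minimal sketch of the same pattern (file names and contents here are stand-ins, not minikube's paths):

```python
# Sketch of the "replace only if changed" unit-update pattern from the log.
# Hypothetical temp files stand in for /lib/systemd/system/docker.service.
import filecmp
import os
import shutil
import tempfile

workdir = tempfile.mkdtemp()
current = os.path.join(workdir, "docker.service")
candidate = os.path.join(workdir, "docker.service.new")

unit = "ExecStart=/usr/bin/dockerd\n"
with open(current, "w") as f:
    f.write(unit)
with open(candidate, "w") as f:
    f.write(unit)                    # identical content: no restart needed

if filecmp.cmp(current, candidate, shallow=False):
    result = "unchanged"             # log's diff succeeded; restart skipped
else:
    shutil.move(candidate, current)  # minikube would then daemon-reload/restart
    result = "replaced"

print(result)
```

Skipping the restart when nothing changed matters here because a needless `systemctl restart docker` would kill every running container on the node.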
	* I0310 21:19:54.368837   16712 ssh_runner.go:189] Completed: docker info --format : (2.0309624s)
	* I0310 21:19:54.369562   16712 cni.go:74] Creating CNI manager for "calico"
	* I0310 21:19:54.369855   16712 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	* I0310 21:19:54.369855   16712 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.6 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20210310211603-6496 NodeName:calico-20210310211603-6496 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	* I0310 21:19:54.369855   16712 kubeadm.go:154] kubeadm config:
	* apiVersion: kubeadm.k8s.io/v1beta2
	* kind: InitConfiguration
	* localAPIEndpoint:
	*   advertiseAddress: 172.17.0.6
	*   bindPort: 8443
	* bootstrapTokens:
	*   - groups:
	*       - system:bootstrappers:kubeadm:default-node-token
	*     ttl: 24h0m0s
	*     usages:
	*       - signing
	*       - authentication
	* nodeRegistration:
	*   criSocket: /var/run/dockershim.sock
	*   name: "calico-20210310211603-6496"
	*   kubeletExtraArgs:
	*     node-ip: 172.17.0.6
	*   taints: []
	* ---
	* apiVersion: kubeadm.k8s.io/v1beta2
	* kind: ClusterConfiguration
	* apiServer:
	*   certSANs: ["127.0.0.1", "localhost", "172.17.0.6"]
	*   extraArgs:
	*     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	* controllerManager:
	*   extraArgs:
	*     allocate-node-cidrs: "true"
	*     leader-elect: "false"
	* scheduler:
	*   extraArgs:
	*     leader-elect: "false"
	* certificatesDir: /var/lib/minikube/certs
	* clusterName: mk
	* controlPlaneEndpoint: control-plane.minikube.internal:8443
	* dns:
	*   type: CoreDNS
	* etcd:
	*   local:
	*     dataDir: /var/lib/minikube/etcd
	*     extraArgs:
	*       proxy-refresh-interval: "70000"
	* kubernetesVersion: v1.20.2
	* networking:
	*   dnsDomain: cluster.local
	*   podSubnet: "10.244.0.0/16"
	*   serviceSubnet: 10.96.0.0/12
	* ---
	* apiVersion: kubelet.config.k8s.io/v1beta1
	* kind: KubeletConfiguration
	* authentication:
	*   x509:
	*     clientCAFile: /var/lib/minikube/certs/ca.crt
	* cgroupDriver: cgroupfs
	* clusterDomain: "cluster.local"
	* # disable disk resource management by default
	* imageGCHighThresholdPercent: 100
	* evictionHard:
	*   nodefs.available: "0%"
	*   nodefs.inodesFree: "0%"
	*   imagefs.available: "0%"
	* failSwapOn: false
	* staticPodPath: /etc/kubernetes/manifests
	* ---
	* apiVersion: kubeproxy.config.k8s.io/v1alpha1
	* kind: KubeProxyConfiguration
	* clusterCIDR: "10.244.0.0/16"
	* metricsBindAddress: 0.0.0.0:10249
	* 
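The generated kubeadm config above carves the cluster's address space into a pod subnet (`10.244.0.0/16`) and a service subnet (`10.96.0.0/12`); these must not overlap, or pod and service routing collide. A quick way to sanity-check such a split with the standard library (the CIDRs are taken from the config above; everything else is illustrative):

```python
# Verify that the pod and service CIDRs from the kubeadm config above
# are disjoint, using the stdlib ipaddress module.
import ipaddress

pod_subnet = ipaddress.ip_network("10.244.0.0/16")     # podSubnet
service_subnet = ipaddress.ip_network("10.96.0.0/12")  # serviceSubnet

# overlaps() is True if the two networks share any addresses.
print(pod_subnet.overlaps(service_subnet))
```

Here `10.96.0.0/12` covers `10.96.0.0` through `10.111.255.255`, which `10.244.0.0/16` sits entirely outside of, so the check prints `False`.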
	* I0310 21:19:54.370719   16712 kubeadm.go:919] kubelet [Unit]
	* Wants=docker.socket
	* 
	* [Service]
	* ExecStart=
	* ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=calico-20210310211603-6496 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=172.17.0.6
	* 
	* [Install]
	*  config:
	* {KubernetesVersion:v1.20.2 ClusterName:calico-20210310211603-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	* I0310 21:19:54.379253   16712 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
	* I0310 21:19:54.436531   16712 binaries.go:44] Found k8s binaries, skipping transfer
	* I0310 21:19:54.445942   16712 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	* I0310 21:19:54.518935   16712 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	* I0310 21:19:54.706292   16712 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	* I0310 21:19:54.958047   16712 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1852 bytes)
	* I0310 21:19:55.289581   16712 ssh_runner.go:149] Run: grep 172.17.0.6	control-plane.minikube.internal$ /etc/hosts
	* I0310 21:19:55.327265   16712 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "172.17.0.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	* I0310 21:19:55.495711   16712 certs.go:52] Setting up C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496 for IP: 172.17.0.6
	* I0310 21:19:55.496214   16712 certs.go:171] skipping minikubeCA CA generation: C:\Users\jenkins\.minikube\ca.key
	* I0310 21:19:55.496509   16712 certs.go:171] skipping proxyClientCA CA generation: C:\Users\jenkins\.minikube\proxy-client-ca.key
	* I0310 21:19:55.497355   16712 certs.go:275] skipping minikube-user signed cert generation: C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\client.key
	* I0310 21:19:55.497508   16712 certs.go:279] generating minikube signed cert: C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\apiserver.key.76cb2290
	* I0310 21:19:55.497694   16712 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\apiserver.crt.76cb2290 with IP's: [172.17.0.6 10.96.0.1 127.0.0.1 10.0.0.1]
	* I0310 21:19:55.792262   16712 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\apiserver.crt.76cb2290 ...
	* I0310 21:19:55.792262   16712 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\apiserver.crt.76cb2290: {Name:mke5a5e76e2d0405f71701af873c36a9bc85f9d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 21:19:55.820149   16712 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\apiserver.key.76cb2290 ...
	* I0310 21:19:55.820604   16712 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\apiserver.key.76cb2290: {Name:mkd60630b72c66392c81449169044de0e81f328e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 21:19:55.837189   16712 certs.go:290] copying C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\apiserver.crt.76cb2290 -> C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\apiserver.crt
	* I0310 21:19:55.852663   16712 certs.go:294] copying C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\apiserver.key.76cb2290 -> C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\apiserver.key
	* I0310 21:19:55.872948   16712 certs.go:279] generating aggregator signed cert: C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\proxy-client.key
	* I0310 21:19:55.873173   16712 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\proxy-client.crt with IP's: []
	* I0310 21:19:56.036709   16712 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\proxy-client.crt ...
	* I0310 21:19:56.036709   16712 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\proxy-client.crt: {Name:mk47db70217d54bbafe7f833523f7957f5855eb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 21:19:56.051627   16712 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\proxy-client.key ...
	* I0310 21:19:56.051627   16712 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\proxy-client.key: {Name:mk6d09b755ec249d4adc9ea1ab0c1b7b92be716b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 21:19:56.066600   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992.pem (1338 bytes)
	* W0310 21:19:56.066600   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.066600   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140.pem (1338 bytes)
	* W0310 21:19:56.066600   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.066600   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156.pem (1338 bytes)
	* W0310 21:19:56.066600   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.066600   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056.pem (1338 bytes)
	* W0310 21:19:56.072018   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.072018   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476.pem (1338 bytes)
	* W0310 21:19:56.072018   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.072743   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728.pem (1338 bytes)
	* W0310 21:19:56.072743   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.072743   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984.pem (1338 bytes)
	* W0310 21:19:56.073655   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.073655   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232.pem (1338 bytes)
	* W0310 21:19:56.073655   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.073655   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512.pem (1338 bytes)
	* W0310 21:19:56.073655   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.074590   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056.pem (1338 bytes)
	* W0310 21:19:56.074998   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.074998   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516.pem (1338 bytes)
	* W0310 21:19:56.074998   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.075596   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352.pem (1338 bytes)
	* W0310 21:19:56.075596   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.075596   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920.pem (1338 bytes)
	* W0310 21:19:56.075596   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.075596   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052.pem (1338 bytes)
	* W0310 21:19:56.076597   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.076597   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452.pem (1338 bytes)
	* W0310 21:19:56.076597   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.077326   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588.pem (1338 bytes)
	* W0310 21:19:56.077609   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.077609   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944.pem (1338 bytes)
	* W0310 21:19:56.077609   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.077609   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040.pem (1338 bytes)
	* W0310 21:19:56.078598   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.078598   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172.pem (1338 bytes)
	* W0310 21:19:56.078598   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.078598   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372.pem (1338 bytes)
	* W0310 21:19:56.078598   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.078598   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396.pem (1338 bytes)
	* W0310 21:19:56.079715   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.079715   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700.pem (1338 bytes)
	* W0310 21:19:56.079715   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.079715   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736.pem (1338 bytes)
	* W0310 21:19:56.080598   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.080598   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368.pem (1338 bytes)
	* W0310 21:19:56.080598   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.080598   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492.pem (1338 bytes)
	* W0310 21:19:56.080598   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.080598   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496.pem (1338 bytes)
	* W0310 21:19:56.081886   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.081886   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552.pem (1338 bytes)
	* W0310 21:19:56.082598   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.082598   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692.pem (1338 bytes)
	* W0310 21:19:56.082598   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.082598   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856.pem (1338 bytes)
	* W0310 21:19:56.082598   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.083592   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024.pem (1338 bytes)
	* W0310 21:19:56.083592   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.083592   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160.pem (1338 bytes)
	* W0310 21:19:56.083592   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.083592   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432.pem (1338 bytes)
	* W0310 21:19:56.084614   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.084614   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440.pem (1338 bytes)
	* W0310 21:19:56.084614   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.084614   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452.pem (1338 bytes)
	* W0310 21:19:56.085591   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.085591   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800.pem (1338 bytes)
	* W0310 21:19:56.085591   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.085591   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464.pem (1338 bytes)
	* W0310 21:19:56.085591   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.085591   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748.pem (1338 bytes)
	* W0310 21:19:56.086826   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.086826   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088.pem (1338 bytes)
	* W0310 21:19:56.086826   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.086826   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520.pem (1338 bytes)
	* W0310 21:19:56.087599   16712 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520_empty.pem, impossibly tiny 0 bytes
	* I0310 21:19:56.087599   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca-key.pem (1679 bytes)
	* I0310 21:19:56.087599   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca.pem (1078 bytes)
	* I0310 21:19:56.087599   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\cert.pem (1123 bytes)
	* I0310 21:19:56.088591   16712 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\key.pem (1679 bytes)
	* I0310 21:19:56.095601   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	* I0310 21:19:56.279938   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	* I0310 21:19:56.509637   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	* I0310 21:19:56.683385   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\calico-20210310211603-6496\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	* I0310 21:19:56.904608   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	* I0310 21:19:57.152633   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	* I0310 21:19:57.381990   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	* I0310 21:19:57.881175   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	* I0310 21:19:58.136733   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3056.pem --> /usr/share/ca-certificates/3056.pem (1338 bytes)
	* I0310 21:19:58.386545   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1140.pem --> /usr/share/ca-certificates/1140.pem (1338 bytes)
	* I0310 21:19:58.655152   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7024.pem --> /usr/share/ca-certificates/7024.pem (1338 bytes)
	* I0310 21:19:56.725130    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\12056.pem --> /usr/share/ca-certificates/12056.pem (1338 bytes)
	* I0310 21:19:56.988343    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5736.pem --> /usr/share/ca-certificates/5736.pem (1338 bytes)
	* I0310 21:19:57.179157    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5172.pem --> /usr/share/ca-certificates/5172.pem (1338 bytes)
	* I0310 21:19:57.821432    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\232.pem --> /usr/share/ca-certificates/232.pem (1338 bytes)
	* I0310 21:19:58.012653    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6552.pem --> /usr/share/ca-certificates/6552.pem (1338 bytes)
	* I0310 21:19:58.446641    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1140.pem --> /usr/share/ca-certificates/1140.pem (1338 bytes)
	* I0310 21:19:58.619909    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5700.pem --> /usr/share/ca-certificates/5700.pem (1338 bytes)
	* I0310 21:19:59.040186    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4452.pem --> /usr/share/ca-certificates/4452.pem (1338 bytes)
	* I0310 21:19:59.381072    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9520.pem --> /usr/share/ca-certificates/9520.pem (1338 bytes)
	* I0310 21:19:59.674841    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5396.pem --> /usr/share/ca-certificates/5396.pem (1338 bytes)
	* I0310 21:19:59.993784    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3920.pem --> /usr/share/ca-certificates/3920.pem (1338 bytes)
	* I0310 21:20:00.181038    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1728.pem --> /usr/share/ca-certificates/1728.pem (1338 bytes)
	* I0310 21:20:00.423311    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1984.pem --> /usr/share/ca-certificates/1984.pem (1338 bytes)
	* I0310 21:20:00.549863    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7432.pem --> /usr/share/ca-certificates/7432.pem (1338 bytes)
	* I0310 21:20:00.753493    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9088.pem --> /usr/share/ca-certificates/9088.pem (1338 bytes)
	* I0310 21:20:00.979257    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4588.pem --> /usr/share/ca-certificates/4588.pem (1338 bytes)
	* I0310 21:20:01.198595    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5040.pem --> /usr/share/ca-certificates/5040.pem (1338 bytes)
	* I0310 21:19:56.691012   18444 api_server.go:241] https://127.0.0.1:55180/healthz returned 500:
	* [+]ping ok
	* [+]log ok
	* [+]etcd ok
	* [+]poststarthook/start-kube-apiserver-admission-initializer ok
	* [+]poststarthook/generic-apiserver-start-informers ok
	* [+]poststarthook/priority-and-fairness-config-consumer ok
	* [+]poststarthook/priority-and-fairness-filter ok
	* [+]poststarthook/start-apiextensions-informers ok
	* [+]poststarthook/start-apiextensions-controllers ok
	* [+]poststarthook/crd-informer-synced ok
	* [+]poststarthook/bootstrap-controller ok
	* [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	* [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	* [+]poststarthook/priority-and-fairness-config-producer ok
	* [+]poststarthook/start-cluster-authentication-info-controller ok
	* [+]poststarthook/aggregator-reload-proxy-client-cert ok
	* [+]poststarthook/start-kube-aggregator-informers ok
	* [+]poststarthook/apiservice-registration-controller ok
	* [+]poststarthook/apiservice-status-available-controller ok
	* [+]poststarthook/kube-apiserver-autoregistration ok
	* [+]autoregister-completion ok
	* [+]poststarthook/apiservice-openapi-controller ok
	* healthz check failed
	* W0310 21:19:56.691012   18444 api_server.go:99] status: https://127.0.0.1:55180/healthz returned error 500:
	* [+]ping ok
	* [+]log ok
	* [+]etcd ok
	* [+]poststarthook/start-kube-apiserver-admission-initializer ok
	* [+]poststarthook/generic-apiserver-start-informers ok
	* [+]poststarthook/priority-and-fairness-config-consumer ok
	* [+]poststarthook/priority-and-fairness-filter ok
	* [+]poststarthook/start-apiextensions-informers ok
	* [+]poststarthook/start-apiextensions-controllers ok
	* [+]poststarthook/crd-informer-synced ok
	* [+]poststarthook/bootstrap-controller ok
	* [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	* [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	* [+]poststarthook/priority-and-fairness-config-producer ok
	* [+]poststarthook/start-cluster-authentication-info-controller ok
	* [+]poststarthook/aggregator-reload-proxy-client-cert ok
	* [+]poststarthook/start-kube-aggregator-informers ok
	* [+]poststarthook/apiservice-registration-controller ok
	* [+]poststarthook/apiservice-status-available-controller ok
	* [+]poststarthook/kube-apiserver-autoregistration ok
	* [+]autoregister-completion ok
	* [+]poststarthook/apiservice-openapi-controller ok
	* healthz check failed
	* I0310 21:19:56.930703   18444 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55180/healthz ...
	* I0310 21:19:57.986508   18444 api_server.go:241] https://127.0.0.1:55180/healthz returned 500:
	* [+]ping ok
	* [+]log ok
	* [+]etcd ok
	* [+]poststarthook/start-kube-apiserver-admission-initializer ok
	* [+]poststarthook/generic-apiserver-start-informers ok
	* [+]poststarthook/priority-and-fairness-config-consumer ok
	* [+]poststarthook/priority-and-fairness-filter ok
	* [+]poststarthook/start-apiextensions-informers ok
	* [+]poststarthook/start-apiextensions-controllers ok
	* [+]poststarthook/crd-informer-synced ok
	* [+]poststarthook/bootstrap-controller ok
	* [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	* [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	* [+]poststarthook/priority-and-fairness-config-producer ok
	* [+]poststarthook/start-cluster-authentication-info-controller ok
	* [+]poststarthook/aggregator-reload-proxy-client-cert ok
	* [+]poststarthook/start-kube-aggregator-informers ok
	* [+]poststarthook/apiservice-registration-controller ok
	* [+]poststarthook/apiservice-status-available-controller ok
	* [+]poststarthook/kube-apiserver-autoregistration ok
	* [+]autoregister-completion ok
	* [+]poststarthook/apiservice-openapi-controller ok
	* healthz check failed
	* W0310 21:19:57.986508   18444 api_server.go:99] status: https://127.0.0.1:55180/healthz returned error 500:
	* [+]ping ok
	* [+]log ok
	* [+]etcd ok
	* [+]poststarthook/start-kube-apiserver-admission-initializer ok
	* [+]poststarthook/generic-apiserver-start-informers ok
	* [+]poststarthook/priority-and-fairness-config-consumer ok
	* [+]poststarthook/priority-and-fairness-filter ok
	* [+]poststarthook/start-apiextensions-informers ok
	* [+]poststarthook/start-apiextensions-controllers ok
	* [+]poststarthook/crd-informer-synced ok
	* [+]poststarthook/bootstrap-controller ok
	* [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	* [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	* [+]poststarthook/priority-and-fairness-config-producer ok
	* [+]poststarthook/start-cluster-authentication-info-controller ok
	* [+]poststarthook/aggregator-reload-proxy-client-cert ok
	* [+]poststarthook/start-kube-aggregator-informers ok
	* [+]poststarthook/apiservice-registration-controller ok
	* [+]poststarthook/apiservice-status-available-controller ok
	* [+]poststarthook/kube-apiserver-autoregistration ok
	* [+]autoregister-completion ok
	* [+]poststarthook/apiservice-openapi-controller ok
	* healthz check failed
	* I0310 21:19:58.432704   18444 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55180/healthz ...
	* I0310 21:19:59.444802   18444 api_server.go:241] https://127.0.0.1:55180/healthz returned 200:
	* ok
	* I0310 21:19:59.542663   18444 api_server.go:137] control plane version: v1.20.2
	* I0310 21:19:59.542663   18444 api_server.go:127] duration metric: took 19.7690093s to wait for apiserver health ...
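	* 
	* The repeated 500s followed by a single 200 above are minikube polling the apiserver's /healthz endpoint until it reports ready. A minimal sketch of that retry pattern (the callable, timeout, and interval here are illustrative placeholders, not minikube's actual implementation in api_server.go):

```python
import time

def wait_for_healthz(check, timeout=120.0, interval=0.5):
    """Poll `check` (a callable returning an HTTP status code) until it
    returns 200 or the timeout elapses, mirroring the loop in the log."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            if check() == 200:
                return True
        except OSError:
            pass  # connection refused while the apiserver is still starting
        time.sleep(interval)
    return False

# Simulate an apiserver that returns 500 twice, then 200.
responses = iter([500, 500, 200])
assert wait_for_healthz(lambda: next(responses), timeout=5, interval=0.01)
```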
	* I0310 21:19:59.542911   18444 cni.go:74] Creating CNI manager for ""
	* I0310 21:19:59.542911   18444 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
	* I0310 21:19:59.542911   18444 system_pods.go:41] waiting for kube-system pods to appear ...
	* I0310 21:19:59.994482   18444 system_pods.go:57] 7 kube-system pods found
	* I0310 21:19:59.994482   18444 system_pods.go:59] "coredns-74ff55c5b-4w6mn" [0b339996-09da-4e8b-82cb-967e22a2b12a] Running
	* I0310 21:19:59.994482   18444 system_pods.go:59] "etcd-embed-certs-20210310205017-6496" [f5043b9b-833a-4260-9106-ceecd7868ac4] Running
	* I0310 21:19:59.994482   18444 system_pods.go:59] "kube-apiserver-embed-certs-20210310205017-6496" [2caeba21-12bc-4e46-9383-776709339a99] Running
	* I0310 21:19:59.994482   18444 system_pods.go:59] "kube-controller-manager-embed-certs-20210310205017-6496" [f21834cd-7a9e-4aa5-b349-41acc025428d] Running
	* I0310 21:19:59.994482   18444 system_pods.go:59] "kube-proxy-p6jnj" [b4673698-b2df-494d-8de6-1008fa8348af] Running
	* I0310 21:19:59.994482   18444 system_pods.go:59] "kube-scheduler-embed-certs-20210310205017-6496" [fc2f78fc-009b-4a87-accf-3e42164fb38e] Running
	* I0310 21:19:59.994482   18444 system_pods.go:59] "storage-provisioner" [2659761d-6d3f-43ea-b1d9-04ec50811e6f] Running
	* I0310 21:19:59.994482   18444 system_pods.go:72] duration metric: took 451.5717ms to wait for pod list to return data ...
	* I0310 21:19:59.994482   18444 node_conditions.go:101] verifying NodePressure condition ...
	* I0310 21:20:00.241941   18444 node_conditions.go:121] node storage ephemeral capacity is 65792556Ki
	* I0310 21:20:00.242235   18444 node_conditions.go:122] node cpu capacity is 4
	* I0310 21:20:00.242427   18444 node_conditions.go:104] duration metric: took 247.9447ms to run NodePressure ...
	* I0310 21:20:00.242427   18444 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	* I0310 21:19:58.812971   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4588.pem --> /usr/share/ca-certificates/4588.pem (1338 bytes)
	* I0310 21:19:59.246404   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1728.pem --> /usr/share/ca-certificates/1728.pem (1338 bytes)
	* I0310 21:19:59.579082   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4944.pem --> /usr/share/ca-certificates/4944.pem (1338 bytes)
	* I0310 21:19:59.811144   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\800.pem --> /usr/share/ca-certificates/800.pem (1338 bytes)
	* I0310 21:20:00.034942   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\2512.pem --> /usr/share/ca-certificates/2512.pem (1338 bytes)
	* I0310 21:20:00.397080   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5396.pem --> /usr/share/ca-certificates/5396.pem (1338 bytes)
	* I0310 21:20:00.669933   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4452.pem --> /usr/share/ca-certificates/4452.pem (1338 bytes)
	* I0310 21:20:00.895851   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1476.pem --> /usr/share/ca-certificates/1476.pem (1338 bytes)
	* I0310 21:20:01.166832   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9520.pem --> /usr/share/ca-certificates/9520.pem (1338 bytes)
	* I0310 21:20:01.397656   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\12056.pem --> /usr/share/ca-certificates/12056.pem (1338 bytes)
	* I0310 21:20:01.674341   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\232.pem --> /usr/share/ca-certificates/232.pem (1338 bytes)
	* I0310 21:20:01.882764   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\10992.pem --> /usr/share/ca-certificates/10992.pem (1338 bytes)
	* I0310 21:20:02.096289   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3920.pem --> /usr/share/ca-certificates/3920.pem (1338 bytes)
	* I0310 21:20:02.302118   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6552.pem --> /usr/share/ca-certificates/6552.pem (1338 bytes)
	* I0310 21:20:02.549354   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5172.pem --> /usr/share/ca-certificates/5172.pem (1338 bytes)
	* I0310 21:20:02.810313   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	* I0310 21:20:03.104695   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6496.pem --> /usr/share/ca-certificates/6496.pem (1338 bytes)
	* I0310 21:20:03.356282   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\352.pem --> /usr/share/ca-certificates/352.pem (1338 bytes)
	* I0310 21:20:03.527855   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7432.pem --> /usr/share/ca-certificates/7432.pem (1338 bytes)
	* I0310 21:20:01.397227    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6496.pem --> /usr/share/ca-certificates/6496.pem (1338 bytes)
	* I0310 21:20:01.561969    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	* I0310 21:20:01.880499    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\10992.pem --> /usr/share/ca-certificates/10992.pem (1338 bytes)
	* I0310 21:20:02.105521    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3516.pem --> /usr/share/ca-certificates/3516.pem (1338 bytes)
	* I0310 21:20:02.368274    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7160.pem --> /usr/share/ca-certificates/7160.pem (1338 bytes)
	* I0310 21:20:02.573574    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\800.pem --> /usr/share/ca-certificates/800.pem (1338 bytes)
	* I0310 21:20:02.860938    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1476.pem --> /usr/share/ca-certificates/1476.pem (1338 bytes)
	* I0310 21:20:03.187229    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7024.pem --> /usr/share/ca-certificates/7024.pem (1338 bytes)
	* I0310 21:20:03.486889    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6492.pem --> /usr/share/ca-certificates/6492.pem (1338 bytes)
	* I0310 21:20:03.753838    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3056.pem --> /usr/share/ca-certificates/3056.pem (1338 bytes)
	* I0310 21:20:03.942909    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7440.pem --> /usr/share/ca-certificates/7440.pem (1338 bytes)
	* I0310 21:20:04.282794    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	* I0310 21:20:04.490657    7648 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	* I0310 21:20:04.666798    7648 ssh_runner.go:149] Run: openssl version
	* I0310 21:20:04.721768    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1140.pem && ln -fs /usr/share/ca-certificates/1140.pem /etc/ssl/certs/1140.pem"
	* I0310 21:20:04.791917    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1140.pem
	* I0310 21:20:04.864431    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 02:25 /usr/share/ca-certificates/1140.pem
	* I0310 21:20:04.874642    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1140.pem
	* I0310 21:20:04.941496    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1140.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:05.022355    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5700.pem && ln -fs /usr/share/ca-certificates/5700.pem /etc/ssl/certs/5700.pem"
	* I0310 21:20:05.097657    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5700.pem
	* I0310 21:20:05.134383    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  1 19:58 /usr/share/ca-certificates/5700.pem
	* I0310 21:20:05.148888    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5700.pem
	* I0310 21:20:05.202235    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5700.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:05.279386    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12056.pem && ln -fs /usr/share/ca-certificates/12056.pem /etc/ssl/certs/12056.pem"
	* I0310 21:20:05.385810    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/12056.pem
	* I0310 21:20:05.425481    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  6 07:21 /usr/share/ca-certificates/12056.pem
	* I0310 21:20:05.438789    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12056.pem
	* I0310 21:20:05.511414    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12056.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:05.571127    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5736.pem && ln -fs /usr/share/ca-certificates/5736.pem /etc/ssl/certs/5736.pem"
	* I0310 21:20:05.683987    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5736.pem
	* I0310 21:20:05.719269    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 25 23:18 /usr/share/ca-certificates/5736.pem
	* I0310 21:20:05.732146    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5736.pem
	* I0310 21:20:05.778833    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5736.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:05.853468    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5172.pem && ln -fs /usr/share/ca-certificates/5172.pem /etc/ssl/certs/5172.pem"
	* I0310 21:20:05.955399    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5172.pem
	* I0310 21:20:06.025691    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 26 21:25 /usr/share/ca-certificates/5172.pem
	* I0310 21:20:06.048563    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5172.pem
	* I0310 21:20:06.092612    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5172.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:06.178977    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/232.pem && ln -fs /usr/share/ca-certificates/232.pem /etc/ssl/certs/232.pem"
	* I0310 21:20:03.828789   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5372.pem --> /usr/share/ca-certificates/5372.pem (1338 bytes)
	* I0310 21:20:04.016658   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7440.pem --> /usr/share/ca-certificates/7440.pem (1338 bytes)
	* I0310 21:20:04.320386   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5700.pem --> /usr/share/ca-certificates/5700.pem (1338 bytes)
	* I0310 21:20:04.565188   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5040.pem --> /usr/share/ca-certificates/5040.pem (1338 bytes)
	* I0310 21:20:04.906484   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6368.pem --> /usr/share/ca-certificates/6368.pem (1338 bytes)
	* I0310 21:20:05.179192   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7160.pem --> /usr/share/ca-certificates/7160.pem (1338 bytes)
	* I0310 21:20:05.491335   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1156.pem --> /usr/share/ca-certificates/1156.pem (1338 bytes)
	* I0310 21:20:05.818818   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4052.pem --> /usr/share/ca-certificates/4052.pem (1338 bytes)
	* I0310 21:20:06.133703   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9088.pem --> /usr/share/ca-certificates/9088.pem (1338 bytes)
	* I0310 21:20:06.385266   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6692.pem --> /usr/share/ca-certificates/6692.pem (1338 bytes)
	* I0310 21:20:06.832303   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6856.pem --> /usr/share/ca-certificates/6856.pem (1338 bytes)
	* I0310 21:20:07.146678   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8464.pem --> /usr/share/ca-certificates/8464.pem (1338 bytes)
	* I0310 21:20:07.574957   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6492.pem --> /usr/share/ca-certificates/6492.pem (1338 bytes)
	* I0310 21:20:07.877048   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	* I0310 21:20:08.123859   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8748.pem --> /usr/share/ca-certificates/8748.pem (1338 bytes)
	* I0310 21:20:08.493171   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3516.pem --> /usr/share/ca-certificates/3516.pem (1338 bytes)
	* I0310 21:20:06.348671    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/232.pem
	* I0310 21:20:06.388722    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 28 02:13 /usr/share/ca-certificates/232.pem
	* I0310 21:20:06.407608    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/232.pem
	* I0310 21:20:06.455960    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/232.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:06.539330    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6552.pem && ln -fs /usr/share/ca-certificates/6552.pem /etc/ssl/certs/6552.pem"
	* I0310 21:20:06.616668    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6552.pem
	* I0310 21:20:06.679707    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 19 22:08 /usr/share/ca-certificates/6552.pem
	* I0310 21:20:06.689708    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6552.pem
	* I0310 21:20:06.732635    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6552.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:06.807564    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5396.pem && ln -fs /usr/share/ca-certificates/5396.pem /etc/ssl/certs/5396.pem"
	* I0310 21:20:06.914267    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5396.pem
	* I0310 21:20:06.944082    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  8 23:38 /usr/share/ca-certificates/5396.pem
	* I0310 21:20:06.960447    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5396.pem
	* I0310 21:20:07.040502    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5396.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:07.122274    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3920.pem && ln -fs /usr/share/ca-certificates/3920.pem /etc/ssl/certs/3920.pem"
	* I0310 21:20:07.248099    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3920.pem
	* I0310 21:20:07.290740    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 22:06 /usr/share/ca-certificates/3920.pem
	* I0310 21:20:07.301914    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3920.pem
	* I0310 21:20:07.389647    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3920.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:07.452924    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4452.pem && ln -fs /usr/share/ca-certificates/4452.pem /etc/ssl/certs/4452.pem"
	* I0310 21:20:07.509124    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4452.pem
	* I0310 21:20:07.559566    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 22 23:48 /usr/share/ca-certificates/4452.pem
	* I0310 21:20:07.574957    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4452.pem
	* I0310 21:20:07.650923    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4452.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:07.730693    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9520.pem && ln -fs /usr/share/ca-certificates/9520.pem /etc/ssl/certs/9520.pem"
	* I0310 21:20:07.788137    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9520.pem
	* I0310 21:20:07.827537    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 14:54 /usr/share/ca-certificates/9520.pem
	* I0310 21:20:07.837863    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9520.pem
	* I0310 21:20:07.954514    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9520.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:08.055834    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6496.pem && ln -fs /usr/share/ca-certificates/6496.pem /etc/ssl/certs/6496.pem"
	* I0310 21:20:08.131202    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6496.pem
	* I0310 21:20:08.165624    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 19:16 /usr/share/ca-certificates/6496.pem
	* I0310 21:20:08.181243    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6496.pem
	* I0310 21:20:08.257288    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6496.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:08.338349    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	* I0310 21:20:08.428444    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	* I0310 21:20:08.503079    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Jan  5 23:33 /usr/share/ca-certificates/minikubeCA.pem
	* I0310 21:20:08.518351    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	* I0310 21:20:08.604605    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	* I0310 21:20:08.715698    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1728.pem && ln -fs /usr/share/ca-certificates/1728.pem /etc/ssl/certs/1728.pem"
	* I0310 21:20:08.817071    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1728.pem
	* I0310 21:20:08.868403    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 20:57 /usr/share/ca-certificates/1728.pem
	* I0310 21:20:08.884564    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1728.pem
	* I0310 21:20:08.932484    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:10.180220    7648 ssh_runner.go:189] Completed: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0": (1.2477367s)
	* I0310 21:20:10.190473    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1984.pem && ln -fs /usr/share/ca-certificates/1984.pem /etc/ssl/certs/1984.pem"
	* I0310 21:20:10.268771    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1984.pem
	* I0310 21:20:10.338151    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 21:55 /usr/share/ca-certificates/1984.pem
	* I0310 21:20:10.355278    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1984.pem
	* I0310 21:20:10.425971    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1984.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:10.497019    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7432.pem && ln -fs /usr/share/ca-certificates/7432.pem /etc/ssl/certs/7432.pem"
	* I0310 21:20:10.653124    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7432.pem
	* I0310 21:20:10.686481    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 17:58 /usr/share/ca-certificates/7432.pem
	* I0310 21:20:10.708671    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7432.pem
	* I0310 21:20:10.880254    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7432.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:10.963456    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9088.pem && ln -fs /usr/share/ca-certificates/9088.pem /etc/ssl/certs/9088.pem"
	* I0310 21:20:11.068423    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9088.pem
	* I0310 21:20:11.145427    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 00:22 /usr/share/ca-certificates/9088.pem
	* I0310 21:20:11.174316    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9088.pem
	* I0310 21:20:11.270079    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9088.pem /etc/ssl/certs/51391683.0"
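	* 
	* The `openssl x509 -hash` / `ln -fs ... /etc/ssl/certs/<hash>.0` pairs above install each CA certificate under OpenSSL's hashed-directory lookup convention: the link name is the certificate's subject hash plus a numeric suffix, which is how OpenSSL finds trusted certs at verification time. A sketch of the idempotent link step in Python (the paths and the hash value are illustrative stand-ins, not output of a real `openssl x509 -hash` run):

```python
import os
import tempfile

def install_hashed_link(cert_path, certs_dir, subject_hash, suffix=0):
    """Create a <hash>.<n>-style symlink to cert_path, force-replacing any
    existing link, like the `test -L || ln -fs` commands in the log."""
    link = os.path.join(certs_dir, f"{subject_hash}.{suffix}")
    if os.path.islink(link) or os.path.exists(link):
        os.remove(link)  # ln -fs semantics: replace whatever is there
    os.symlink(cert_path, link)
    return link

# Demonstrate against a throwaway directory.
d = tempfile.mkdtemp()
cert = os.path.join(d, "1140.pem")
with open(cert, "w") as f:
    f.write("dummy cert\n")
link = install_hashed_link(cert, d, "51391683")
assert os.readlink(link) == cert
```

Running it a second time with the same arguments leaves a single valid link in place, which is why the log can safely re-link `/etc/ssl/certs/51391683.0` for each certificate in turn.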
	* I0310 21:20:06.996088   13364 main.go:121] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-01-29 14:31:32.000000000 +0000
	* +++ /lib/systemd/system/docker.service.new	2021-03-10 21:19:55.737608000 +0000
	* @@ -1,30 +1,32 @@
	*  [Unit]
	*  Description=Docker Application Container Engine
	*  Documentation=https://docs.docker.com
	* +BindsTo=containerd.service
	*  After=network-online.target firewalld.service containerd.service
	*  Wants=network-online.target
	* -Requires=docker.socket containerd.service
	* +Requires=docker.socket
	* +StartLimitBurst=3
	* +StartLimitIntervalSec=60
	*  
	*  [Service]
	*  Type=notify
	* -# the default is not to use systemd for cgroups because the delegate issues still
	* -# exists and systemd currently does not support the cgroup feature set required
	* -# for containers run by docker
	* -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	* -ExecReload=/bin/kill -s HUP $MAINPID
	* -TimeoutSec=0
	* -RestartSec=2
	* -Restart=always
	* -
	* -# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	* -# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	* -# to make them work for either version of systemd.
	* -StartLimitBurst=3
	* +Restart=on-failure
	*  
	* -# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	* -# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	* -# this option work for either version of systemd.
	* -StartLimitInterval=60s
	* +
	* +
	* +# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* +# The base configuration already specifies an 'ExecStart=...' command. The first directive
	* +# here is to clear out that command inherited from the base configuration. Without this,
	* +# the command from the base configuration and the command specified here are treated as
	* +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* +# will catch this invalid input and refuse to start the service with an error like:
	* +#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* +
	* +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* +# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* +ExecStart=
	* +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* +ExecReload=/bin/kill -s HUP $MAINPID
	*  
	*  # Having non-zero Limit*s causes performance problems due to accounting overhead
	*  # in the kernel. We recommend using cgroups to do container-local accounting.
	* @@ -32,16 +34,16 @@
	*  LimitNPROC=infinity
	*  LimitCORE=infinity
	*  
	* -# Comment TasksMax if your systemd version does not support it.
	* -# Only systemd 226 and above support this option.
	* +# Uncomment TasksMax if your systemd version supports it.
	* +# Only systemd 226 and above support this option.
	*  TasksMax=infinity
	* +TimeoutStartSec=0
	*  
	*  # set delegate yes so that systemd does not reset the cgroups of docker containers
	*  Delegate=yes
	*  
	*  # kill only the docker process, not all processes in the cgroup
	*  KillMode=process
	* -OOMScoreAdjust=-500
	*  
	*  [Install]
	*  WantedBy=multi-user.target
	* Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	* Executing: /lib/systemd/systemd-sysv-install enable docker
	* 
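The diff above hinges on the doubled `ExecStart=` lines: the empty one clears the command inherited from the base unit, and the following line supplies the replacement. Without the reset, systemd would see two active `ExecStart=` settings and refuse to start the service, exactly as the in-file comment warns. A minimal sketch of that reset idiom, written to a temp file rather than `/lib/systemd/system` (the dockerd command line here is illustrative, not the log's full one):

```shell
#!/bin/sh
# Sketch of the ExecStart-reset idiom shown in the diff above, assuming
# only that an override unit file is being generated to a temp path.
set -eu
unit=$(mktemp)
cat > "$unit" <<'EOF'
[Service]
# An empty ExecStart= clears any command inherited from the base unit;
# otherwise systemd rejects a second ExecStart= for non-oneshot services.
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF
grep -c '^ExecStart=' "$unit"   # prints 2
```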
	* I0310 21:20:06.996088   13364 machine.go:91] provisioned docker machine in 20.7210994s
	* I0310 21:20:06.996088   13364 client.go:171] LocalClient.Create took 43.6692034s
	* I0310 21:20:06.996475   13364 start.go:168] duration metric: libmachine.API.Create for "custom-weave-20210310211916-6496" took 43.6695903s
	* I0310 21:20:06.996475   13364 start.go:267] post-start starting for "custom-weave-20210310211916-6496" (driver="docker")
	* I0310 21:20:06.996475   13364 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	* I0310 21:20:07.012309   13364 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	* I0310 21:20:07.020843   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	* I0310 21:20:07.619668   13364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55203 SSHKeyPath:C:\Users\jenkins\.minikube\machines\custom-weave-20210310211916-6496\id_rsa Username:docker}
	* I0310 21:20:08.103502   13364 ssh_runner.go:189] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.0911938s)
	* I0310 21:20:08.119158   13364 ssh_runner.go:149] Run: cat /etc/os-release
	* I0310 21:20:08.155271   13364 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	* I0310 21:20:08.155768   13364 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	* I0310 21:20:08.155768   13364 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	* I0310 21:20:08.155768   13364 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	* I0310 21:20:08.155768   13364 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	* I0310 21:20:08.156219   13364 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	* I0310 21:20:08.159672   13364 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	* I0310 21:20:08.160847   13364 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	* I0310 21:20:08.178599   13364 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	* I0310 21:20:08.266485   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	* I0310 21:20:08.478653   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	* I0310 21:20:08.759287   13364 start.go:270] post-start completed in 1.7628145s
	* I0310 21:20:08.788795   13364 cli_runner.go:115] Run: docker container inspect -f "" custom-weave-20210310211916-6496
	* I0310 21:20:09.392585   13364 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\config.json ...
	* I0310 21:20:09.433218   13364 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	* I0310 21:20:09.442030   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	* I0310 21:20:10.035498   13364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55203 SSHKeyPath:C:\Users\jenkins\.minikube\machines\custom-weave-20210310211916-6496\id_rsa Username:docker}
	* I0310 21:20:10.473507   13364 ssh_runner.go:189] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.0390943s)
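The `df -h /var | awk 'NR==2{print $5}'` probe above pulls the `Use%` column from the second line of `df` output, i.e. the data row for the filesystem holding `/var`. The same extraction on canned input, so it is independent of the local filesystem layout:

```shell
#!/bin/sh
# Same awk extraction as the log's disk-usage probe, run on fixed sample
# df output (the sizes here are made up for illustration).
set -eu
df_output='Filesystem      Size  Used Avail Use% Mounted on
overlay         235G   22G  201G  10% /var'

# NR==2 selects the data row; $5 is the whitespace-delimited Use% field.
printf '%s\n' "$df_output" | awk 'NR==2{print $5}'   # prints 10%
```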
	* I0310 21:20:10.473507   13364 start.go:129] duration metric: createHost completed in 47.1637532s
	* I0310 21:20:10.473507   13364 start.go:80] releasing machines lock for "custom-weave-20210310211916-6496", held for 47.1722949s
	* I0310 21:20:10.482408   13364 cli_runner.go:115] Run: docker container inspect -f "" custom-weave-20210310211916-6496
	* I0310 21:20:11.132975   13364 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	* I0310 21:20:11.141675   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	* I0310 21:20:11.144345   13364 ssh_runner.go:149] Run: systemctl --version
	* I0310 21:20:11.163006   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	* I0310 21:20:11.850599   13364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55203 SSHKeyPath:C:\Users\jenkins\.minikube\machines\custom-weave-20210310211916-6496\id_rsa Username:docker}
	* I0310 21:20:07.091206   18752 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748: (20.4745956s)
	* I0310 21:20:07.092206   18752 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 from cache
	* I0310 21:20:07.092412   18752 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512
	* I0310 21:20:07.108291   18752 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512
	* I0310 21:20:08.889534   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5736.pem --> /usr/share/ca-certificates/5736.pem (1338 bytes)
	* I0310 21:20:10.323778   16712 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1984.pem --> /usr/share/ca-certificates/1984.pem (1338 bytes)
	* I0310 21:20:10.708671   16712 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	* I0310 21:20:10.996850   16712 ssh_runner.go:149] Run: openssl version
	* I0310 21:20:11.169797   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7160.pem && ln -fs /usr/share/ca-certificates/7160.pem /etc/ssl/certs/7160.pem"
	* I0310 21:20:11.340484   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7160.pem
	* I0310 21:20:11.393718   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 12 04:51 /usr/share/ca-certificates/7160.pem
	* I0310 21:20:11.407849   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7160.pem
	* I0310 21:20:11.504423   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7160.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:11.653825   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1156.pem && ln -fs /usr/share/ca-certificates/1156.pem /etc/ssl/certs/1156.pem"
	* I0310 21:20:11.757588   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1156.pem
	* I0310 21:20:11.828973   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 00:26 /usr/share/ca-certificates/1156.pem
	* I0310 21:20:11.839254   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1156.pem
	* I0310 21:20:11.914128   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1156.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:12.044010   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4052.pem && ln -fs /usr/share/ca-certificates/4052.pem /etc/ssl/certs/4052.pem"
	* I0310 21:20:12.203988   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4052.pem
	* I0310 21:20:12.250076   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 18:40 /usr/share/ca-certificates/4052.pem
	* I0310 21:20:12.269621   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4052.pem
	* I0310 21:20:12.361200   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4052.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:12.462367   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9088.pem && ln -fs /usr/share/ca-certificates/9088.pem /etc/ssl/certs/9088.pem"
	* I0310 21:20:12.729501   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9088.pem
	* I0310 21:20:12.765962   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 00:22 /usr/share/ca-certificates/9088.pem
	* I0310 21:20:12.779528   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9088.pem
	* I0310 21:20:12.857026   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9088.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:12.965669   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6368.pem && ln -fs /usr/share/ca-certificates/6368.pem /etc/ssl/certs/6368.pem"
	* I0310 21:20:13.062179   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6368.pem
	* I0310 21:20:13.084583   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 19:11 /usr/share/ca-certificates/6368.pem
	* I0310 21:20:13.101480   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6368.pem
	* I0310 21:20:13.161901   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6368.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:13.249568   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6492.pem && ln -fs /usr/share/ca-certificates/6492.pem /etc/ssl/certs/6492.pem"
	* I0310 21:20:13.328300   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6492.pem
	* I0310 21:20:13.362343   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 01:11 /usr/share/ca-certificates/6492.pem
	* I0310 21:20:13.382372   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6492.pem
	* I0310 21:20:13.454731   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6492.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:13.527550   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	* I0310 21:20:13.612026   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	* I0310 21:20:13.649984   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Jan  5 23:33 /usr/share/ca-certificates/minikubeCA.pem
	* I0310 21:20:13.663938   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	* I0310 21:20:11.867049   13364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55203 SSHKeyPath:C:\Users\jenkins\.minikube\machines\custom-weave-20210310211916-6496\id_rsa Username:docker}
	* I0310 21:20:12.387518   13364 ssh_runner.go:189] Completed: systemctl --version: (1.2431751s)
	* I0310 21:20:12.397950   13364 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	* I0310 21:20:12.783932   13364 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.6506415s)
	* I0310 21:20:12.799375   13364 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	* I0310 21:20:12.980397   13364 cruntime.go:206] skipping containerd shutdown because we are bound to it
	* I0310 21:20:12.993217   13364 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	* I0310 21:20:13.101480   13364 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	* image-endpoint: unix:///var/run/dockershim.sock
	* " | sudo tee /etc/crictl.yaml"
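The multi-line command above writes `/etc/crictl.yaml` so that `crictl` talks to the dockershim socket. Note the `printf %s "…" | sudo tee FILE` shape: a plain `>` redirection would be performed by the unprivileged calling shell, whereas `tee` runs under `sudo` and can write into `/etc`. A sketch of the same write against a temp path, where no privileges are needed:

```shell
#!/bin/sh
# Same printf-pipe-tee pattern as the log, targeting a temp file so the
# sketch needs no root; the endpoint values match the logged config.
set -eu
cfg=$(mktemp)
printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | tee "$cfg" > /dev/null
grep -c 'dockershim.sock' "$cfg"   # prints 2
```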
	* I0310 21:20:13.425122   13364 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	* I0310 21:20:13.541328   13364 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	* I0310 21:20:14.720508   13364 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.1791807s)
	* I0310 21:20:14.730788   13364 ssh_runner.go:149] Run: sudo systemctl start docker
	* I0310 21:20:14.918712   13364 ssh_runner.go:149] Run: docker version --format 
	* I0310 21:20:16.108983   13364 ssh_runner.go:189] Completed: docker version --format : (1.1902732s)
	* I0310 21:20:11.418355    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4588.pem && ln -fs /usr/share/ca-certificates/4588.pem /etc/ssl/certs/4588.pem"
	* I0310 21:20:11.552920    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4588.pem
	* I0310 21:20:11.604091    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  3 21:41 /usr/share/ca-certificates/4588.pem
	* I0310 21:20:11.613004    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4588.pem
	* I0310 21:20:11.711564    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4588.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:11.818975    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5040.pem && ln -fs /usr/share/ca-certificates/5040.pem /etc/ssl/certs/5040.pem"
	* I0310 21:20:11.953596    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5040.pem
	* I0310 21:20:11.991904    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 08:36 /usr/share/ca-certificates/5040.pem
	* I0310 21:20:12.011000    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5040.pem
	* I0310 21:20:12.079047    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5040.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:12.139988    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1476.pem && ln -fs /usr/share/ca-certificates/1476.pem /etc/ssl/certs/1476.pem"
	* I0310 21:20:12.219518    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1476.pem
	* I0310 21:20:12.257643    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:50 /usr/share/ca-certificates/1476.pem
	* I0310 21:20:12.267694    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1476.pem
	* I0310 21:20:12.360618    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1476.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:12.461836    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7024.pem && ln -fs /usr/share/ca-certificates/7024.pem /etc/ssl/certs/7024.pem"
	* I0310 21:20:12.708194    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7024.pem
	* I0310 21:20:12.747527    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 23:11 /usr/share/ca-certificates/7024.pem
	* I0310 21:20:12.761975    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7024.pem
	* I0310 21:20:12.823969    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7024.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:12.930268    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10992.pem && ln -fs /usr/share/ca-certificates/10992.pem /etc/ssl/certs/10992.pem"
	* I0310 21:20:13.117178    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/10992.pem
	* I0310 21:20:13.164716    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 21:44 /usr/share/ca-certificates/10992.pem
	* I0310 21:20:13.174821    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10992.pem
	* I0310 21:20:13.211768    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10992.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:13.359363    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3516.pem && ln -fs /usr/share/ca-certificates/3516.pem /etc/ssl/certs/3516.pem"
	* I0310 21:20:13.510525    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3516.pem
	* I0310 21:20:13.581956    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 19:10 /usr/share/ca-certificates/3516.pem
	* I0310 21:20:13.591660    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3516.pem
	* I0310 21:20:13.699954    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3516.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:13.785770    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7160.pem && ln -fs /usr/share/ca-certificates/7160.pem /etc/ssl/certs/7160.pem"
	* I0310 21:20:13.888111    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7160.pem
	* I0310 21:20:13.943345    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 12 04:51 /usr/share/ca-certificates/7160.pem
	* I0310 21:20:13.962144    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7160.pem
	* I0310 21:20:14.027804    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7160.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:14.122373    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/800.pem && ln -fs /usr/share/ca-certificates/800.pem /etc/ssl/certs/800.pem"
	* I0310 21:20:14.216297    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/800.pem
	* I0310 21:20:14.255824    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 24 01:48 /usr/share/ca-certificates/800.pem
	* I0310 21:20:14.274349    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/800.pem
	* I0310 21:20:14.318274    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/800.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:14.478501    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7440.pem && ln -fs /usr/share/ca-certificates/7440.pem /etc/ssl/certs/7440.pem"
	* I0310 21:20:14.560514    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7440.pem
	* I0310 21:20:14.586514    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 13 14:39 /usr/share/ca-certificates/7440.pem
	* I0310 21:20:14.600017    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7440.pem
	* I0310 21:20:14.743531    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7440.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:14.839899    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7452.pem && ln -fs /usr/share/ca-certificates/7452.pem /etc/ssl/certs/7452.pem"
	* I0310 21:20:14.993797    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7452.pem
	* I0310 21:20:15.040030    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 20 00:41 /usr/share/ca-certificates/7452.pem
	* I0310 21:20:15.058609    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7452.pem
	* I0310 21:20:15.117345    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7452.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:15.243423    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6492.pem && ln -fs /usr/share/ca-certificates/6492.pem /etc/ssl/certs/6492.pem"
	* I0310 21:20:15.390045    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6492.pem
	* I0310 21:20:15.417232    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 01:11 /usr/share/ca-certificates/6492.pem
	* I0310 21:20:15.426720    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6492.pem
	* I0310 21:20:15.554709    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6492.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:15.644683    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3056.pem && ln -fs /usr/share/ca-certificates/3056.pem /etc/ssl/certs/3056.pem"
	* I0310 21:20:15.699004    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3056.pem
	* I0310 21:20:15.740189    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:11 /usr/share/ca-certificates/3056.pem
	* I0310 21:20:15.745989    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3056.pem
	* I0310 21:20:15.795869    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3056.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:15.907408    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6856.pem && ln -fs /usr/share/ca-certificates/6856.pem /etc/ssl/certs/6856.pem"
	* I0310 21:20:15.983438    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6856.pem
	* I0310 21:20:16.056122    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 00:21 /usr/share/ca-certificates/6856.pem
	* I0310 21:20:16.072079    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6856.pem
	* I0310 21:20:16.155123    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6856.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:16.246747    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4052.pem && ln -fs /usr/share/ca-certificates/4052.pem /etc/ssl/certs/4052.pem"
	* I0310 21:20:16.113566   13364 out.go:150] * Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	* I0310 21:20:16.127580   13364 cli_runner.go:115] Run: docker exec -t custom-weave-20210310211916-6496 dig +short host.docker.internal
	* I0310 21:20:13.759810   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	* I0310 21:20:13.946237   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8748.pem && ln -fs /usr/share/ca-certificates/8748.pem /etc/ssl/certs/8748.pem"
	* I0310 21:20:14.181777   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8748.pem
	* I0310 21:20:14.238213   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 19:09 /usr/share/ca-certificates/8748.pem
	* I0310 21:20:14.247244   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8748.pem
	* I0310 21:20:14.339002   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8748.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:14.420319   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3516.pem && ln -fs /usr/share/ca-certificates/3516.pem /etc/ssl/certs/3516.pem"
	* I0310 21:20:14.522899   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3516.pem
	* I0310 21:20:14.594790   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 19:10 /usr/share/ca-certificates/3516.pem
	* I0310 21:20:14.623690   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3516.pem
	* I0310 21:20:14.687262   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3516.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:14.790209   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6692.pem && ln -fs /usr/share/ca-certificates/6692.pem /etc/ssl/certs/6692.pem"
	* I0310 21:20:14.887049   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6692.pem
	* I0310 21:20:14.928800   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 20:42 /usr/share/ca-certificates/6692.pem
	* I0310 21:20:14.940100   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6692.pem
	* I0310 21:20:15.011996   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6692.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:15.112918   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6856.pem && ln -fs /usr/share/ca-certificates/6856.pem /etc/ssl/certs/6856.pem"
	* I0310 21:20:15.198153   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6856.pem
	* I0310 21:20:15.237897   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 00:21 /usr/share/ca-certificates/6856.pem
	* I0310 21:20:15.254719   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6856.pem
	* I0310 21:20:15.309716   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6856.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:15.386622   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8464.pem && ln -fs /usr/share/ca-certificates/8464.pem /etc/ssl/certs/8464.pem"
	* I0310 21:20:15.449243   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8464.pem
	* I0310 21:20:15.473701   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:32 /usr/share/ca-certificates/8464.pem
	* I0310 21:20:15.486654   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8464.pem
	* I0310 21:20:15.527681   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8464.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:15.612270   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1984.pem && ln -fs /usr/share/ca-certificates/1984.pem /etc/ssl/certs/1984.pem"
	* I0310 21:20:15.713769   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1984.pem
	* I0310 21:20:15.736861   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 21:55 /usr/share/ca-certificates/1984.pem
	* I0310 21:20:15.745989   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1984.pem
	* I0310 21:20:15.804428   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1984.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:15.912426   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5736.pem && ln -fs /usr/share/ca-certificates/5736.pem /etc/ssl/certs/5736.pem"
	* I0310 21:20:15.985597   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5736.pem
	* I0310 21:20:16.014245   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 25 23:18 /usr/share/ca-certificates/5736.pem
	* I0310 21:20:16.024030   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5736.pem
	* I0310 21:20:16.072079   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5736.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:16.158365   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7024.pem && ln -fs /usr/share/ca-certificates/7024.pem /etc/ssl/certs/7024.pem"
	* I0310 21:20:16.266969   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7024.pem
	* I0310 21:20:16.349464   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 23:11 /usr/share/ca-certificates/7024.pem
	* I0310 21:20:16.370871   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7024.pem
	* I0310 21:20:16.468752   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7024.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:16.585954   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4588.pem && ln -fs /usr/share/ca-certificates/4588.pem /etc/ssl/certs/4588.pem"
	* I0310 21:20:16.727172   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4588.pem
	* I0310 21:20:16.768426   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  3 21:41 /usr/share/ca-certificates/4588.pem
	* I0310 21:20:16.786577   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4588.pem
	* I0310 21:20:16.846579   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4588.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:16.926701   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1728.pem && ln -fs /usr/share/ca-certificates/1728.pem /etc/ssl/certs/1728.pem"
	* I0310 21:20:17.089764   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1728.pem
	* I0310 21:20:17.151070   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 20:57 /usr/share/ca-certificates/1728.pem
	* I0310 21:20:17.162549   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1728.pem
	* I0310 21:20:17.239208   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:17.337772   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4944.pem && ln -fs /usr/share/ca-certificates/4944.pem /etc/ssl/certs/4944.pem"
	* I0310 21:20:17.452625   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4944.pem
	* I0310 21:20:17.492902   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  9 23:40 /usr/share/ca-certificates/4944.pem
	* I0310 21:20:17.509922   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4944.pem
	* I0310 21:20:17.566459   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4944.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:17.765694   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3056.pem && ln -fs /usr/share/ca-certificates/3056.pem /etc/ssl/certs/3056.pem"
	* I0310 21:20:17.963489   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3056.pem
	* I0310 21:20:18.056678   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:11 /usr/share/ca-certificates/3056.pem
	* I0310 21:20:18.066179   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3056.pem
	* I0310 21:20:18.217973   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3056.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:18.494368   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1140.pem && ln -fs /usr/share/ca-certificates/1140.pem /etc/ssl/certs/1140.pem"
	* I0310 21:20:18.630104   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1140.pem
	* I0310 21:20:16.360992    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4052.pem
	* I0310 21:20:16.406468    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 18:40 /usr/share/ca-certificates/4052.pem
	* I0310 21:20:16.418886    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4052.pem
	* I0310 21:20:16.505780    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4052.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:16.622554    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4944.pem && ln -fs /usr/share/ca-certificates/4944.pem /etc/ssl/certs/4944.pem"
	* I0310 21:20:16.828342    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4944.pem
	* I0310 21:20:16.859486    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  9 23:40 /usr/share/ca-certificates/4944.pem
	* I0310 21:20:16.876564    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4944.pem
	* I0310 21:20:16.996731    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4944.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:17.114042    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5372.pem && ln -fs /usr/share/ca-certificates/5372.pem /etc/ssl/certs/5372.pem"
	* I0310 21:20:17.203151    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5372.pem
	* I0310 21:20:17.252121    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 23 00:40 /usr/share/ca-certificates/5372.pem
	* I0310 21:20:17.265954    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5372.pem
	* I0310 21:20:17.327616    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5372.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:17.466508    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8464.pem && ln -fs /usr/share/ca-certificates/8464.pem /etc/ssl/certs/8464.pem"
	* I0310 21:20:17.581704    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8464.pem
	* I0310 21:20:17.613416    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:32 /usr/share/ca-certificates/8464.pem
	* I0310 21:20:17.632278    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8464.pem
	* I0310 21:20:17.693059    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8464.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:17.817068    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6368.pem && ln -fs /usr/share/ca-certificates/6368.pem /etc/ssl/certs/6368.pem"
	* I0310 21:20:18.061187    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6368.pem
	* I0310 21:20:18.174132    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 19:11 /usr/share/ca-certificates/6368.pem
	* I0310 21:20:18.183529    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6368.pem
	* I0310 21:20:18.282180    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6368.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:18.455720    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8748.pem && ln -fs /usr/share/ca-certificates/8748.pem /etc/ssl/certs/8748.pem"
	* I0310 21:20:18.659785    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8748.pem
	* I0310 21:20:18.732247    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 19:09 /usr/share/ca-certificates/8748.pem
	* I0310 21:20:18.750371    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8748.pem
	* I0310 21:20:18.889892    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8748.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:19.125387    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6692.pem && ln -fs /usr/share/ca-certificates/6692.pem /etc/ssl/certs/6692.pem"
	* I0310 21:20:19.269771    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6692.pem
	* I0310 21:20:19.415084    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 20:42 /usr/share/ca-certificates/6692.pem
	* I0310 21:20:19.440323    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6692.pem
	* I0310 21:20:19.564493    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6692.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:19.782285    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2512.pem && ln -fs /usr/share/ca-certificates/2512.pem /etc/ssl/certs/2512.pem"
	* I0310 21:20:20.112355    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2512.pem
	* I0310 21:20:20.160820    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:32 /usr/share/ca-certificates/2512.pem
	* I0310 21:20:20.168813    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2512.pem
	* I0310 21:20:20.240026    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2512.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:20.314488    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/352.pem && ln -fs /usr/share/ca-certificates/352.pem /etc/ssl/certs/352.pem"
	* I0310 21:20:20.491246    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/352.pem
	* I0310 21:20:20.597173    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 12 14:51 /usr/share/ca-certificates/352.pem
	* I0310 21:20:20.607684    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/352.pem
	* I0310 21:20:20.731636    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/352.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:20.895769    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1156.pem && ln -fs /usr/share/ca-certificates/1156.pem /etc/ssl/certs/1156.pem"
	* I0310 21:20:21.041819    7648 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1156.pem
	* I0310 21:20:21.128679    7648 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 00:26 /usr/share/ca-certificates/1156.pem
	* I0310 21:20:21.148815    7648 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1156.pem
	* I0310 21:20:21.288807    7648 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1156.pem /etc/ssl/certs/51391683.0"
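The run above repeats one pattern per extra CA certificate: hash the file with `openssl x509 -hash -noout -in <cert>.pem`, then create a hash-named symlink under `/etc/ssl/certs` only if one is not already there. A minimal sketch of that idempotent link step, using a throwaway directory and names borrowed from the log (not the real certificate tree):

```shell
# Sketch of the idempotent CA-symlink step seen in the log. The directory
# and the hash name 51391683.0 are demo stand-ins for /etc/ssl/certs.
dir=$(mktemp -d)
touch "$dir/5736.pem"                      # stand-in for the CA certificate
# In the log the hash name comes from: openssl x509 -hash -noout -in 5736.pem
# Link only if the hash-named symlink does not exist yet:
test -L "$dir/51391683.0" || ln -fs "$dir/5736.pem" "$dir/51391683.0"
readlink "$dir/51391683.0"                 # resolves to the 5736.pem path
```

Because of the `test -L … ||` guard, re-running the step is a no-op once the symlink exists, which is why the log can point `51391683.0` at a different cert on each pass without errors.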
	* I0310 21:20:17.476228   13364 cli_runner.go:168] Completed: docker exec -t custom-weave-20210310211916-6496 dig +short host.docker.internal: (1.3486498s)
	* I0310 21:20:17.476699   13364 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	* I0310 21:20:17.491933   13364 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	* I0310 21:20:17.524899   13364 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
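The `/etc/hosts` update above follows a rewrite-then-copy pattern: filter out any stale `host.minikube.internal` mapping, append the fresh one, and copy the temp file into place. A sketch with a demo file standing in for `/etc/hosts` (so no `sudo` is needed):

```shell
# Sketch of the host.minikube.internal rewrite from the log: drop any old
# mapping, append the fresh one, then copy the temp file into place.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n192.168.65.9\thost.minikube.internal\n' > "$hosts"
{ grep -v 'host.minikube.internal$' "$hosts"; \
  printf '192.168.65.2\thost.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
grep -c 'host.minikube.internal' "$hosts"   # prints 1: only the new mapping
```

Writing to a temp file and then `cp`-ing it back avoids truncating the live hosts file while `grep` is still reading it, which is why the log's one-liner goes through `/tmp/h.$$`.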
	* I0310 21:20:17.675510   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	* I0310 21:20:18.411673   13364 localpath.go:92] copying C:\Users\jenkins\.minikube\client.crt -> C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\client.crt
	* I0310 21:20:18.418579   13364 localpath.go:117] copying C:\Users\jenkins\.minikube\client.key -> C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\client.key
	* I0310 21:20:18.426036   13364 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 21:20:18.426036   13364 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 21:20:18.434630   13364 ssh_runner.go:149] Run: docker images --format :
	* I0310 21:20:19.415084   13364 docker.go:423] Got preloaded images: 
	* I0310 21:20:19.415084   13364 docker.go:429] k8s.gcr.io/kube-proxy:v1.20.2 wasn't preloaded
	* I0310 21:20:19.428922   13364 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	* I0310 21:20:19.542898   13364 ssh_runner.go:149] Run: which lz4
	* I0310 21:20:19.667989   13364 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	* I0310 21:20:19.710149   13364 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	* stdout:
	* 
	* stderr:
	* stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	* I0310 21:20:19.710469   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (515083977 bytes)
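The preload sequence above is a check-then-transfer: `stat` the target on the node, and a nonzero exit (file absent) triggers the scp of the ~515 MB preloaded-images tarball. A sketch of that gate with a demo path in place of the real `/preloaded.tar.lz4` (assumes GNU `stat`, as on the minikube node):

```shell
# Sketch of the preload existence check from the log: a failing stat means
# the tarball is missing and must be transferred.
f=$(mktemp -u)                              # a path that does not exist
if ! stat -c "%s %y" "$f" >/dev/null 2>&1; then
  echo "missing: would transfer preload tarball"
fi
```

Checking size and mtime (`%s %y`) rather than mere existence also lets the caller detect a stale or truncated tarball from an earlier interrupted transfer.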
	* I0310 21:20:18.725942   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 02:25 /usr/share/ca-certificates/1140.pem
	* I0310 21:20:18.770755   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1140.pem
	* I0310 21:20:18.883900   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1140.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:18.981936   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1476.pem && ln -fs /usr/share/ca-certificates/1476.pem /etc/ssl/certs/1476.pem"
	* I0310 21:20:19.225212   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1476.pem
	* I0310 21:20:19.297172   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:50 /usr/share/ca-certificates/1476.pem
	* I0310 21:20:19.306199   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1476.pem
	* I0310 21:20:19.409480   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1476.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:19.554038   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9520.pem && ln -fs /usr/share/ca-certificates/9520.pem /etc/ssl/certs/9520.pem"
	* I0310 21:20:19.730533   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9520.pem
	* I0310 21:20:19.808841   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 14:54 /usr/share/ca-certificates/9520.pem
	* I0310 21:20:19.821857   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9520.pem
	* I0310 21:20:20.042731   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9520.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:20.188342   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12056.pem && ln -fs /usr/share/ca-certificates/12056.pem /etc/ssl/certs/12056.pem"
	* I0310 21:20:20.356103   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/12056.pem
	* I0310 21:20:20.406331   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  6 07:21 /usr/share/ca-certificates/12056.pem
	* I0310 21:20:20.421437   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12056.pem
	* I0310 21:20:20.491246   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12056.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:20.601422   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/232.pem && ln -fs /usr/share/ca-certificates/232.pem /etc/ssl/certs/232.pem"
	* I0310 21:20:20.778482   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/232.pem
	* I0310 21:20:20.866721   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 28 02:13 /usr/share/ca-certificates/232.pem
	* I0310 21:20:20.892630   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/232.pem
	* I0310 21:20:21.075025   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/232.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:21.297596   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/800.pem && ln -fs /usr/share/ca-certificates/800.pem /etc/ssl/certs/800.pem"
	* I0310 21:20:21.714407   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/800.pem
	* I0310 21:20:21.902985   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 24 01:48 /usr/share/ca-certificates/800.pem
	* I0310 21:20:21.914911   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/800.pem
	* I0310 21:20:22.066554   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/800.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:22.215966   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2512.pem && ln -fs /usr/share/ca-certificates/2512.pem /etc/ssl/certs/2512.pem"
	* I0310 21:20:22.370362   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2512.pem
	* I0310 21:20:22.435246   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:32 /usr/share/ca-certificates/2512.pem
	* I0310 21:20:22.448269   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2512.pem
	* I0310 21:20:22.556736   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2512.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:22.725032   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5396.pem && ln -fs /usr/share/ca-certificates/5396.pem /etc/ssl/certs/5396.pem"
	* I0310 21:20:22.985197   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5396.pem
	* I0310 21:20:23.046525   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  8 23:38 /usr/share/ca-certificates/5396.pem
	* I0310 21:20:23.057645   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5396.pem
	* I0310 21:20:23.209811   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5396.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:23.458646   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4452.pem && ln -fs /usr/share/ca-certificates/4452.pem /etc/ssl/certs/4452.pem"
	* I0310 21:20:23.601791   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4452.pem
	* I0310 21:20:21.456738    7648 kubeadm.go:385] StartCluster: {Name:cilium-20210310211546-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:cilium-20210310211546-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 21:20:21.467362    7648 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format=
	* I0310 21:20:23.099741    7648 ssh_runner.go:189] Completed: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format=: (1.6323818s)
	* I0310 21:20:23.112013    7648 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	* I0310 21:20:23.389322    7648 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	* I0310 21:20:23.572039    7648 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	* I0310 21:20:23.584120    7648 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	* I0310 21:20:23.801736    7648 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	* stdout:
	* 
	* stderr:
	* ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	* I0310 21:20:23.801736    7648 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	* I0310 21:20:23.770557   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 22 23:48 /usr/share/ca-certificates/4452.pem
	* I0310 21:20:23.800853   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4452.pem
	* I0310 21:20:23.957510   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4452.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:24.190903   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3920.pem && ln -fs /usr/share/ca-certificates/3920.pem /etc/ssl/certs/3920.pem"
	* I0310 21:20:24.420922   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3920.pem
	* I0310 21:20:24.484795   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 22:06 /usr/share/ca-certificates/3920.pem
	* I0310 21:20:24.495748   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3920.pem
	* I0310 21:20:24.594397   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3920.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:24.889417   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6552.pem && ln -fs /usr/share/ca-certificates/6552.pem /etc/ssl/certs/6552.pem"
	* I0310 21:20:25.062090   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6552.pem
	* I0310 21:20:25.158783   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 19 22:08 /usr/share/ca-certificates/6552.pem
	* I0310 21:20:25.174722   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6552.pem
	* I0310 21:20:25.303963   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6552.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:25.455607   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10992.pem && ln -fs /usr/share/ca-certificates/10992.pem /etc/ssl/certs/10992.pem"
	* I0310 21:20:25.596894   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/10992.pem
	* I0310 21:20:25.723103   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 21:44 /usr/share/ca-certificates/10992.pem
	* I0310 21:20:25.733645   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10992.pem
	* I0310 21:20:25.890676   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10992.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:26.172853   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7452.pem && ln -fs /usr/share/ca-certificates/7452.pem /etc/ssl/certs/7452.pem"
	* I0310 21:20:26.407957   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7452.pem
	* I0310 21:20:26.549726   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 20 00:41 /usr/share/ca-certificates/7452.pem
	* I0310 21:20:26.566636   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7452.pem
	* I0310 21:20:26.672620   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7452.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:26.823055   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6496.pem && ln -fs /usr/share/ca-certificates/6496.pem /etc/ssl/certs/6496.pem"
	* I0310 21:20:27.282105   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6496.pem
	* I0310 21:20:27.411501   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 19:16 /usr/share/ca-certificates/6496.pem
	* I0310 21:20:27.421784   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6496.pem
	* I0310 21:20:27.538993   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6496.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:27.763574   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/352.pem && ln -fs /usr/share/ca-certificates/352.pem /etc/ssl/certs/352.pem"
	* I0310 21:20:27.951354   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/352.pem
	* I0310 21:20:27.983962   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 12 14:51 /usr/share/ca-certificates/352.pem
	* I0310 21:20:27.993564   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/352.pem
	* I0310 21:20:28.084983   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/352.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:28.208897   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5172.pem && ln -fs /usr/share/ca-certificates/5172.pem /etc/ssl/certs/5172.pem"
	* I0310 21:20:28.382501   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5172.pem
	* I0310 21:20:28.448214   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 26 21:25 /usr/share/ca-certificates/5172.pem
	* I0310 21:20:28.458244   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5172.pem
	* I0310 21:20:28.649017   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5172.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:30.266769   18752 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512: (23.1585086s)
	* I0310 21:20:30.267299   18752 cache_images.go:259] Transferred and loaded C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 from cache
	* I0310 21:20:30.267299   18752 docker.go:167] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984
	* I0310 21:20:30.277367   18752 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984
	* I0310 21:20:28.889687   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5372.pem && ln -fs /usr/share/ca-certificates/5372.pem /etc/ssl/certs/5372.pem"
	* I0310 21:20:29.173901   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5372.pem
	* I0310 21:20:29.228275   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 23 00:40 /usr/share/ca-certificates/5372.pem
	* I0310 21:20:29.240725   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5372.pem
	* I0310 21:20:29.316757   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5372.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:29.410786   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7440.pem && ln -fs /usr/share/ca-certificates/7440.pem /etc/ssl/certs/7440.pem"
	* I0310 21:20:29.686761   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7440.pem
	* I0310 21:20:29.738289   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 13 14:39 /usr/share/ca-certificates/7440.pem
	* I0310 21:20:29.755809   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7440.pem
	* I0310 21:20:29.844362   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7440.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:30.011586   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5700.pem && ln -fs /usr/share/ca-certificates/5700.pem /etc/ssl/certs/5700.pem"
	* I0310 21:20:30.187406   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5700.pem
	* I0310 21:20:30.227919   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  1 19:58 /usr/share/ca-certificates/5700.pem
	* I0310 21:20:30.237760   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5700.pem
	* I0310 21:20:30.432978   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5700.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:30.570750   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5040.pem && ln -fs /usr/share/ca-certificates/5040.pem /etc/ssl/certs/5040.pem"
	* I0310 21:20:30.831275   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5040.pem
	* I0310 21:20:30.883286   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 08:36 /usr/share/ca-certificates/5040.pem
	* I0310 21:20:30.907566   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5040.pem
	* I0310 21:20:31.014528   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5040.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:31.198152   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7432.pem && ln -fs /usr/share/ca-certificates/7432.pem /etc/ssl/certs/7432.pem"
	* I0310 21:20:31.468409   16712 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7432.pem
	* I0310 21:20:31.523629   16712 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 17:58 /usr/share/ca-certificates/7432.pem
	* I0310 21:20:31.534408   16712 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7432.pem
	* I0310 21:20:31.772365   16712 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7432.pem /etc/ssl/certs/51391683.0"
	* I0310 21:20:31.910272   16712 kubeadm.go:385] StartCluster: {Name:calico-20210310211603-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:calico-20210310211603-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.6 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 21:20:31.920760   16712 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format=
	* I0310 21:20:34.024325   16712 ssh_runner.go:189] Completed: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format=: (2.1029824s)
	* I0310 21:20:34.039937   16712 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	* I0310 21:20:34.237423   16712 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	* I0310 21:20:34.472139   16712 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	* I0310 21:20:34.487892   16712 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	* I0310 21:20:34.694647   16712 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	* stdout:
	* 
	* stderr:
	* ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	* I0310 21:20:34.695105   16712 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	

-- /stdout --
** stderr ** 
	E0310 21:20:13.296773   18820 logs.go:183] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.5-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	E0310 21:20:29.471349   18820 out.go:340] unable to execute * 2021-03-10 21:15:27.824289 W | etcdserver: request "header:<ID:3266086224885917321 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-public/default\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-public/default\" value_size:114 >> failure:<>>" with result "size:16" took too long (202.7968ms) to execute
	: html/template:* 2021-03-10 21:15:27.824289 W | etcdserver: request "header:<ID:3266086224885917321 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-public/default\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-public/default\" value_size:114 >> failure:<>>" with result "size:16" took too long (202.7968ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 21:20:29.484925   18820 out.go:340] unable to execute * 2021-03-10 21:15:28.286353 W | etcdserver: request "header:<ID:3266086224885917322 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/configmaps/kube-public/kube-root-ca.crt\" mod_revision:0 > success:<request_put:<key:\"/registry/configmaps/kube-public/kube-root-ca.crt\" value_size:1342 >> failure:<>>" with result "size:16" took too long (426.935ms) to execute
	: html/template:* 2021-03-10 21:15:28.286353 W | etcdserver: request "header:<ID:3266086224885917322 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/configmaps/kube-public/kube-root-ca.crt\" mod_revision:0 > success:<request_put:<key:\"/registry/configmaps/kube-public/kube-root-ca.crt\" value_size:1342 >> failure:<>>" with result "size:16" took too long (426.935ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 21:20:29.532518   18820 out.go:340] unable to execute * 2021-03-10 21:15:29.984775 W | etcdserver: request "header:<ID:3266086224885917355 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/default/default\" mod_revision:410 > success:<request_put:<key:\"/registry/serviceaccounts/default/default\" value_size:145 >> failure:<request_range:<key:\"/registry/serviceaccounts/default/default\" > >>" with result "size:16" took too long (158.2429ms) to execute
	: html/template:* 2021-03-10 21:15:29.984775 W | etcdserver: request "header:<ID:3266086224885917355 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/default/default\" mod_revision:410 > success:<request_put:<key:\"/registry/serviceaccounts/default/default\" value_size:145 >> failure:<request_range:<key:\"/registry/serviceaccounts/default/default\" > >>" with result "size:16" took too long (158.2429ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 21:20:48.576535   18820 out.go:335] unable to parse "* I0310 21:19:18.466511   13364 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 21:19:18.466511   13364 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 21:20:48.591043   18820 out.go:335] unable to parse "* I0310 21:19:19.515678   13364 cli_runner.go:168] Completed: docker system info --format \"{{json .}}\": (1.0489584s)\n": template: * I0310 21:19:19.515678   13364 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0489584s)
	:1: function "json" not defined - returning raw string.
	E0310 21:20:48.656962   18820 out.go:335] unable to parse "* I0310 21:19:21.718817   13364 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 21:19:21.718817   13364 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 21:20:48.780190   18820 out.go:340] unable to execute * I0310 21:19:23.355621   13364 cli_runner.go:115] Run: docker network inspect custom-weave-20210310211916-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	: template: * I0310 21:19:23.355621   13364 cli_runner.go:115] Run: docker network inspect custom-weave-20210310211916-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	:1:288: executing "* I0310 21:19:23.355621   13364 cli_runner.go:115] Run: docker network inspect custom-weave-20210310211916-6496 --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\"\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 21:20:48.787599   18820 out.go:340] unable to execute * W0310 21:19:23.945979   13364 cli_runner.go:162] docker network inspect custom-weave-20210310211916-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	: template: * W0310 21:19:23.945979   13364 cli_runner.go:162] docker network inspect custom-weave-20210310211916-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	:1:283: executing "* W0310 21:19:23.945979   13364 cli_runner.go:162] docker network inspect custom-weave-20210310211916-6496 --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\" returned with exit code 1\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 21:20:48.868119   18820 out.go:340] unable to execute * I0310 21:19:24.545208   13364 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	: template: * I0310 21:19:24.545208   13364 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	:1:262: executing "* I0310 21:19:24.545208   13364 cli_runner.go:115] Run: docker network inspect bridge --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\"\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 21:20:48.982696   18820 out.go:335] unable to parse "* I0310 21:19:31.919197   13364 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 21:19:31.919197   13364 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 21:20:49.206930   18820 out.go:335] unable to parse "* I0310 21:19:32.931561   13364 cli_runner.go:168] Completed: docker system info --format \"{{json .}}\": (1.0123655s)\n": template: * I0310 21:19:32.931561   13364 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0123655s)
	:1: function "json" not defined - returning raw string.
	E0310 21:20:49.225529   18820 out.go:335] unable to parse "* I0310 21:19:32.941872   13364 cli_runner.go:115] Run: docker info --format \"'{{json .SecurityOptions}}'\"\n": template: * I0310 21:19:32.941872   13364 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	:1: function "json" not defined - returning raw string.
	E0310 21:20:50.765343   18820 out.go:340] unable to execute * I0310 21:19:46.284451   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	: template: * I0310 21:19:46.284451   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	:1:96: executing "* I0310 21:19:46.284451   13364 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" custom-weave-20210310211916-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:20:52.537875   18820 out.go:335] unable to parse "* I0310 21:19:46.883929   13364 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55203 <nil> <nil>}\n": template: * I0310 21:19:46.883929   13364 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55203 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:20:52.558127   18820 out.go:340] unable to execute * I0310 21:19:48.009317   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	: template: * I0310 21:19:48.009317   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	:1:96: executing "* I0310 21:19:48.009317   13364 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" custom-weave-20210310211916-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:20:52.569766   18820 out.go:335] unable to parse "* I0310 21:19:48.743072   13364 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55203 <nil> <nil>}\n": template: * I0310 21:19:48.743072   13364 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55203 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:20:52.674309   18820 out.go:340] unable to execute * I0310 21:19:50.498028   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	: template: * I0310 21:19:50.498028   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	:1:96: executing "* I0310 21:19:50.498028   13364 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" custom-weave-20210310211916-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:20:53.203284   18820 out.go:340] unable to execute * I0310 21:19:53.259429   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	: template: * I0310 21:19:53.259429   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	:1:96: executing "* I0310 21:19:53.259429   13364 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" custom-weave-20210310211916-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:20:53.218507   18820 out.go:335] unable to parse "* I0310 21:19:53.877988   13364 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55203 <nil> <nil>}\n": template: * I0310 21:19:53.877988   13364 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55203 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:20:53.248076   18820 out.go:340] unable to execute * I0310 21:19:54.503924   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	: template: * I0310 21:19:54.503924   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	:1:96: executing "* I0310 21:19:54.503924   13364 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" custom-weave-20210310211916-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:20:53.264105   18820 out.go:335] unable to parse "* I0310 21:19:55.120891   13364 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55203 <nil> <nil>}\n": template: * I0310 21:19:55.120891   13364 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55203 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:20:53.677255   18820 out.go:340] unable to execute * I0310 21:19:55.757611   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	: template: * I0310 21:19:55.757611   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	:1:96: executing "* I0310 21:19:55.757611   13364 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" custom-weave-20210310211916-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:20:53.687254   18820 out.go:335] unable to parse "* I0310 21:19:56.383184   13364 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55203 <nil> <nil>}\n": template: * I0310 21:19:56.383184   13364 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55203 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:20:55.840099   18820 out.go:340] unable to execute * I0310 21:20:07.020843   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	: template: * I0310 21:20:07.020843   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	:1:96: executing "* I0310 21:20:07.020843   13364 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" custom-weave-20210310211916-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:20:55.927142   18820 out.go:340] unable to execute * I0310 21:20:09.442030   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	: template: * I0310 21:20:09.442030   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	:1:96: executing "* I0310 21:20:09.442030   13364 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" custom-weave-20210310211916-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:20:55.980101   18820 out.go:340] unable to execute * I0310 21:20:11.141675   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	: template: * I0310 21:20:11.141675   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	:1:96: executing "* I0310 21:20:11.141675   13364 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" custom-weave-20210310211916-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:20:55.993438   18820 out.go:340] unable to execute * I0310 21:20:11.163006   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	: template: * I0310 21:20:11.163006   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	:1:96: executing "* I0310 21:20:11.163006   13364 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" custom-weave-20210310211916-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:20:56.884878   18820 out.go:340] unable to execute * I0310 21:20:17.675510   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	: template: * I0310 21:20:17.675510   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	:1:96: executing "* I0310 21:20:17.675510   13364 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"8443/tcp\") 0).HostPort}}'\" custom-weave-20210310211916-6496\n" at <index .NetworkSettings.Ports "8443/tcp">: error calling index: index of untyped nil - returning raw string.
	! unable to fetch logs for: describe nodes

** /stderr **

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
helpers_test.go:245: failed logs error: exit status 110
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (265.09s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (808.61s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:164: (dbg) Run:  kubectl --context default-k8s-different-port-20210310205202-6496 create -f testdata\busybox.yaml
start_stop_delete_test.go:164: (dbg) Done: kubectl --context default-k8s-different-port-20210310205202-6496 create -f testdata\busybox.yaml: (4.5032686s)
start_stop_delete_test.go:164: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:335: "busybox" [ee642075-256f-4ee8-896d-23e79a3cd1a6] Pending

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
helpers_test.go:335: "busybox" [ee642075-256f-4ee8-896d-23e79a3cd1a6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
helpers_test.go:335: "busybox" [ee642075-256f-4ee8-896d-23e79a3cd1a6] Running
start_stop_delete_test.go:164: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 7m5.273472s
start_stop_delete_test.go:164: (dbg) Run:  kubectl --context default-k8s-different-port-20210310205202-6496 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:164: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20210310205202-6496 exec busybox -- /bin/sh -c "ulimit -n": context deadline exceeded (0s)
start_stop_delete_test.go:164: ulimit: context deadline exceeded
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:226: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: docker inspect <======
helpers_test.go:227: (dbg) Run:  docker inspect default-k8s-different-port-20210310205202-6496
helpers_test.go:231: (dbg) docker inspect default-k8s-different-port-20210310205202-6496:

-- stdout --
	[
	    {
	        "Id": "0a23063c9caf23247a91e5aecf6bc099b6cf7bb25b976b7b1cd7fb301a122b63",
	        "Created": "2021-03-10T20:52:32.7671922Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 234137,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-03-10T20:52:38.8381413Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a776c544501ab7f8d55c0f9d8df39bc284df5e744ef1ab4fa59bbd753c98d5f6",
	        "ResolvConfPath": "/var/lib/docker/containers/0a23063c9caf23247a91e5aecf6bc099b6cf7bb25b976b7b1cd7fb301a122b63/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0a23063c9caf23247a91e5aecf6bc099b6cf7bb25b976b7b1cd7fb301a122b63/hostname",
	        "HostsPath": "/var/lib/docker/containers/0a23063c9caf23247a91e5aecf6bc099b6cf7bb25b976b7b1cd7fb301a122b63/hosts",
	        "LogPath": "/var/lib/docker/containers/0a23063c9caf23247a91e5aecf6bc099b6cf7bb25b976b7b1cd7fb301a122b63/0a23063c9caf23247a91e5aecf6bc099b6cf7bb25b976b7b1cd7fb301a122b63-json.log",
	        "Name": "/default-k8s-different-port-20210310205202-6496",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20210310205202-6496:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/49def441c4e7bb61f97cac838d6cef129db59d4118b65b5e2a40bbb0d1251cfe-init/diff:/var/lib/docker/overlay2/e5312d15a8a915e93a04215cc746ab0f3ee777399eec34d39c62e1159ded6475/diff:/var/lib/docker/overlay2/7a0a882052451678339ecb9d7440ec6d5af2189761df37cbde268f01c77df460/diff:/var/lib/docker/overlay2/746bde091f748e661ba6c95c88447941058b3ed556ae8093cbbb4e946b8cc8fc/diff:/var/lib/docker/overlay2/fc816087ea38e2594891ac87d2dada91401ed5005ffde006568049c9be7a9cb3/diff:/var/lib/docker/overlay2/c938fe5d14accfe7fb1cfd038f5f0c36e7c4e36df3f270be6928a759dfc0318c/diff:/var/lib/docker/overlay2/79f1d61f3af3e525b3f9a25c7bdaddec8275b7d0abab23c045e084119ae755dc/diff:/var/lib/docker/overlay2/90cf1530750ebebdcb04a136dc1eff0ae9c5f7d0f826d2942e3bc67cc0d42206/diff:/var/lib/docker/overlay2/c4edb0a62bb52174a4a17530aa58139aaca1b2f67bef36b9ac59af93c93e2c94/diff:/var/lib/docker/overlay2/684c18bb05910b09352e8708238b2fa6512de91e323e59bd78eacb12954baeb1/diff:/var/lib/docker/overlay2/2e3b944d36fea0e106f5f8885b981d040914c224a656b6480e4ccf00c624527c/diff:/var/lib/docker/overlay2/87489103e2d5dffa2fc32cae802418fe57adcbe87d86e2828e51be8620ef72d6/diff:/var/lib/docker/overlay2/1ceb557df677600f55a42e739f85c2aaa6ce2a1311fb93d5cc655d3e5c25ad62/diff:/var/lib/docker/overlay2/a49e38adf04466e822fa1b2fef7a5de41bb43b706f64d6d440f7e9f95f86634d/diff:/var/lib/docker/overlay2/9dc51530b3628075c33d53a96d8db831d206f65190ca62a4730d525c9d2433bc/diff:/var/lib/docker/overlay2/53f43d443cc953bcbb3b9fe305be45d40de2233dd6a1e94a6e0120c143df01b9/diff:/var/lib/docker/overlay2/5449e801ebacbe613538db4de658f2419f742a0327f7c0f5079ee525f64c2f45/diff:/var/lib/docker/overlay2/ea06a62429770eade2e9df8b04495201586fe6a867b2d3f827db06031f237171/diff:/var/lib/docker/overlay2/d6c5e6e0bb04112b08a45e5a42f364eaa6a1d4d0384b8833c6f268a32b51f9fd/diff:/var/lib/docker/overlay2/6ccb699e5aa7954f17536e35273361a64a06f22e3951b5adad1acd3645302d27/diff:/var/lib/docker/overlay2/004fc7885cd3ddf4f532a6047a393b87947996f738f36ee3e048130b82676264/diff:/var/lib/docker/overlay2/28ff0102771d8fb3c2957c482cc5dde1a6d041144f1be78d385fd4a305b89048/diff:/var/lib/docker/overlay2/7cc8b00a8717390d3cf217bc23ebf0a54386c4dda751f8efe67a113c5f724000/diff:/var/lib/docker/overlay2/4f20f5888dc26fc4b177fad4ffbb2c9e5094bbe36dffc15530b13ee98b5ee0de/diff:/var/lib/docker/overlay2/5070ba0e8a3df0c0cb4c34bb528c4e1aa4cf13d689f2bbbb0a4033d021c3be55/diff:/var/lib/docker/overlay2/85fb29f9d3603bf7f2eb144a096a9fa432e57ecf575b0c26cc8e06a79e791202/diff:/var/lib/docker/overlay2/27786674974d44dc21a18719c8c334da44a78f825923f5e86233f5acb0fb9a6b/diff:/var/lib/docker/overlay2/6161fd89f27732c46f547c0e38ff9f16ab0dded81cef66938df3935f21afad40/diff:/var/lib/docker/overlay2/222f85b8439a41d62b9939a4060303543930a89e81bc2fbb3f5211b3bcc73501/diff:/var/lib/docker/overlay2/0ab769dc143c949b9367c48721351c8571d87199847b23acedce0134a8cf4d0d/diff:/var/lib/docker/overlay2/a3d1b5f3c1ccf55415e08e306d69739f2d6a4346a4b20ff9273ac1417c146a72/diff:/var/lib/docker/overlay2/a6111408a4deb25c6259bdabf49b0b77a4ae38a1c9c76c9dd238ab00e4377edd/diff:/var/lib/docker/overlay2/beb22abaee6a1cc2ce6935aa31a8ea6aa14ba428d35b5c2423d1d9e4b5b1e88e/diff:/var/lib/docker/overlay2/0c228b1d0cd4cc1d3df10a7397af2a91846bfbad745d623822dc200ff7ddd4f7/diff:/var/lib/docker/overlay2/3c7f265116e8b14ac4c171583da8e68ad42c63e9faa735c10d5b01c2889c03d8/diff:/var/lib/docker/overlay2/85c9b77072d5575bcf4bc334d0e85cccec247e2ff21bc8a181ee7c1411481a92/diff:/var/lib/docker/overlay2/1b4797f63f1bf0acad8cd18715c737239cbce844810baf1378b65224c01957ff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49def441c4e7bb61f97cac838d6cef129db59d4118b65b5e2a40bbb0d1251cfe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49def441c4e7bb61f97cac838d6cef129db59d4118b65b5e2a40bbb0d1251cfe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49def441c4e7bb61f97cac838d6cef129db59d4118b65b5e2a40bbb0d1251cfe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20210310205202-6496",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20210310205202-6496/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20210310205202-6496",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20210310205202-6496",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20210310205202-6496",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7ae5c85c25a3c7d419e72b038d8b798aa18b164a3a6e0b21ae322faf61bd068b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55156"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55155"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55154"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55153"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7ae5c85c25a3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "7194d14c86ce93fa12517fc138a3f9fa7090df0ea11ec55d163012ce5bbfba6d",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.9",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:09",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "08a9fabe5ecafdd8ee96dd0a14d1906037e82623ac7365c62552213da5f59cf7",
	                    "EndpointID": "7194d14c86ce93fa12517fc138a3f9fa7090df0ea11ec55d163012ce5bbfba6d",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.9",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:09",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20210310205202-6496 -n default-k8s-different-port-20210310205202-6496
helpers_test.go:235: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20210310205202-6496 -n default-k8s-different-port-20210310205202-6496: (8.5690626s)
helpers_test.go:240: <<< TestStartStop/group/default-k8s-different-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:241: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: minikube logs <======
helpers_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe -p default-k8s-different-port-20210310205202-6496 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
helpers_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe -p default-k8s-different-port-20210310205202-6496 logs -n 25: (2m3.6031209s)
helpers_test.go:248: TestStartStop/group/default-k8s-different-port/serial/DeployApp logs: 
-- stdout --
	* ==> Docker <==
	* -- Logs begin at Wed 2021-03-10 20:52:50 UTC, end at Wed 2021-03-10 21:24:32 UTC. --
	* Mar 10 20:56:29 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T20:56:29.130829100Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	* Mar 10 20:56:29 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T20:56:29.131104300Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Mar 10 20:56:29 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T20:56:29.575176400Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	* Mar 10 20:56:29 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T20:56:29.746203500Z" level=info msg="Loading containers: start."
	* Mar 10 20:56:33 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T20:56:33.001193200Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	* Mar 10 20:56:34 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T20:56:34.156088300Z" level=info msg="Loading containers: done."
	* Mar 10 20:56:34 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T20:56:34.887786200Z" level=info msg="Docker daemon" commit=46229ca graphdriver(s)=overlay2 version=20.10.3
	* Mar 10 20:56:34 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T20:56:34.887917600Z" level=info msg="Daemon has completed initialization"
	* Mar 10 20:56:35 default-k8s-different-port-20210310205202-6496 systemd[1]: Started Docker Application Container Engine.
	* Mar 10 20:56:35 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T20:56:35.920699200Z" level=info msg="API listen on [::]:2376"
	* Mar 10 20:56:36 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T20:56:36.104917000Z" level=info msg="API listen on /var/run/docker.sock"
	* Mar 10 21:00:22 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T21:00:22.007204800Z" level=info msg="ignoring event" container=92f2244695b68ec5eaa63f7b57f392115892164fdf6a258ac5efb8aeae302062 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:04:47 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T21:04:47.006756600Z" level=error msg="stream copy error: reading from a closed fifo"
	* Mar 10 21:04:47 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T21:04:47.014853400Z" level=error msg="stream copy error: reading from a closed fifo"
	* Mar 10 21:04:50 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T21:04:50.700968500Z" level=error msg="93932b9ba010da072ccf1f4a473432df0e38f09776fcf739444b815cc718eadc cleanup: failed to delete container from containerd: no such container"
	* Mar 10 21:04:50 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T21:04:50.701074500Z" level=error msg="Handler for POST /v1.40/containers/93932b9ba010da072ccf1f4a473432df0e38f09776fcf739444b815cc718eadc/start returned error: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: read init-p: connection reset by peer: unknown"
	* Mar 10 21:13:09 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T21:13:09.324341200Z" level=info msg="ignoring event" container=af28c036766178fda8a8c02586fbc343f12e3fea03619d61d1a225860811ea29 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:17:48 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T21:17:48.332695600Z" level=info msg="ignoring event" container=c4f9eb4e103c512fa7f0880b5778e4450fffb041bd1aa1e73f14c271ebf1d6d6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:19:03 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T21:19:03.230209300Z" level=error msg="Handler for GET /v1.40/containers/0740d9d8e0c7eb6f5e404e682cef855a73b5d22bdcea215760d41bdbfdbae962/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
	* Mar 10 21:19:03 default-k8s-different-port-20210310205202-6496 dockerd[747]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
	* Mar 10 21:19:15 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T21:19:15.841492100Z" level=info msg="ignoring event" container=0740d9d8e0c7eb6f5e404e682cef855a73b5d22bdcea215760d41bdbfdbae962 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:20:19 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T21:20:19.451164000Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: net/http: TLS handshake timeout"
	* Mar 10 21:20:19 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T21:20:19.535335800Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: net/http: TLS handshake timeout"
	* Mar 10 21:20:19 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T21:20:19.861492000Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: net/http: TLS handshake timeout"
	* Mar 10 21:21:32 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T21:21:32.170500500Z" level=info msg="ignoring event" container=b1ac8f2ee561da93436846ae5a4a8495eac5f48b211e22ea2a65fb90987f1b3a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID
	* 5be7b98539266       busybox@sha256:bda689514be526d9557ad442312e5d541757c453c50b8cf2ae68597c291385a1   2 minutes ago       Running             busybox                   0                   7ffcf84cf457b
	* d9ef8592e56c3       85069258b98ac                                                                     2 minutes ago       Running             storage-provisioner       3                   ea898f38edc88
	* b1ac8f2ee561d       85069258b98ac                                                                     5 minutes ago       Exited              storage-provisioner       2                   ea898f38edc88
	* 0ce70105ef45e       bfe3a36ebd252                                                                     19 minutes ago      Running             coredns                   0                   75a0f76d075a8
	* 6cc1ac0f08225       43154ddb57a83                                                                     19 minutes ago      Running             kube-proxy                0                   b2f343ffc28df
	* bf37cfa32c856       a27166429d98e                                                                     23 minutes ago      Running             kube-controller-manager   1                   656bdeac8baf4
	* cc170dc9a3a55       ed2c44fbdd78b                                                                     25 minutes ago      Running             kube-scheduler            0                   4e85db81a5fea
	* 92f2244695b68       a27166429d98e                                                                     25 minutes ago      Exited              kube-controller-manager   0                   656bdeac8baf4
	* 44043b6a8198a       a8c2fdb8bf76e                                                                     25 minutes ago      Running             kube-apiserver            0                   9c8384d7f0b51
	* 69efae781c0b4       0369cf4303ffd                                                                     25 minutes ago      Running             etcd                      0                   6ab27d612d9ec
	* 
	* ==> coredns [0ce70105ef45] <==
	* I0310 21:05:53.292911       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-03-10 21:05:32.2685215 +0000 UTC m=+1.508836001) (total time: 21.0144185s):
	* Trace[2019727887]: [21.0144185s] [21.0144185s] END
	* E0310 21:05:53.293004       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* I0310 21:05:53.293061       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-03-10 21:05:32.268399 +0000 UTC m=+1.508713501) (total time: 21.0250283s):
	* Trace[1427131847]: [21.0250283s] [21.0250283s] END
	* E0310 21:05:53.293070       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* I0310 21:05:53.293100       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-03-10 21:05:32.2658815 +0000 UTC m=+1.506196001) (total time: 21.0277142s):
	* Trace[939984059]: [21.0277142s] [21.0277142s] END
	* E0310 21:05:53.293107       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* I0310 21:06:15.187778       1 trace.go:116] Trace[336122540]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-03-10 21:05:54.1636976 +0000 UTC m=+23.405543501) (total time: 21.0224514s):
	* Trace[336122540]: [21.0224514s] [21.0224514s] END
	* E0310 21:06:15.189266       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* I0310 21:06:15.712274       1 trace.go:116] Trace[646203300]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-03-10 21:05:54.6928835 +0000 UTC m=+23.934729401) (total time: 21.0177612s):
	* Trace[646203300]: [21.0177612s] [21.0177612s] END
	* E0310 21:06:15.712320       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* I0310 21:06:15.712513       1 trace.go:116] Trace[1747278511]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-03-10 21:05:54.6917495 +0000 UTC m=+23.933595401) (total time: 21.0192089s):
	* Trace[1747278511]: [21.0192089s] [21.0192089s] END
	* E0310 21:06:15.712528       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	* CoreDNS-1.7.0
	* linux/amd64, go1.14.4, f59c03d
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20210310205202-6496
	* Roles:              control-plane,master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=default-k8s-different-port-20210310205202-6496
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=4d52d607da107e3e4541e40b81376520ee87d4c2
	*                     minikube.k8s.io/name=default-k8s-different-port-20210310205202-6496
	*                     minikube.k8s.io/updated_at=2021_03_10T21_01_52_0700
	*                     minikube.k8s.io/version=v1.18.1
	*                     node-role.kubernetes.io/control-plane=
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Wed, 10 Mar 2021 21:00:46 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  default-k8s-different-port-20210310205202-6496
	*   AcquireTime:     <unset>
	*   RenewTime:       Wed, 10 Mar 2021 21:25:02 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Wed, 10 Mar 2021 21:22:54 +0000   Wed, 10 Mar 2021 21:00:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Wed, 10 Mar 2021 21:22:54 +0000   Wed, 10 Mar 2021 21:00:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Wed, 10 Mar 2021 21:22:54 +0000   Wed, 10 Mar 2021 21:00:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Wed, 10 Mar 2021 21:22:54 +0000   Wed, 10 Mar 2021 21:03:41 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  172.17.0.9
	*   Hostname:    default-k8s-different-port-20210310205202-6496
	* Capacity:
	*   cpu:                4
	*   ephemeral-storage:  65792556Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             20481980Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                4
	*   ephemeral-storage:  65792556Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             20481980Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 84fb46bd39d2483a97ab4430ee4a5e3a
	*   System UUID:                08addf25-0ddf-4c24-98ff-7ed3332985b4
	*   Boot ID:                    1e43cb90-c73a-415b-9855-33dabbdc5a83
	*   Kernel Version:             4.19.121-linuxkit
	*   OS Image:                   Ubuntu 20.04.1 LTS
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://20.10.3
	*   Kubelet Version:            v1.20.2
	*   Kube-Proxy Version:         v1.20.2
	* PodCIDR:                      10.244.0.0/24
	* PodCIDRs:                     10.244.0.0/24
	* Non-terminated Pods:          (8 in total)
	*   Namespace                   Name                                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                                                                      ------------  ----------  ---------------  -------------  ---
	*   default                     busybox                                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m30s
	*   kube-system                 coredns-74ff55c5b-dqrb4                                                   100m (2%)     0 (0%)      70Mi (0%)        170Mi (0%)     22m
	*   kube-system                 etcd-default-k8s-different-port-20210310205202-6496                       100m (2%)     0 (0%)      100Mi (0%)       0 (0%)         23m
	*   kube-system                 kube-apiserver-default-k8s-different-port-20210310205202-6496             250m (6%)     0 (0%)      0 (0%)           0 (0%)         21m
	*   kube-system                 kube-controller-manager-default-k8s-different-port-20210310205202-6496    200m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	*   kube-system                 kube-proxy-j2jg9                                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	*   kube-system                 kube-scheduler-default-k8s-different-port-20210310205202-6496             100m (2%)     0 (0%)      0 (0%)           0 (0%)         21m
	*   kube-system                 storage-provisioner                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests    Limits
	*   --------           --------    ------
	*   cpu                750m (18%)  0 (0%)
	*   memory             170Mi (0%)  170Mi (0%)
	*   ephemeral-storage  100Mi (0%)  0 (0%)
	*   hugepages-1Gi      0 (0%)      0 (0%)
	*   hugepages-2Mi      0 (0%)      0 (0%)
	* Events:
	*   Type    Reason                   Age   From        Message
	*   ----    ------                   ----  ----        -------
	*   Normal  Starting                 22m   kubelet     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  22m   kubelet     Node default-k8s-different-port-20210310205202-6496 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    22m   kubelet     Node default-k8s-different-port-20210310205202-6496 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     22m   kubelet     Node default-k8s-different-port-20210310205202-6496 status is now: NodeHasSufficientPID
	*   Normal  NodeNotReady             22m   kubelet     Node default-k8s-different-port-20210310205202-6496 status is now: NodeNotReady
	*   Normal  NodeAllocatableEnforced  21m   kubelet     Updated Node Allocatable limit across pods
	*   Normal  NodeReady                21m   kubelet     Node default-k8s-different-port-20210310205202-6496 status is now: NodeReady
	*   Normal  Starting                 19m   kube-proxy  Starting kube-proxy.
	* 
	* ==> dmesg <==
	* [  +0.000006]  __hrtimer_run_queues+0x117/0x1c4
	* [  +0.000004]  ? ktime_get_update_offsets_now+0x36/0x95
	* [  +0.000002]  hrtimer_interrupt+0x92/0x165
	* [  +0.000004]  hv_stimer0_isr+0x20/0x2d
	* [  +0.000008]  hv_stimer0_vector_handler+0x3b/0x57
	* [  +0.000010]  hv_stimer0_callback_vector+0xf/0x20
	* [  +0.000001]  </IRQ>
	* [  +0.000002] RIP: 0010:native_safe_halt+0x7/0x8
	* [  +0.000002] Code: 60 02 df f0 83 44 24 fc 00 48 8b 00 a8 08 74 0b 65 81 25 dd ce 6f 71 ff ff ff 7f c3 e8 ce e6 72 ff f4 c3 e8 c7 e6 72 ff fb f4 <c3> 0f 1f 44 00 00 53 e8 69 0e 82 ff 65 8b 35 83 64 6f 71 31 ff e8
	* [  +0.000001] RSP: 0018:ffffffff8f203eb0 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff12
	* [  +0.000002] RAX: ffffffff8e918b30 RBX: 0000000000000000 RCX: ffffffff8f253150
	* [  +0.000001] RDX: 000000000012167e RSI: 0000000000000000 RDI: 0000000000000001
	* [  +0.000001] RBP: 0000000000000000 R08: 00000066a1710248 R09: 0000006be2541d3e
	* [  +0.000001] R10: ffff9130ad802288 R11: 0000000000000000 R12: 0000000000000000
	* [  +0.000001] R13: ffffffff8f215780 R14: 00000000f6d76244 R15: 0000000000000000
	* [  +0.000002]  ? __sched_text_end+0x1/0x1
	* [  +0.000011]  default_idle+0x1b/0x2c
	* [  +0.000001]  do_idle+0xe5/0x216
	* [  +0.000003]  cpu_startup_entry+0x6f/0x71
	* [  +0.000003]  start_kernel+0x4f6/0x514
	* [  +0.000006]  secondary_startup_64+0xa4/0xb0
	* [  +0.000006] ---[ end trace 8aa9ce4b885e8e86 ]---
	* [ +25.977799] hrtimer: interrupt took 3356400 ns
	* [Mar10 19:08] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	* [Mar10 19:49] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	* 
	* ==> etcd [69efae781c0b] <==
	* 2021-03-10 21:23:25.226259 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:23:34.373349 W | etcdserver: read-only range request "key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (108.2578ms) to execute
	* 2021-03-10 21:23:36.094015 W | etcdserver: read-only range request "key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" " with result "range_response_count:1 size:2314" took too long (370.2687ms) to execute
	* 2021-03-10 21:23:36.411361 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:23:44.900569 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:23:55.188889 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:24:05.119571 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:24:09.805659 I | mvcc: store.index: compact 901
	* 2021-03-10 21:24:09.850456 I | mvcc: finished scheduled compaction at 901 (took 6.076ms)
	* 2021-03-10 21:24:15.034722 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:24:24.958249 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:24:35.069274 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:24:45.309638 W | etcdserver: request "header:<ID:11303041234760732519 > lease_revoke:<id:1cdc781def818f32>" with result "size:28" took too long (128.6625ms) to execute
	* 2021-03-10 21:24:45.309962 W | etcdserver: read-only range request "key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" count_only:true " with result "range_response_count:0 size:5" took too long (129.2256ms) to execute
	* 2021-03-10 21:24:45.837441 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:24:51.084988 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (152.9751ms) to execute
	* 2021-03-10 21:24:51.197606 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (133.4168ms) to execute
	* 2021-03-10 21:24:51.589519 W | etcdserver: request "header:<ID:11303041234760732539 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1069 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1056 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>" with result "size:16" took too long (187.5436ms) to execute
	* 2021-03-10 21:24:51.631947 W | etcdserver: read-only range request "key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (292.1483ms) to execute
	* 2021-03-10 21:24:51.637708 W | etcdserver: read-only range request "key:\"/registry/masterleases/172.17.0.9\" " with result "range_response_count:1 size:129" took too long (196.4428ms) to execute
	* 2021-03-10 21:24:55.050983 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:25:02.437591 W | etcdserver: read-only range request "key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" count_only:true " with result "range_response_count:0 size:5" took too long (115.3415ms) to execute
	* 2021-03-10 21:25:02.457480 W | etcdserver: read-only range request "key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" count_only:true " with result "range_response_count:0 size:7" took too long (122.7531ms) to execute
	* 2021-03-10 21:25:05.361561 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:25:14.969704 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 
	* ==> kernel <==
	*  21:25:17 up  2:25,  0 users,  load average: 145.18, 132.58, 138.45
	* Linux default-k8s-different-port-20210310205202-6496 4.19.121-linuxkit #1 SMP Tue Dec 1 17:50:32 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [44043b6a8198] <==
	* Trace[784998046]: [1.1949231s] [1.1949231s] END
	* I0310 21:23:15.482074       1 trace.go:205] Trace[1419955983]: "Update" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,client:172.17.0.9 (10-Mar-2021 21:23:14.956) (total time: 525ms):
	* Trace[1419955983]: ---"Object stored in database" 429ms (21:23:00.398)
	* Trace[1419955983]: [525.837ms] [525.837ms] END
	* I0310 21:23:15.488751       1 trace.go:205] Trace[950573369]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/default-k8s-different-port-20210310205202-6496,user-agent:kubelet/v1.20.2 (linux/amd64) kubernetes/faecb19,client:172.17.0.9 (10-Mar-2021 21:23:14.226) (total time: 1196ms):
	* Trace[950573369]: ---"Object stored in database" 1195ms (21:23:00.422)
	* Trace[950573369]: [1.1961732s] [1.1961732s] END
	* I0310 21:23:34.497885       1 trace.go:205] Trace[1569894910]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (10-Mar-2021 21:23:33.969) (total time: 527ms):
	* Trace[1569894910]: ---"Transaction prepared" 277ms (21:23:00.334)
	* Trace[1569894910]: ---"Transaction committed" 161ms (21:23:00.496)
	* Trace[1569894910]: [527.1746ms] [527.1746ms] END
	* I0310 21:23:36.254238       1 trace.go:205] Trace[94118077]: "List etcd3" key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (10-Mar-2021 21:23:35.702) (total time: 551ms):
	* Trace[94118077]: [551.0116ms] [551.0116ms] END
	* I0310 21:23:36.255628       1 trace.go:205] Trace[1477920185]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,client:172.17.0.1 (10-Mar-2021 21:23:35.697) (total time: 558ms):
	* Trace[1477920185]: ---"Listing from storage done" 552ms (21:23:00.255)
	* Trace[1477920185]: [558.2823ms] [558.2823ms] END
	* I0310 21:23:41.820195       1 client.go:360] parsed scheme: "passthrough"
	* I0310 21:23:41.820280       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 21:23:41.820305       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* I0310 21:24:16.231586       1 client.go:360] parsed scheme: "passthrough"
	* I0310 21:24:16.232096       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 21:24:16.232143       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* I0310 21:24:56.407050       1 client.go:360] parsed scheme: "passthrough"
	* I0310 21:24:56.407246       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 21:24:56.407444       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* 
	* ==> kube-controller-manager [92f2244695b6] <==
	* 	/usr/local/go/src/net/net.go:182 +0x8e
	* crypto/tls.(*atLeastReader).Read(0xc000d7c920, 0xc000daf8c0, 0x205, 0x205, 0x40, 0x45, 0xc000da9130)
	* 	/usr/local/go/src/crypto/tls/conn.go:779 +0x62
	* bytes.(*Buffer).ReadFrom(0xc00045d780, 0x4d9ed80, 0xc000d7c920, 0x40bd05, 0x3f475a0, 0x464b8a0)
	* 	/usr/local/go/src/bytes/buffer.go:204 +0xb1
	* crypto/tls.(*Conn).readFromUntil(0xc00045d500, 0x4da5040, 0xc00000e9d0, 0x5, 0xc00000e9d0, 0xc000da9238)
	* 	/usr/local/go/src/crypto/tls/conn.go:801 +0xf3
	* crypto/tls.(*Conn).readRecordOrCCS(0xc00045d500, 0xc000da9600, 0x6143d7, 0xc000dce480)
	* 	/usr/local/go/src/crypto/tls/conn.go:608 +0x115
	* crypto/tls.(*Conn).readRecord(...)
	* 	/usr/local/go/src/crypto/tls/conn.go:576
	* crypto/tls.(*Conn).readHandshake(0xc00045d500, 0xc000058000, 0xc000da9768, 0x48e91b, 0x48c4fa)
	* 	/usr/local/go/src/crypto/tls/conn.go:992 +0x6d
	* crypto/tls.(*serverHandshakeStateTLS13).readClientCertificate(0xc000da9aa0, 0x8e6, 0x0)
	* 	/usr/local/go/src/crypto/tls/handshake_server_tls13.go:770 +0x170
	* crypto/tls.(*serverHandshakeStateTLS13).handshake(0xc000da9aa0, 0xc000dac400, 0x0)
	* 	/usr/local/go/src/crypto/tls/handshake_server_tls13.go:71 +0x12a
	* crypto/tls.(*Conn).serverHandshake(0xc00045d500, 0xc000d4ad60, 0xf)
	* 	/usr/local/go/src/crypto/tls/handshake_server.go:50 +0xbc
	* crypto/tls.(*Conn).Handshake(0xc00045d500, 0x0, 0x0)
	* 	/usr/local/go/src/crypto/tls/conn.go:1362 +0xc9
	* net/http.(*conn).serve(0xc000da2320, 0x4e10da0, 0xc001044120)
	* 	/usr/local/go/src/net/http/server.go:1817 +0x1a5
	* created by net/http.(*Server).Serve
	* 	/usr/local/go/src/net/http/server.go:2969 +0x36c
	* 
	* ==> kube-controller-manager [bf37cfa32c85] <==
	* I0310 21:02:23.191614       1 shared_informer.go:247] Caches are synced for attach detach 
	* I0310 21:02:23.258138       1 shared_informer.go:247] Caches are synced for stateful set 
	* I0310 21:02:23.258194       1 shared_informer.go:247] Caches are synced for endpoint 
	* I0310 21:02:23.305391       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	* I0310 21:02:23.351277       1 shared_informer.go:247] Caches are synced for disruption 
	* I0310 21:02:23.374187       1 disruption.go:339] Sending events to api server.
	* I0310 21:02:23.374161       1 shared_informer.go:247] Caches are synced for deployment 
	* I0310 21:02:23.394635       1 shared_informer.go:247] Caches are synced for resource quota 
	* I0310 21:02:23.394684       1 shared_informer.go:247] Caches are synced for resource quota 
	* I0310 21:02:23.504250       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	* I0310 21:02:24.704508       1 range_allocator.go:373] Set node default-k8s-different-port-20210310205202-6496 PodCIDR to [10.244.0.0/24]
	* I0310 21:02:26.031533       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	* I0310 21:02:29.347666       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I0310 21:02:29.389687       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I0310 21:02:29.389720       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* I0310 21:02:32.693450       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	* E0310 21:02:33.459579       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	* I0310 21:02:34.596622       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-j2jg9"
	* I0310 21:02:36.319712       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-ghd59"
	* I0310 21:02:37.223826       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-dqrb4"
	* E0310 21:02:37.264023       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"61730e88-17b1-4e14-b6aa-8324d9c0be38", ResourceVersion:"289", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751006911, loc:(*time.Location)(0x6f31360)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0014008c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0014008e0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001400900), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0013614c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001400920), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001400940), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.2", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001400980)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0011588a0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00106f8f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0005dfdc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00000efc8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00106f958)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	* I0310 21:02:48.126395       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	* I0310 21:02:48.838406       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	* I0310 21:02:50.262753       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-ghd59"
	* I0310 21:03:48.218630       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	* 
	* ==> kube-proxy [6cc1ac0f0822] <==
	* Trace[1870764284]: [6.1678652s] [6.1678652s] END
	* I0310 21:14:25.660230       1 trace.go:205] Trace[294614200]: "iptables Monitor CANARY check" (10-Mar-2021 21:14:23.004) (total time: 2655ms):
	* Trace[294614200]: [2.6555067s] [2.6555067s] END
	* I0310 21:15:02.786797       1 trace.go:205] Trace[480409953]: "iptables restore" (10-Mar-2021 21:15:00.105) (total time: 2680ms):
	* Trace[480409953]: [2.6809324s] [2.6809324s] END
	* I0310 21:15:25.164398       1 trace.go:205] Trace[744657260]: "iptables restore" (10-Mar-2021 21:15:23.094) (total time: 2069ms):
	* Trace[744657260]: [2.0694799s] [2.0694799s] END
	* I0310 21:16:19.913067       1 trace.go:205] Trace[2050273892]: "iptables save" (10-Mar-2021 21:16:17.436) (total time: 2454ms):
	* Trace[2050273892]: [2.4541013s] [2.4541013s] END
	* I0310 21:16:23.921332       1 trace.go:205] Trace[926147253]: "iptables Monitor CANARY check" (10-Mar-2021 21:16:19.913) (total time: 3993ms):
	* Trace[926147253]: [3.9931075s] [3.9931075s] END
	* I0310 21:17:05.619174       1 trace.go:205] Trace[137995289]: "iptables save" (10-Mar-2021 21:17:00.777) (total time: 4841ms):
	* Trace[137995289]: [4.8418486s] [4.8418486s] END
	* I0310 21:17:08.257883       1 trace.go:205] Trace[830314942]: "iptables save" (10-Mar-2021 21:17:05.619) (total time: 2625ms):
	* Trace[830314942]: [2.6251377s] [2.6251377s] END
	* I0310 21:17:13.918923       1 trace.go:205] Trace[1714446172]: "iptables restore" (10-Mar-2021 21:17:08.304) (total time: 5618ms):
	* Trace[1714446172]: [5.6189044s] [5.6189044s] END
	* I0310 21:17:47.517170       1 trace.go:205] Trace[2066387782]: "iptables restore" (10-Mar-2021 21:17:44.317) (total time: 3199ms):
	* Trace[2066387782]: [3.199319s] [3.199319s] END
	* I0310 21:18:51.451943       1 trace.go:205] Trace[692697592]: "iptables Monitor CANARY check" (10-Mar-2021 21:18:47.227) (total time: 3881ms):
	* Trace[692697592]: [3.8815133s] [3.8815133s] END
	* I0310 21:21:21.772524       1 trace.go:205] Trace[1675863731]: "iptables Monitor CANARY check" (10-Mar-2021 21:21:17.226) (total time: 4545ms):
	* Trace[1675863731]: [4.5456984s] [4.5456984s] END
	* I0310 21:22:52.179734       1 trace.go:205] Trace[445590362]: "iptables Monitor CANARY check" (10-Mar-2021 21:22:47.325) (total time: 4834ms):
	* Trace[445590362]: [4.8343575s] [4.8343575s] END
	* 
	* ==> kube-scheduler [cc170dc9a3a5] <==
	* E0310 21:00:55.467249       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E0310 21:00:55.496015       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E0310 21:00:55.518162       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E0310 21:00:55.997465       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E0310 21:00:56.099435       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E0310 21:00:56.403617       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E0310 21:00:56.953691       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E0310 21:01:02.276457       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E0310 21:01:02.730159       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E0310 21:01:02.737344       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E0310 21:01:04.083495       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E0310 21:01:04.099290       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E0310 21:01:04.109238       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E0310 21:01:04.756195       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E0310 21:01:04.793252       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E0310 21:01:05.070358       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E0310 21:01:05.637368       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E0310 21:01:07.793619       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E0310 21:01:07.977719       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E0310 21:01:19.616631       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* I0310 21:02:07.860900       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* http2: server: error reading preface from client 127.0.0.1:33206: read tcp 127.0.0.1:10259->127.0.0.1:33206: read: connection reset by peer
	* I0310 21:16:40.096685       1 trace.go:205] Trace[247781435]: "Scheduling" namespace:default,name:busybox (10-Mar-2021 21:16:39.969) (total time: 123ms):
	* Trace[247781435]: ---"Computing predicates done" 121ms (21:16:00.092)
	* Trace[247781435]: [123.0195ms] [123.0195ms] END
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-03-10 20:52:50 UTC, end at Wed 2021-03-10 21:25:46 UTC. --
	* Mar 10 21:18:59 default-k8s-different-port-20210310205202-6496 kubelet[3401]: E0310 21:18:59.079654    3401 pod_workers.go:191] Error syncing pod ee642075-256f-4ee8-896d-23e79a3cd1a6 ("busybox_default(ee642075-256f-4ee8-896d-23e79a3cd1a6)"), skipping: failed to "CreatePodSandbox" for "busybox_default(ee642075-256f-4ee8-896d-23e79a3cd1a6)" with CreatePodSandboxError: "CreatePodSandbox for pod \"busybox_default(ee642075-256f-4ee8-896d-23e79a3cd1a6)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"busybox\": operation timeout: context deadline exceeded"
	* Mar 10 21:19:01 default-k8s-different-port-20210310205202-6496 kubelet[3401]: E0310 21:19:01.619241    3401 kuberuntime_manager.go:965] PodSandboxStatus of sandbox "0740d9d8e0c7eb6f5e404e682cef855a73b5d22bdcea215760d41bdbfdbae962" for pod "busybox_default(ee642075-256f-4ee8-896d-23e79a3cd1a6)" error: rpc error: code = DeadlineExceeded desc = context deadline exceeded
	* Mar 10 21:19:11 default-k8s-different-port-20210310205202-6496 kubelet[3401]: I0310 21:19:11.659441    3401 scope.go:95] [topologymanager] RemoveContainer - Container ID: c4f9eb4e103c512fa7f0880b5778e4450fffb041bd1aa1e73f14c271ebf1d6d6
	* Mar 10 21:19:17 default-k8s-different-port-20210310205202-6496 kubelet[3401]: W0310 21:19:17.500467    3401 pod_container_deletor.go:79] Container "0740d9d8e0c7eb6f5e404e682cef855a73b5d22bdcea215760d41bdbfdbae962" not found in pod's containers
	* Mar 10 21:20:06 default-k8s-different-port-20210310205202-6496 kubelet[3401]: W0310 21:20:06.957090    3401 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/busybox through plugin: invalid network status for
	* Mar 10 21:20:11 default-k8s-different-port-20210310205202-6496 kubelet[3401]: W0310 21:20:11.456967    3401 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/busybox through plugin: invalid network status for
	* Mar 10 21:20:14 default-k8s-different-port-20210310205202-6496 kubelet[3401]: W0310 21:20:14.072710    3401 pod_container_deletor.go:79] Container "0740d9d8e0c7eb6f5e404e682cef855a73b5d22bdcea215760d41bdbfdbae962" not found in pod's containers
	* Mar 10 21:20:20 default-k8s-different-port-20210310205202-6496 kubelet[3401]: E0310 21:20:20.101744    3401 remote_image.go:113] PullImage "busybox:1.28.4-glibc" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: TLS handshake timeout
	* Mar 10 21:20:20 default-k8s-different-port-20210310205202-6496 kubelet[3401]: E0310 21:20:20.190301    3401 kuberuntime_image.go:51] Pull image "busybox:1.28.4-glibc" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: TLS handshake timeout
	* Mar 10 21:20:20 default-k8s-different-port-20210310205202-6496 kubelet[3401]: E0310 21:20:20.331369    3401 kuberuntime_manager.go:829] container &Container{Name:busybox,Image:busybox:1.28.4-glibc,Command:[sleep 3600],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tldgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod busybox_default(ee642075-256f-4ee8-896d-23e79a3cd1a6): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/:
net/http: TLS handshake timeout
	* Mar 10 21:20:20 default-k8s-different-port-20210310205202-6496 kubelet[3401]: E0310 21:20:20.332038    3401 pod_workers.go:191] Error syncing pod ee642075-256f-4ee8-896d-23e79a3cd1a6 ("busybox_default(ee642075-256f-4ee8-896d-23e79a3cd1a6)"), skipping: failed to "StartContainer" for "busybox" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: TLS handshake timeout"
	* Mar 10 21:20:45 default-k8s-different-port-20210310205202-6496 kubelet[3401]: I0310 21:20:45.518208    3401 trace.go:205] Trace[629366232]: "iptables Monitor CANARY check" (10-Mar-2021 21:20:36.168) (total time: 9350ms):
	* Mar 10 21:20:45 default-k8s-different-port-20210310205202-6496 kubelet[3401]: Trace[629366232]: [9.350317s] [9.350317s] END
	* Mar 10 21:21:28 default-k8s-different-port-20210310205202-6496 kubelet[3401]: E0310 21:21:28.346636    3401 controller.go:187] failed to update lease, error: Put "https://control-plane.minikube.internal:8444/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/default-k8s-different-port-20210310205202-6496?timeout=10s": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	* Mar 10 21:21:31 default-k8s-different-port-20210310205202-6496 kubelet[3401]: E0310 21:21:31.560679    3401 controller.go:187] failed to update lease, error: Operation cannot be fulfilled on leases.coordination.k8s.io "default-k8s-different-port-20210310205202-6496": the object has been modified; please apply your changes to the latest version and try again
	* Mar 10 21:21:42 default-k8s-different-port-20210310205202-6496 kubelet[3401]: E0310 21:21:42.477994    3401 cadvisor_stats_provider.go:401] Partial failure issuing cadvisor.ContainerInfoV2: partial failures: ["/kubepods/besteffort/pod5750970b-b6e6-4283-839d-d9eaddeb5c46/b1ac8f2ee561da93436846ae5a4a8495eac5f48b211e22ea2a65fb90987f1b3a": RecentStats: unable to find data in memory cache]
	* Mar 10 21:22:03 default-k8s-different-port-20210310205202-6496 kubelet[3401]: E0310 21:22:03.717140    3401 cadvisor_stats_provider.go:401] Partial failure issuing cadvisor.ContainerInfoV2: partial failures: ["/docker/0a23063c9caf23247a91e5aecf6bc099b6cf7bb25b976b7b1cd7fb301a122b63/kubepods/besteffort/pod5750970b-b6e6-4283-839d-d9eaddeb5c46/b1ac8f2ee561da93436846ae5a4a8495eac5f48b211e22ea2a65fb90987f1b3a": RecentStats: unable to find data in memory cache]
	* Mar 10 21:22:03 default-k8s-different-port-20210310205202-6496 kubelet[3401]: I0310 21:22:03.958840    3401 scope.go:95] [topologymanager] RemoveContainer - Container ID: c4f9eb4e103c512fa7f0880b5778e4450fffb041bd1aa1e73f14c271ebf1d6d6
	* Mar 10 21:22:04 default-k8s-different-port-20210310205202-6496 kubelet[3401]: I0310 21:22:04.020587    3401 scope.go:95] [topologymanager] RemoveContainer - Container ID: b1ac8f2ee561da93436846ae5a4a8495eac5f48b211e22ea2a65fb90987f1b3a
	* Mar 10 21:22:40 default-k8s-different-port-20210310205202-6496 kubelet[3401]: I0310 21:22:40.729246    3401 trace.go:205] Trace[689233256]: "iptables Monitor CANARY check" (10-Mar-2021 21:22:34.398) (total time: 6330ms):
	* Mar 10 21:22:40 default-k8s-different-port-20210310205202-6496 kubelet[3401]: Trace[689233256]: [6.3309429s] [6.3309429s] END
	* Mar 10 21:23:02 default-k8s-different-port-20210310205202-6496 kubelet[3401]: W0310 21:23:02.840783    3401 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/busybox through plugin: invalid network status for
	* Mar 10 21:23:24 default-k8s-different-port-20210310205202-6496 kubelet[3401]: W0310 21:23:24.440210    3401 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	* Mar 10 21:23:24 default-k8s-different-port-20210310205202-6496 kubelet[3401]: W0310 21:23:24.466259    3401 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
	* Mar 10 21:23:37 default-k8s-different-port-20210310205202-6496 kubelet[3401]: W0310 21:23:37.867444    3401 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/busybox through plugin: invalid network status for
	* 
	* ==> storage-provisioner [b1ac8f2ee561] <==
	* I0310 21:19:29.205469       1 storage_provisioner.go:115] Initializing the minikube storage provisioner...
	* I0310 21:19:34.831660       1 storage_provisioner.go:140] Storage provisioner initialized, now starting service!
	* I0310 21:19:34.837310       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
	* I0310 21:19:51.671120       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	* I0310 21:19:51.736026       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6c108f6b-3b35-40a4-8699-f723e5b7fdae", APIVersion:"v1", ResourceVersion:"916", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-different-port-20210310205202-6496_48a0dbaa-00ac-4c4f-bda6-bf215ce69226 became leader
	* I0310 21:19:51.740705       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20210310205202-6496_48a0dbaa-00ac-4c4f-bda6-bf215ce69226!
	* I0310 21:19:54.789719       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20210310205202-6496_48a0dbaa-00ac-4c4f-bda6-bf215ce69226!
	* I0310 21:21:24.970379       1 leaderelection.go:288] failed to renew lease kube-system/k8s.io-minikube-hostpath: failed to tryAcquireOrRenew context deadline exceeded
	* F0310 21:21:24.970473       1 controller.go:877] leaderelection lost
	* 
	* ==> storage-provisioner [d9ef8592e56c] <==
	* I0310 21:22:27.332180       1 storage_provisioner.go:115] Initializing the minikube storage provisioner...
	* I0310 21:22:28.363481       1 storage_provisioner.go:140] Storage provisioner initialized, now starting service!
	* I0310 21:22:28.363633       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
	* I0310 21:22:51.732341       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	* I0310 21:22:51.862141       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20210310205202-6496_86c78217-062d-4b0b-954c-3f969e81226b!
	* I0310 21:22:52.158460       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6c108f6b-3b35-40a4-8699-f723e5b7fdae", APIVersion:"v1", ResourceVersion:"997", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-different-port-20210310205202-6496_86c78217-062d-4b0b-954c-3f969e81226b became leader
	* I0310 21:23:00.488906       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20210310205202-6496_86c78217-062d-4b0b-954c-3f969e81226b!
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------|------------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| Command |                      Args                      |                    Profile                     |          User           | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------|------------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                             | force-systemd-env-20210310201637-6496          | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:49:41 GMT | Wed, 10 Mar 2021 20:50:17 GMT |
	|         | force-systemd-env-20210310201637-6496          |                                                |                         |         |                               |                               |
	| -p      | cert-options-20210310203249-6496               | cert-options-20210310203249-6496               | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:50:36 GMT | Wed, 10 Mar 2021 20:50:43 GMT |
	|         | ssh openssl x509 -text -noout -in              |                                                |                         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt          |                                                |                         |         |                               |                               |
	| delete  | -p                                             | cert-options-20210310203249-6496               | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:51:10 GMT | Wed, 10 Mar 2021 20:51:56 GMT |
	|         | cert-options-20210310203249-6496               |                                                |                         |         |                               |                               |
	| delete  | -p                                             | disable-driver-mounts-20210310205156-6496      | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:51:57 GMT | Wed, 10 Mar 2021 20:52:02 GMT |
	|         | disable-driver-mounts-20210310205156-6496      |                                                |                         |         |                               |                               |
	| -p      | force-systemd-flag-20210310203447-6496         | force-systemd-flag-20210310203447-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:53:03 GMT | Wed, 10 Mar 2021 20:53:44 GMT |
	|         | ssh docker info --format                       |                                                |                         |         |                               |                               |
	|         |                               |                                                |                         |         |                               |                               |
	| delete  | -p                                             | force-systemd-flag-20210310203447-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:54:07 GMT | Wed, 10 Mar 2021 20:54:36 GMT |
	|         | force-systemd-flag-20210310203447-6496         |                                                |                         |         |                               |                               |
	| stop    | -p                                             | old-k8s-version-20210310204459-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:02:19 GMT | Wed, 10 Mar 2021 21:02:40 GMT |
	|         | old-k8s-version-20210310204459-6496            |                                                |                         |         |                               |                               |
	|         | --alsologtostderr -v=3                         |                                                |                         |         |                               |                               |
	| addons  | enable dashboard -p                            | old-k8s-version-20210310204459-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:02:42 GMT | Wed, 10 Mar 2021 21:02:42 GMT |
	|         | old-k8s-version-20210310204459-6496            |                                                |                         |         |                               |                               |
	| -p      | embed-certs-20210310205017-6496                | embed-certs-20210310205017-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:07:05 GMT | Wed, 10 Mar 2021 21:08:33 GMT |
	|         | logs -n 25                                     |                                                |                         |         |                               |                               |
	| start   | -p                                             | stopped-upgrade-20210310201637-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:52:21 GMT | Wed, 10 Mar 2021 21:09:23 GMT |
	|         | stopped-upgrade-20210310201637-6496            |                                                |                         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                |                                                |                         |         |                               |                               |
	|         | -v=1 --driver=docker                           |                                                |                         |         |                               |                               |
	| logs    | -p                                             | stopped-upgrade-20210310201637-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:09:23 GMT | Wed, 10 Mar 2021 21:10:51 GMT |
	|         | stopped-upgrade-20210310201637-6496            |                                                |                         |         |                               |                               |
	| delete  | -p                                             | stopped-upgrade-20210310201637-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:10:52 GMT | Wed, 10 Mar 2021 21:11:13 GMT |
	|         | stopped-upgrade-20210310201637-6496            |                                                |                         |         |                               |                               |
	| delete  | -p                                             | running-upgrade-20210310201637-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:11:45 GMT | Wed, 10 Mar 2021 21:12:11 GMT |
	|         | running-upgrade-20210310201637-6496            |                                                |                         |         |                               |                               |
	| stop    | -p                                             | embed-certs-20210310205017-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:12:03 GMT | Wed, 10 Mar 2021 21:12:38 GMT |
	|         | embed-certs-20210310205017-6496                |                                                |                         |         |                               |                               |
	|         | --alsologtostderr -v=3                         |                                                |                         |         |                               |                               |
	| addons  | enable dashboard -p                            | embed-certs-20210310205017-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:12:40 GMT | Wed, 10 Mar 2021 21:12:41 GMT |
	|         | embed-certs-20210310205017-6496                |                                                |                         |         |                               |                               |
	| -p      | kubernetes-upgrade-20210310201637-6496         | kubernetes-upgrade-20210310201637-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:11:50 GMT | Wed, 10 Mar 2021 21:15:02 GMT |
	|         | logs -n 25                                     |                                                |                         |         |                               |                               |
	| delete  | -p                                             | kubernetes-upgrade-20210310201637-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:15:15 GMT | Wed, 10 Mar 2021 21:15:46 GMT |
	|         | kubernetes-upgrade-20210310201637-6496         |                                                |                         |         |                               |                               |
	| delete  | -p                                             | missing-upgrade-20210310201637-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:15:38 GMT | Wed, 10 Mar 2021 21:16:03 GMT |
	|         | missing-upgrade-20210310201637-6496            |                                                |                         |         |                               |                               |
	| -p      | default-k8s-different-port-20210310205202-6496 | default-k8s-different-port-20210310205202-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:12:03 GMT | Wed, 10 Mar 2021 21:16:15 GMT |
	|         | logs -n 25                                     |                                                |                         |         |                               |                               |
	| stop    | -p                                             | no-preload-20210310204947-6496                 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:15:57 GMT | Wed, 10 Mar 2021 21:16:31 GMT |
	|         | no-preload-20210310204947-6496                 |                                                |                         |         |                               |                               |
	|         | --alsologtostderr -v=3                         |                                                |                         |         |                               |                               |
	| addons  | enable dashboard -p                            | no-preload-20210310204947-6496                 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:16:33 GMT | Wed, 10 Mar 2021 21:16:34 GMT |
	|         | no-preload-20210310204947-6496                 |                                                |                         |         |                               |                               |
	| delete  | -p                                             | old-k8s-version-20210310204459-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:18:53 GMT | Wed, 10 Mar 2021 21:19:16 GMT |
	|         | old-k8s-version-20210310204459-6496            |                                                |                         |         |                               |                               |
	| delete  | -p                                             | no-preload-20210310204947-6496                 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:20:59 GMT | Wed, 10 Mar 2021 21:21:26 GMT |
	|         | no-preload-20210310204947-6496                 |                                                |                         |         |                               |                               |
	| -p      | embed-certs-20210310205017-6496                | embed-certs-20210310205017-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:20:59 GMT | Wed, 10 Mar 2021 21:24:36 GMT |
	|         | logs -n 25                                     |                                                |                         |         |                               |                               |
	| delete  | -p                                             | embed-certs-20210310205017-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:24:57 GMT | Wed, 10 Mar 2021 21:25:18 GMT |
	|         | embed-certs-20210310205017-6496                |                                                |                         |         |                               |                               |
	|---------|------------------------------------------------|------------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/03/10 21:25:19
	* Running on machine: windows-server-1
	* Binary: Built with gc go1.16 for windows/amd64
	* Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	* I0310 21:25:19.133084    9020 out.go:239] Setting OutFile to fd 1756 ...
	* I0310 21:25:19.135222    9020 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 21:25:19.135222    9020 out.go:252] Setting ErrFile to fd 1852...
	* I0310 21:25:19.135222    9020 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 21:25:19.152022    9020 out.go:246] Setting JSON to false
	* I0310 21:25:19.156890    9020 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":36985,"bootTime":1615374534,"procs":116,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	* W0310 21:25:19.157501    9020 start.go:116] gopshost.Virtualization returned error: not implemented yet
	* I0310 21:25:19.172510    9020 out.go:129] * [kindnet-20210310212518-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	* I0310 21:25:19.175856    9020 out.go:129]   - MINIKUBE_LOCATION=10722
	* I0310 21:25:19.182537    9020 driver.go:323] Setting default libvirt URI to qemu:///system
	* I0310 21:25:19.743858    9020 docker.go:119] docker version: linux-20.10.2
	* I0310 21:25:19.752225    9020 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 21:25:20.728372    9020 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:90 OomKillDisable:true NGoroutines:71 SystemTime:2021-03-10 21:25:20.2965852 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://i
ndex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:
[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 21:25:20.737118    9020 out.go:129] * Using the docker driver based on user configuration
	* I0310 21:25:20.737414    9020 start.go:276] selected driver: docker
	* I0310 21:25:20.737414    9020 start.go:718] validating driver "docker" against <nil>
	* I0310 21:25:20.737414    9020 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	* I0310 21:25:21.867711    9020 out.go:129] 
	* W0310 21:25:21.867711    9020 out.go:191] X Docker Desktop only has 20001MiB available, you may encounter application deployment failures.
	* W0310 21:25:21.868710    9020 out.go:191] * Suggestion: 
	* 
	*     1. Open the "Docker Desktop" menu by clicking the Docker icon in the system tray
	*     2. Click "Settings"
	*     3. Click "Resources"
	*     4. Increase "Memory" slider bar to 2.25 GB or higher
	*     5. Click "Apply & Restart"
	* W0310 21:25:21.868710    9020 out.go:191] * Documentation: https://docs.docker.com/docker-for-windows/#resources
	* I0310 21:25:21.872840    9020 out.go:129] 
	* I0310 21:25:21.896212    9020 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 21:25:22.825433    9020 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:90 OomKillDisable:true NGoroutines:71 SystemTime:2021-03-10 21:25:22.4517093 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://i
ndex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:
[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 21:25:22.826095    9020 start_flags.go:253] no existing cluster config was found, will generate one from the flags 
	* I0310 21:25:22.827624    9020 start_flags.go:717] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	* I0310 21:25:22.831822    9020 cni.go:74] Creating CNI manager for "kindnet"
	* I0310 21:25:22.831822    9020 start_flags.go:393] Found "CNI" CNI - setting NetworkPlugin=cni
	* I0310 21:25:22.832091    9020 start_flags.go:398] config:
	* {Name:kindnet-20210310212518-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:kindnet-20210310212518-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket:
NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 21:25:22.835545    9020 out.go:129] * Starting control plane node kindnet-20210310212518-6496 in cluster kindnet-20210310212518-6496
	* I0310 21:25:23.464130    9020 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	* I0310 21:25:23.464623    9020 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	* I0310 21:25:23.464984    9020 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 21:25:23.465690    9020 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 21:25:23.466115    9020 cache.go:54] Caching tarball of preloaded images
	* I0310 21:25:23.466480    9020 preload.go:131] Found C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	* I0310 21:25:23.466693    9020 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on docker
	* I0310 21:25:23.467119    9020 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\kindnet-20210310212518-6496\config.json ...
	* I0310 21:25:23.467579    9020 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\kindnet-20210310212518-6496\config.json: {Name:mk7fa7cd396dbde3f1faddb1c3bfe41f85b8368d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 21:25:23.484672    9020 cache.go:185] Successfully downloaded all kic artifacts
	* I0310 21:25:23.485524    9020 start.go:313] acquiring machines lock for kindnet-20210310212518-6496: {Name:mkbdbdc880a7102685c8f1577e1afbb64ac3b053 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:25:23.485953    9020 start.go:317] acquired machines lock for "kindnet-20210310212518-6496" in 429.1µs
	* I0310 21:25:23.486191    9020 start.go:89] Provisioning new machine with config: &{Name:kindnet-20210310212518-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:kindnet-20210310212518-6496 Namespace:default APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
	* I0310 21:25:23.486492    9020 start.go:126] createHost starting for "" (driver="docker")
	* I0310 21:25:23.490814    9020 out.go:150] * Creating docker container (CPUs=2, Memory=1800MB) ...
	* I0310 21:25:23.491566    9020 start.go:160] libmachine.API.Create for "kindnet-20210310212518-6496" (driver="docker")
	* I0310 21:25:23.497266    9020 client.go:168] LocalClient.Create starting
	* I0310 21:25:23.497266    9020 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\ca.pem
	* I0310 21:25:23.497266    9020 main.go:121] libmachine: Decoding PEM data...
	* I0310 21:25:23.498271    9020 main.go:121] libmachine: Parsing certificate...
	* I0310 21:25:23.498271    9020 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\cert.pem
	* I0310 21:25:23.498271    9020 main.go:121] libmachine: Decoding PEM data...
	* I0310 21:25:23.498271    9020 main.go:121] libmachine: Parsing certificate...
	* I0310 21:25:23.531644    9020 cli_runner.go:115] Run: docker network inspect kindnet-20210310212518-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	* W0310 21:25:24.155279    9020 cli_runner.go:162] docker network inspect kindnet-20210310212518-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	* I0310 21:25:24.171193    9020 network_create.go:240] running [docker network inspect kindnet-20210310212518-6496] to gather additional debugging logs...
	* I0310 21:25:24.171193    9020 cli_runner.go:115] Run: docker network inspect kindnet-20210310212518-6496
	* W0310 21:25:24.771642    9020 cli_runner.go:162] docker network inspect kindnet-20210310212518-6496 returned with exit code 1
	* I0310 21:25:24.771642    9020 network_create.go:243] error running [docker network inspect kindnet-20210310212518-6496]: docker network inspect kindnet-20210310212518-6496: exit status 1
	* stdout:
	* []
	* 
	* stderr:
	* Error: No such network: kindnet-20210310212518-6496
	* I0310 21:25:24.771642    9020 network_create.go:245] output of [docker network inspect kindnet-20210310212518-6496]: -- stdout --
	* []
	* 
	* -- /stdout --
	* ** stderr ** 
	* Error: No such network: kindnet-20210310212518-6496
	* 
	* ** /stderr **
	* I0310 21:25:24.783530    9020 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	* I0310 21:25:25.405735    9020 network.go:193] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	* I0310 21:25:25.405735    9020 network_create.go:91] attempt to create network 192.168.49.0/24 with subnet: kindnet-20210310212518-6496 and gateway 192.168.49.1 and MTU of 1500 ...
	* I0310 21:25:25.413187    9020 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20210310212518-6496
	* I0310 21:25:26.454282    9020 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20210310212518-6496: (1.0407404s)
	* I0310 21:25:26.455324    9020 kic.go:102] calculated static IP "192.168.49.97" for the "kindnet-20210310212518-6496" container
	* I0310 21:25:26.478701    9020 cli_runner.go:115] Run: docker ps -a --format 
	* I0310 21:25:27.093079    9020 cli_runner.go:115] Run: docker volume create kindnet-20210310212518-6496 --label name.minikube.sigs.k8s.io=kindnet-20210310212518-6496 --label created_by.minikube.sigs.k8s.io=true
	* I0310 21:25:27.778804    9020 oci.go:102] Successfully created a docker volume kindnet-20210310212518-6496
	* I0310 21:25:27.781396    9020 cli_runner.go:115] Run: docker run --rm --name kindnet-20210310212518-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20210310212518-6496 --entrypoint /usr/bin/test -v kindnet-20210310212518-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib
	* I0310 21:25:32.576233    9020 cli_runner.go:168] Completed: docker run --rm --name kindnet-20210310212518-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20210310212518-6496 --entrypoint /usr/bin/test -v kindnet-20210310212518-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib: (4.7948676s)
	* I0310 21:25:32.576802    9020 oci.go:106] Successfully prepared a docker volume kindnet-20210310212518-6496
	* I0310 21:25:32.576802    9020 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 21:25:32.577293    9020 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 21:25:32.577293    9020 kic.go:175] Starting extracting preloaded images to volume ...
	* I0310 21:25:32.584923    9020 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20210310212518-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir
	* I0310 21:25:32.584923    9020 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* W0310 21:25:33.297424    9020 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20210310212518-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	* I0310 21:25:33.297424    9020 kic.go:182] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20210310212518-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	* stdout:
	* 
	* stderr:
	* docker: Error response from daemon: status code not OK but 500: System.Exception: The notification platform is unavailable.
	* 
	* The notification platform is unavailable.
	* 
	* ���?   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)
	*    at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.WPF\PromptShareDirectory.cs:line 53
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__8.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 95
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 55
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.HttpApi\Controllers\FilesharingController.cs:line 21
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()
	* CreateToastNotifier
	* Windows.UI, Version=255.255.255.255, Culture=neutral, PublicKeyToken=null, ContentType=WindowsRuntime
	* Windows.UI.Notifications.ToastNotificationManager
	* Windows.UI.Notifications.ToastNotifier CreateToastNotifier(System.String)
	* RestrictedDescription: The notification platform is unavailable.
	* See 'docker run --help'.
	* I0310 21:25:33.678447    9020 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0935307s)
	* I0310 21:25:33.679066    9020 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:91 OomKillDisable:true NGoroutines:71 SystemTime:2021-03-10 21:25:33.1546865 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://i
ndex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:
[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 21:25:33.693718    9020 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	* I0310 21:25:34.850661    9020 cli_runner.go:168] Completed: docker info --format "'{{json .SecurityOptions}}'": (1.1569497s)
	* I0310 21:25:34.863151    9020 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-20210310212518-6496 --name kindnet-20210310212518-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20210310212518-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-20210310212518-6496 --network kindnet-20210310212518-6496 --ip 192.168.49.97 --volume kindnet-20210310212518-6496:/var --security-opt apparmor=unconfined --memory=1800mb --memory-swap=1800mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
	* I0310 21:25:38.433874    9020 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-20210310212518-6496 --name kindnet-20210310212518-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20210310212518-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-20210310212518-6496 --network kindnet-20210310212518-6496 --ip 192.168.49.97 --volume kindnet-20210310212518-6496:/var --security-opt apparmor=unconfined --memory=1800mb --memory-swap=1800mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e: (3.570038s)
	* I0310 21:25:38.445063    9020 cli_runner.go:115] Run: docker container inspect kindnet-20210310212518-6496 --format=
	* I0310 21:25:39.083330    9020 cli_runner.go:115] Run: docker container inspect kindnet-20210310212518-6496 --format=
	* I0310 21:25:39.699176    9020 cli_runner.go:115] Run: docker exec kindnet-20210310212518-6496 stat /var/lib/dpkg/alternatives/iptables
	* I0310 21:25:40.825971    9020 cli_runner.go:168] Completed: docker exec kindnet-20210310212518-6496 stat /var/lib/dpkg/alternatives/iptables: (1.1263832s)
	* I0310 21:25:40.825971    9020 oci.go:278] the created container "kindnet-20210310212518-6496" has a running status.
	* I0310 21:25:40.825971    9020 kic.go:206] Creating ssh key for kic: C:\Users\jenkins\.minikube\machines\kindnet-20210310212518-6496\id_rsa...
	* I0310 21:25:40.987468    9020 kic_runner.go:188] docker (temp): C:\Users\jenkins\.minikube\machines\kindnet-20210310212518-6496\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	* I0310 21:25:42.006890    9020 cli_runner.go:115] Run: docker container inspect kindnet-20210310212518-6496 --format=
	* I0310 21:25:42.643577    9020 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	* I0310 21:25:42.643577    9020 kic_runner.go:115] Args: [docker exec --privileged kindnet-20210310212518-6496 chown docker:docker /home/docker/.ssh/authorized_keys]
	* I0310 21:25:43.646440    9020 kic.go:240] ensuring only current user has permissions to key file located at : C:\Users\jenkins\.minikube\machines\kindnet-20210310212518-6496\id_rsa...
	* I0310 21:25:46.096020   16712 out.go:150]   - Configuring RBAC rules ...
	* I0310 21:25:44.460620    9020 cli_runner.go:115] Run: docker container inspect kindnet-20210310212518-6496 --format=
	* I0310 21:25:45.118327    9020 machine.go:88] provisioning docker machine ...
	* I0310 21:25:45.118662    9020 ubuntu.go:169] provisioning hostname "kindnet-20210310212518-6496"
	* I0310 21:25:45.127519    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	* I0310 21:25:45.733245    9020 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:25:45.739492    9020 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55213 <nil> <nil>}
	* I0310 21:25:45.739492    9020 main.go:121] libmachine: About to run SSH command:
	* sudo hostname kindnet-20210310212518-6496 && echo "kindnet-20210310212518-6496" | sudo tee /etc/hostname
	* I0310 21:25:46.892778    9020 main.go:121] libmachine: SSH cmd err, output: <nil>: kindnet-20210310212518-6496
	* 
	* I0310 21:25:46.901274    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	* I0310 21:25:47.510414    9020 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:25:47.513743    9020 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55213 <nil> <nil>}
	* I0310 21:25:47.513997    9020 main.go:121] libmachine: About to run SSH command:
	* 
	* 		if ! grep -xq '.*\skindnet-20210310212518-6496' /etc/hosts; then
	* 			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	* 				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-20210310212518-6496/g' /etc/hosts;
	* 			else 
	* 				echo '127.0.1.1 kindnet-20210310212518-6496' | sudo tee -a /etc/hosts; 
	* 			fi
	* 		fi
	* I0310 21:25:48.107941    9020 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	* I0310 21:25:48.107941    9020 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	* I0310 21:25:48.107941    9020 ubuntu.go:177] setting up certificates
	* I0310 21:25:48.107941    9020 provision.go:83] configureAuth start
	* I0310 21:25:48.116027    9020 cli_runner.go:115] Run: docker container inspect -f "" kindnet-20210310212518-6496
	* I0310 21:25:48.708691    9020 provision.go:137] copyHostCerts
	* I0310 21:25:48.709980    9020 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	* I0310 21:25:48.709980    9020 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	* I0310 21:25:48.710571    9020 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	* I0310 21:25:48.713950    9020 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	* I0310 21:25:48.713950    9020 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	* I0310 21:25:48.713950    9020 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	* I0310 21:25:48.716125    9020 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	* I0310 21:25:48.716125    9020 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	* I0310 21:25:48.717581    9020 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	* I0310 21:25:48.720461    9020 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.kindnet-20210310212518-6496 san=[192.168.49.97 127.0.0.1 localhost 127.0.0.1 minikube kindnet-20210310212518-6496]
	* I0310 21:25:49.098088    9020 provision.go:165] copyRemoteCerts
	* I0310 21:25:49.109248    9020 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	* I0310 21:25:49.116427    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	* I0310 21:25:49.653578    9020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55213 SSHKeyPath:C:\Users\jenkins\.minikube\machines\kindnet-20210310212518-6496\id_rsa Username:docker}
	* I0310 21:25:50.174980    9020 ssh_runner.go:189] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.0657385s)
	* I0310 21:25:50.176116    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	* I0310 21:25:50.666393    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	* I0310 21:25:51.152917    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1253 bytes)
	* I0310 21:25:51.524341    9020 provision.go:86] duration metric: configureAuth took 3.4164203s
	* I0310 21:25:51.524341    9020 ubuntu.go:193] setting minikube options for container-runtime
	* I0310 21:25:51.535902    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	* I0310 21:25:52.068791    9020 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:25:52.069183    9020 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55213 <nil> <nil>}
	* I0310 21:25:52.069183    9020 main.go:121] libmachine: About to run SSH command:
	* df --output=fstype / | tail -n 1
	* I0310 21:25:52.713252    9020 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	* 
	* I0310 21:25:52.713449    9020 ubuntu.go:71] root file system type: overlay
	* I0310 21:25:52.714163    9020 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	* I0310 21:25:52.714988    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	* I0310 21:25:53.237040    9020 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:25:53.237493    9020 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55213 <nil> <nil>}
	* I0310 21:25:53.238129    9020 main.go:121] libmachine: About to run SSH command:
	* sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP \$MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* " | sudo tee /lib/systemd/system/docker.service.new

                                                
                                                
-- /stdout --
** stderr ** 
	E0310 21:25:16.852989    7328 out.go:340] unable to execute * 2021-03-10 21:24:51.589519 W | etcdserver: request "header:<ID:11303041234760732539 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1069 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1056 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>" with result "size:16" took too long (187.5436ms) to execute
	: html/template:* 2021-03-10 21:24:51.589519 W | etcdserver: request "header:<ID:11303041234760732539 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1069 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1056 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>" with result "size:16" took too long (187.5436ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 21:25:57.244059    7328 out.go:335] unable to parse "* I0310 21:25:19.752225    9020 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 21:25:19.752225    9020 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 21:25:57.308420    7328 out.go:335] unable to parse "* I0310 21:25:21.896212    9020 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 21:25:21.896212    9020 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 21:25:57.475069    7328 out.go:340] unable to execute * I0310 21:25:23.531644    9020 cli_runner.go:115] Run: docker network inspect kindnet-20210310212518-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	: template: * I0310 21:25:23.531644    9020 cli_runner.go:115] Run: docker network inspect kindnet-20210310212518-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	:1:283: executing "* I0310 21:25:23.531644    9020 cli_runner.go:115] Run: docker network inspect kindnet-20210310212518-6496 --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\"\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 21:25:57.494989    7328 out.go:340] unable to execute * W0310 21:25:24.155279    9020 cli_runner.go:162] docker network inspect kindnet-20210310212518-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	: template: * W0310 21:25:24.155279    9020 cli_runner.go:162] docker network inspect kindnet-20210310212518-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	:1:278: executing "* W0310 21:25:24.155279    9020 cli_runner.go:162] docker network inspect kindnet-20210310212518-6496 --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\" returned with exit code 1\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 21:25:57.570371    7328 out.go:340] unable to execute * I0310 21:25:24.783530    9020 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	: template: * I0310 21:25:24.783530    9020 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	:1:262: executing "* I0310 21:25:24.783530    9020 cli_runner.go:115] Run: docker network inspect bridge --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\"\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 21:25:57.632263    7328 out.go:335] unable to parse "* I0310 21:25:32.584923    9020 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 21:25:32.584923    9020 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 21:25:57.838620    7328 out.go:335] unable to parse "* I0310 21:25:33.678447    9020 cli_runner.go:168] Completed: docker system info --format \"{{json .}}\": (1.0935307s)\n": template: * I0310 21:25:33.678447    9020 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0935307s)
	:1: function "json" not defined - returning raw string.
	E0310 21:25:57.866118    7328 out.go:335] unable to parse "* I0310 21:25:33.693718    9020 cli_runner.go:115] Run: docker info --format \"'{{json .SecurityOptions}}'\"\n": template: * I0310 21:25:33.693718    9020 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	:1: function "json" not defined - returning raw string.
	E0310 21:25:57.874114    7328 out.go:335] unable to parse "* I0310 21:25:34.850661    9020 cli_runner.go:168] Completed: docker info --format \"'{{json .SecurityOptions}}'\": (1.1569497s)\n": template: * I0310 21:25:34.850661    9020 cli_runner.go:168] Completed: docker info --format "'{{json .SecurityOptions}}'": (1.1569497s)
	:1: function "json" not defined - returning raw string.
	E0310 21:25:57.939982    7328 out.go:340] unable to execute * I0310 21:25:45.127519    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	: template: * I0310 21:25:45.127519    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	:1:96: executing "* I0310 21:25:45.127519    9020 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" kindnet-20210310212518-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:25:57.952062    7328 out.go:335] unable to parse "* I0310 21:25:45.739492    9020 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55213 <nil> <nil>}\n": template: * I0310 21:25:45.739492    9020 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55213 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:25:57.978914    7328 out.go:340] unable to execute * I0310 21:25:46.901274    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	: template: * I0310 21:25:46.901274    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	:1:96: executing "* I0310 21:25:46.901274    9020 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" kindnet-20210310212518-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:25:57.988134    7328 out.go:335] unable to parse "* I0310 21:25:47.513743    9020 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55213 <nil> <nil>}\n": template: * I0310 21:25:47.513743    9020 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55213 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:25:58.165716    7328 out.go:340] unable to execute * I0310 21:25:49.116427    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	: template: * I0310 21:25:49.116427    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	:1:96: executing "* I0310 21:25:49.116427    9020 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" kindnet-20210310212518-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:25:58.202835    7328 out.go:340] unable to execute * I0310 21:25:51.535902    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	: template: * I0310 21:25:51.535902    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	:1:96: executing "* I0310 21:25:51.535902    9020 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" kindnet-20210310212518-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:25:58.217179    7328 out.go:335] unable to parse "* I0310 21:25:52.069183    9020 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55213 <nil> <nil>}\n": template: * I0310 21:25:52.069183    9020 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55213 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:25:58.277107    7328 out.go:340] unable to execute * I0310 21:25:52.714988    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	: template: * I0310 21:25:52.714988    9020 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20210310212518-6496
	:1:96: executing "* I0310 21:25:52.714988    9020 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" kindnet-20210310212518-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:25:58.290378    7328 out.go:335] unable to parse "* I0310 21:25:53.237493    9020 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55213 <nil> <nil>}\n": template: * I0310 21:25:53.237493    9020 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55213 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.

** /stderr **
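The repeated `unable to parse … function "json" not defined - returning raw string` errors above come from minikube's `out` package feeding captured log lines through Go's `text/template`. A minimal sketch of why those particular lines fail (this is an illustration, not minikube's actual code): `json` is a function Docker defines for its own `--format` templates, so any logged command containing `{{json .}}` is rejected by plain `text/template` at parse time.

```go
package main

import (
	"fmt"
	"text/template"
)

// parseLine mimics treating a raw log line as a Go template, the way the
// out package does before printing it. Lines that embed Docker CLI format
// strings such as {{json .}} fail, because "json" is not a function known
// to text/template -- matching the errors above, minikube then falls back
// to "returning raw string".
func parseLine(line string) error {
	_, err := template.New("line").Parse(line)
	return err
}

func main() {
	err := parseLine(`Run: docker system info --format "{{json .}}"`)
	fmt.Println(err) // template: line:1: function "json" not defined
}
```

The `unexpected "{" in command` and `index of untyped nil` variants are the same mechanism tripping over other literal braces and over `index` calls against maps that are nil in the template's empty data context.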
helpers_test.go:250: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20210310205202-6496 -n default-k8s-different-port-20210310205202-6496
helpers_test.go:250: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20210310205202-6496 -n default-k8s-different-port-20210310205202-6496: (7.1613744s)
helpers_test.go:257: (dbg) Run:  kubectl --context default-k8s-different-port-20210310205202-6496 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:263: non-running pods: 
helpers_test.go:265: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: describe non-running pods <======
helpers_test.go:268: (dbg) Run:  kubectl --context default-k8s-different-port-20210310205202-6496 describe pod 
helpers_test.go:268: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20210310205202-6496 describe pod : exit status 1 (245.7298ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:270: kubectl --context default-k8s-different-port-20210310205202-6496 describe pod : exit status 1
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:226: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: docker inspect <======
helpers_test.go:227: (dbg) Run:  docker inspect default-k8s-different-port-20210310205202-6496
helpers_test.go:231: (dbg) docker inspect default-k8s-different-port-20210310205202-6496:

-- stdout --
	[
	    {
	        "Id": "0a23063c9caf23247a91e5aecf6bc099b6cf7bb25b976b7b1cd7fb301a122b63",
	        "Created": "2021-03-10T20:52:32.7671922Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 234137,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-03-10T20:52:38.8381413Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a776c544501ab7f8d55c0f9d8df39bc284df5e744ef1ab4fa59bbd753c98d5f6",
	        "ResolvConfPath": "/var/lib/docker/containers/0a23063c9caf23247a91e5aecf6bc099b6cf7bb25b976b7b1cd7fb301a122b63/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0a23063c9caf23247a91e5aecf6bc099b6cf7bb25b976b7b1cd7fb301a122b63/hostname",
	        "HostsPath": "/var/lib/docker/containers/0a23063c9caf23247a91e5aecf6bc099b6cf7bb25b976b7b1cd7fb301a122b63/hosts",
	        "LogPath": "/var/lib/docker/containers/0a23063c9caf23247a91e5aecf6bc099b6cf7bb25b976b7b1cd7fb301a122b63/0a23063c9caf23247a91e5aecf6bc099b6cf7bb25b976b7b1cd7fb301a122b63-json.log",
	        "Name": "/default-k8s-different-port-20210310205202-6496",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20210310205202-6496:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/49def441c4e7bb61f97cac838d6cef129db59d4118b65b5e2a40bbb0d1251cfe-init/diff:/var/lib/docker/overlay2/e5312d15a8a915e93a04215cc746ab0f3ee777399eec34d39c62e1159ded6475/diff:/var/lib/docker/overlay2/7a0a882052451678339ecb9d7440ec6d5af2189761df37cbde268f01c77df460/diff:/var/lib/docker/overlay2/746bde091f748e661ba6c95c88447941058b3ed556ae8093cbbb4e946b8cc8fc/diff:/var/lib/docker/overlay2/fc816087ea38e2594891ac87d2dada91401ed5005ffde006568049c9be7a9cb3/diff:/var/lib/docker/overlay2/c938fe5d14accfe7fb1cfd038f5f0c36e7c4e36df3f270be6928a759dfc0318c/diff:/var/lib/docker/overlay2/79f1d61f3af3e525b3f9a25c7bdaddec8275b7d0abab23c045e084119ae755dc/diff:/var/lib/docker/overlay2/90cf1530750ebebdcb04a136dc1eff0ae9c5f7d0f826d2942e3bc67cc0d42206/diff:/var/lib/docker/overlay2/c4edb0a62bb52174a4a17530aa58139aaca1b2f67bef36b9ac59af93c93e2c94/diff:/var/lib/docker/overlay2/684c18bb05910b09352e8708238b2fa6512de91e323e59bd78eacb12954baeb1/diff:/var/lib/docker/overlay2/2e3b944d36fea0e106f5f8885b981d040914c224a656b6480e4ccf00c624527c/diff:/var/lib/docker/overlay2/87489103e2d5dffa2fc32cae802418fe57adcbe87d86e2828e51be8620ef72d6/diff:/var/lib/docker/overlay2/1ceb557df677600f55a42e739f85c2aaa6ce2a1311fb93d5cc655d3e5c25ad62/diff:/var/lib/docker/overlay2/a49e38adf04466e822fa1b2fef7a5de41bb43b706f64d6d440f7e9f95f86634d/diff:/var/lib/docker/overlay2/9dc51530b3628075c33d53a96d8db831d206f65190ca62a4730d525c9d2433bc/diff:/var/lib/docker/overlay2/53f43d443cc953bcbb3b9fe305be45d40de2233dd6a1e94a6e0120c143df01b9/diff:/var/lib/docker/overlay2/5449e801ebacbe613538db4de658f2419f742a0327f7c0f5079ee525f64c2f45/diff:/var/lib/docker/overlay2/ea06a62429770eade2e9df8b04495201586fe6a867b2d3f827db06031f237171/diff:/var/lib/docker/overlay2/d6c5e6e0bb04112b08a45e5a42f364eaa6a1d4d0384b8833c6f268a32b51f9fd/diff:/var/lib/docker/overlay2/6ccb699e5aa7954f17536e35273361a64a06f22e3951b5adad1acd3645302d27/diff:/var/lib/docker/overlay2/004fc7885cd3ddf4f532a6047a393b87947996f738f36ee3e048130b82676264/diff:/var/lib/docker/overlay2/28ff0102771d8fb3c2957c482cc5dde1a6d041144f1be78d385fd4a305b89048/diff:/var/lib/docker/overlay2/7cc8b00a8717390d3cf217bc23ebf0a54386c4dda751f8efe67a113c5f724000/diff:/var/lib/docker/overlay2/4f20f5888dc26fc4b177fad4ffbb2c9e5094bbe36dffc15530b13ee98b5ee0de/diff:/var/lib/docker/overlay2/5070ba0e8a3df0c0cb4c34bb528c4e1aa4cf13d689f2bbbb0a4033d021c3be55/diff:/var/lib/docker/overlay2/85fb29f9d3603bf7f2eb144a096a9fa432e57ecf575b0c26cc8e06a79e791202/diff:/var/lib/docker/overlay2/27786674974d44dc21a18719c8c334da44a78f825923f5e86233f5acb0fb9a6b/diff:/var/lib/docker/overlay2/6161fd89f27732c46f547c0e38ff9f16ab0dded81cef66938df3935f21afad40/diff:/var/lib/docker/overlay2/222f85b8439a41d62b9939a4060303543930a89e81bc2fbb3f5211b3bcc73501/diff:/var/lib/docker/overlay2/0ab769dc143c949b9367c48721351c8571d87199847b23acedce0134a8cf4d0d/diff:/var/lib/docker/overlay2/a3d1b5f3c1ccf55415e08e306d69739f2d6a4346a4b20ff9273ac1417c146a72/diff:/var/lib/docker/overlay2/a6111408a4deb25c6259bdabf49b0b77a4ae38a1c9c76c9dd238ab00e4377edd/diff:/var/lib/docker/overlay2/beb22abaee6a1cc2ce6935aa31a8ea6aa14ba428d35b5c2423d1d9e4b5b1e88e/diff:/var/lib/docker/overlay2/0c228b1d0cd4cc1d3df10a7397af2a91846bfbad745d623822dc200ff7ddd4f7/diff:/var/lib/docker/overlay2/3c7f265116e8b14ac4c171583da8e68ad42c63e9faa735c10d5b01c2889c03d8/diff:/var/lib/docker/overlay2/85c9b77072d5575bcf4bc334d0e85cccec247e2ff21bc8a181ee7c1411481a92/diff:/var/lib/docker/overlay2/1b4797f63f1bf0acad8cd18715c737239cbce844810baf1378b65224c01957ff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49def441c4e7bb61f97cac838d6cef129db59d4118b65b5e2a40bbb0d1251cfe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49def441c4e7bb61f97cac838d6cef129db59d4118b65b5e2a40bbb0d1251cfe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49def441c4e7bb61f97cac838d6cef129db59d4118b65b5e2a40bbb0d1251cfe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20210310205202-6496",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20210310205202-6496/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20210310205202-6496",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20210310205202-6496",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20210310205202-6496",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7ae5c85c25a3c7d419e72b038d8b798aa18b164a3a6e0b21ae322faf61bd068b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55156"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55155"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55154"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55153"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7ae5c85c25a3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "7194d14c86ce93fa12517fc138a3f9fa7090df0ea11ec55d163012ce5bbfba6d",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.9",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:09",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "08a9fabe5ecafdd8ee96dd0a14d1906037e82623ac7365c62552213da5f59cf7",
	                    "EndpointID": "7194d14c86ce93fa12517fc138a3f9fa7090df0ea11ec55d163012ce5bbfba6d",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.9",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:09",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
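The `NetworkSettings.Ports` block in the inspect output above is exactly what the failing `-f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` template tries to read. As a minimal sketch (an illustration against an abridged copy of that JSON, not minikube's code), the same lookup in Go looks like this, including the empty result for a mapping that is absent, which is the "index of untyped nil" case the template errors report:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Abridged fragment of the docker inspect output above -- only the
// fields the port-lookup template actually reads.
const inspectJSON = `[{"NetworkSettings":{"Ports":{"22/tcp":[{"HostIp":"127.0.0.1","HostPort":"55156"}]}}}]`

type portBinding struct {
	HostIp   string
	HostPort string
}

type inspected struct {
	NetworkSettings struct {
		Ports map[string][]portBinding
	}
}

// hostPort returns the first host port bound to the given container port,
// or "" when no such mapping exists.
func hostPort(data []byte, port string) string {
	var cs []inspected
	if err := json.Unmarshal(data, &cs); err != nil || len(cs) == 0 {
		return ""
	}
	bindings := cs[0].NetworkSettings.Ports[port]
	if len(bindings) == 0 {
		return ""
	}
	return bindings[0].HostPort
}

func main() {
	fmt.Println(hostPort([]byte(inspectJSON), "22/tcp")) // 55156
}
```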
helpers_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20210310205202-6496 -n default-k8s-different-port-20210310205202-6496
helpers_test.go:235: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20210310205202-6496 -n default-k8s-different-port-20210310205202-6496: (11.3725861s)
helpers_test.go:240: <<< TestStartStop/group/default-k8s-different-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:241: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: minikube logs <======
helpers_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe -p default-k8s-different-port-20210310205202-6496 logs -n 25

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
helpers_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe -p default-k8s-different-port-20210310205202-6496 logs -n 25: (3m3.9980359s)
helpers_test.go:248: TestStartStop/group/default-k8s-different-port/serial/DeployApp logs: 
-- stdout --
	* ==> Docker <==
	* -- Logs begin at Wed 2021-03-10 20:52:50 UTC, end at Wed 2021-03-10 21:27:48 UTC. --
	* Mar 10 20:56:29 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T20:56:29.130829100Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	* Mar 10 20:56:29 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T20:56:29.131104300Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Mar 10 20:56:29 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T20:56:29.575176400Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	* Mar 10 20:56:29 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T20:56:29.746203500Z" level=info msg="Loading containers: start."
	* Mar 10 20:56:33 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T20:56:33.001193200Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	* Mar 10 20:56:34 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T20:56:34.156088300Z" level=info msg="Loading containers: done."
	* Mar 10 20:56:34 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T20:56:34.887786200Z" level=info msg="Docker daemon" commit=46229ca graphdriver(s)=overlay2 version=20.10.3
	* Mar 10 20:56:34 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T20:56:34.887917600Z" level=info msg="Daemon has completed initialization"
	* Mar 10 20:56:35 default-k8s-different-port-20210310205202-6496 systemd[1]: Started Docker Application Container Engine.
	* Mar 10 20:56:35 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T20:56:35.920699200Z" level=info msg="API listen on [::]:2376"
	* Mar 10 20:56:36 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T20:56:36.104917000Z" level=info msg="API listen on /var/run/docker.sock"
	* Mar 10 21:00:22 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T21:00:22.007204800Z" level=info msg="ignoring event" container=92f2244695b68ec5eaa63f7b57f392115892164fdf6a258ac5efb8aeae302062 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:04:47 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T21:04:47.006756600Z" level=error msg="stream copy error: reading from a closed fifo"
	* Mar 10 21:04:47 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T21:04:47.014853400Z" level=error msg="stream copy error: reading from a closed fifo"
	* Mar 10 21:04:50 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T21:04:50.700968500Z" level=error msg="93932b9ba010da072ccf1f4a473432df0e38f09776fcf739444b815cc718eadc cleanup: failed to delete container from containerd: no such container"
	* Mar 10 21:04:50 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T21:04:50.701074500Z" level=error msg="Handler for POST /v1.40/containers/93932b9ba010da072ccf1f4a473432df0e38f09776fcf739444b815cc718eadc/start returned error: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: read init-p: connection reset by peer: unknown"
	* Mar 10 21:13:09 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T21:13:09.324341200Z" level=info msg="ignoring event" container=af28c036766178fda8a8c02586fbc343f12e3fea03619d61d1a225860811ea29 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:17:48 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T21:17:48.332695600Z" level=info msg="ignoring event" container=c4f9eb4e103c512fa7f0880b5778e4450fffb041bd1aa1e73f14c271ebf1d6d6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:19:03 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T21:19:03.230209300Z" level=error msg="Handler for GET /v1.40/containers/0740d9d8e0c7eb6f5e404e682cef855a73b5d22bdcea215760d41bdbfdbae962/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
	* Mar 10 21:19:03 default-k8s-different-port-20210310205202-6496 dockerd[747]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
	* Mar 10 21:19:15 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T21:19:15.841492100Z" level=info msg="ignoring event" container=0740d9d8e0c7eb6f5e404e682cef855a73b5d22bdcea215760d41bdbfdbae962 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Mar 10 21:20:19 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T21:20:19.451164000Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: net/http: TLS handshake timeout"
	* Mar 10 21:20:19 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T21:20:19.535335800Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: net/http: TLS handshake timeout"
	* Mar 10 21:20:19 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T21:20:19.861492000Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: net/http: TLS handshake timeout"
	* Mar 10 21:21:32 default-k8s-different-port-20210310205202-6496 dockerd[747]: time="2021-03-10T21:21:32.170500500Z" level=info msg="ignoring event" container=b1ac8f2ee561da93436846ae5a4a8495eac5f48b211e22ea2a65fb90987f1b3a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID
	* 5be7b98539266       busybox@sha256:bda689514be526d9557ad442312e5d541757c453c50b8cf2ae68597c291385a1   5 minutes ago       Running             busybox                   0                   7ffcf84cf457b
	* d9ef8592e56c3       85069258b98ac                                                                     5 minutes ago       Running             storage-provisioner       3                   ea898f38edc88
	* b1ac8f2ee561d       85069258b98ac                                                                     8 minutes ago       Exited              storage-provisioner       2                   ea898f38edc88
	* 0ce70105ef45e       bfe3a36ebd252                                                                     22 minutes ago      Running             coredns                   0                   75a0f76d075a8
	* 6cc1ac0f08225       43154ddb57a83                                                                     22 minutes ago      Running             kube-proxy                0                   b2f343ffc28df
	* bf37cfa32c856       a27166429d98e                                                                     26 minutes ago      Running             kube-controller-manager   1                   656bdeac8baf4
	* cc170dc9a3a55       ed2c44fbdd78b                                                                     28 minutes ago      Running             kube-scheduler            0                   4e85db81a5fea
	* 92f2244695b68       a27166429d98e                                                                     29 minutes ago      Exited              kube-controller-manager   0                   656bdeac8baf4
	* 44043b6a8198a       a8c2fdb8bf76e                                                                     29 minutes ago      Running             kube-apiserver            0                   9c8384d7f0b51
	* 69efae781c0b4       0369cf4303ffd                                                                     29 minutes ago      Running             etcd                      0                   6ab27d612d9ec
	* 
	* ==> coredns [0ce70105ef45] <==
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	* CoreDNS-1.7.0
	* linux/amd64, go1.14.4, f59c03d
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* I0310 21:05:53.292911       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-03-10 21:05:32.2685215 +0000 UTC m=+1.508836001) (total time: 21.0144185s):
	* Trace[2019727887]: [21.0144185s] [21.0144185s] END
	* E0310 21:05:53.293004       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* I0310 21:05:53.293061       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-03-10 21:05:32.268399 +0000 UTC m=+1.508713501) (total time: 21.0250283s):
	* Trace[1427131847]: [21.0250283s] [21.0250283s] END
	* E0310 21:05:53.293070       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* I0310 21:05:53.293100       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-03-10 21:05:32.2658815 +0000 UTC m=+1.506196001) (total time: 21.0277142s):
	* Trace[939984059]: [21.0277142s] [21.0277142s] END
	* E0310 21:05:53.293107       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* I0310 21:06:15.187778       1 trace.go:116] Trace[336122540]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-03-10 21:05:54.1636976 +0000 UTC m=+23.405543501) (total time: 21.0224514s):
	* Trace[336122540]: [21.0224514s] [21.0224514s] END
	* E0310 21:06:15.189266       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* I0310 21:06:15.712274       1 trace.go:116] Trace[646203300]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-03-10 21:05:54.6928835 +0000 UTC m=+23.934729401) (total time: 21.0177612s):
	* Trace[646203300]: [21.0177612s] [21.0177612s] END
	* E0310 21:06:15.712320       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* I0310 21:06:15.712513       1 trace.go:116] Trace[1747278511]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-03-10 21:05:54.6917495 +0000 UTC m=+23.933595401) (total time: 21.0192089s):
	* Trace[1747278511]: [21.0192089s] [21.0192089s] END
	* E0310 21:06:15.712528       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20210310205202-6496
	* Roles:              control-plane,master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=default-k8s-different-port-20210310205202-6496
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=4d52d607da107e3e4541e40b81376520ee87d4c2
	*                     minikube.k8s.io/name=default-k8s-different-port-20210310205202-6496
	*                     minikube.k8s.io/updated_at=2021_03_10T21_01_52_0700
	*                     minikube.k8s.io/version=v1.18.1
	*                     node-role.kubernetes.io/control-plane=
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Wed, 10 Mar 2021 21:00:46 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  default-k8s-different-port-20210310205202-6496
	*   AcquireTime:     <unset>
	*   RenewTime:       Wed, 10 Mar 2021 21:28:18 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Wed, 10 Mar 2021 21:28:00 +0000   Wed, 10 Mar 2021 21:00:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Wed, 10 Mar 2021 21:28:00 +0000   Wed, 10 Mar 2021 21:00:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Wed, 10 Mar 2021 21:28:00 +0000   Wed, 10 Mar 2021 21:00:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Wed, 10 Mar 2021 21:28:00 +0000   Wed, 10 Mar 2021 21:03:41 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  172.17.0.9
	*   Hostname:    default-k8s-different-port-20210310205202-6496
	* Capacity:
	*   cpu:                4
	*   ephemeral-storage:  65792556Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             20481980Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                4
	*   ephemeral-storage:  65792556Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             20481980Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 84fb46bd39d2483a97ab4430ee4a5e3a
	*   System UUID:                08addf25-0ddf-4c24-98ff-7ed3332985b4
	*   Boot ID:                    1e43cb90-c73a-415b-9855-33dabbdc5a83
	*   Kernel Version:             4.19.121-linuxkit
	*   OS Image:                   Ubuntu 20.04.1 LTS
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://20.10.3
	*   Kubelet Version:            v1.20.2
	*   Kube-Proxy Version:         v1.20.2
	* PodCIDR:                      10.244.0.0/24
	* PodCIDRs:                     10.244.0.0/24
	* Non-terminated Pods:          (8 in total)
	*   Namespace                   Name                                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                                                                      ------------  ----------  ---------------  -------------  ---
	*   default                     busybox                                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	*   kube-system                 coredns-74ff55c5b-dqrb4                                                   100m (2%)     0 (0%)      70Mi (0%)        170Mi (0%)     25m
	*   kube-system                 etcd-default-k8s-different-port-20210310205202-6496                       100m (2%)     0 (0%)      100Mi (0%)       0 (0%)         27m
	*   kube-system                 kube-apiserver-default-k8s-different-port-20210310205202-6496             250m (6%)     0 (0%)      0 (0%)           0 (0%)         24m
	*   kube-system                 kube-controller-manager-default-k8s-different-port-20210310205202-6496    200m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	*   kube-system                 kube-proxy-j2jg9                                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	*   kube-system                 kube-scheduler-default-k8s-different-port-20210310205202-6496             100m (2%)     0 (0%)      0 (0%)           0 (0%)         24m
	*   kube-system                 storage-provisioner                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests    Limits
	*   --------           --------    ------
	*   cpu                750m (18%)  0 (0%)
	*   memory             170Mi (0%)  170Mi (0%)
	*   ephemeral-storage  100Mi (0%)  0 (0%)
	*   hugepages-1Gi      0 (0%)      0 (0%)
	*   hugepages-2Mi      0 (0%)      0 (0%)
	* Events:
	*   Type    Reason                   Age   From        Message
	*   ----    ------                   ----  ----        -------
	*   Normal  Starting                 26m   kubelet     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  25m   kubelet     Node default-k8s-different-port-20210310205202-6496 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    25m   kubelet     Node default-k8s-different-port-20210310205202-6496 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     25m   kubelet     Node default-k8s-different-port-20210310205202-6496 status is now: NodeHasSufficientPID
	*   Normal  NodeNotReady             25m   kubelet     Node default-k8s-different-port-20210310205202-6496 status is now: NodeNotReady
	*   Normal  NodeAllocatableEnforced  25m   kubelet     Updated Node Allocatable limit across pods
	*   Normal  NodeReady                24m   kubelet     Node default-k8s-different-port-20210310205202-6496 status is now: NodeReady
	*   Normal  Starting                 22m   kube-proxy  Starting kube-proxy.
	* 
	* ==> dmesg <==
	* [  +0.000006]  __hrtimer_run_queues+0x117/0x1c4
	* [  +0.000004]  ? ktime_get_update_offsets_now+0x36/0x95
	* [  +0.000002]  hrtimer_interrupt+0x92/0x165
	* [  +0.000004]  hv_stimer0_isr+0x20/0x2d
	* [  +0.000008]  hv_stimer0_vector_handler+0x3b/0x57
	* [  +0.000010]  hv_stimer0_callback_vector+0xf/0x20
	* [  +0.000001]  </IRQ>
	* [  +0.000002] RIP: 0010:native_safe_halt+0x7/0x8
	* [  +0.000002] Code: 60 02 df f0 83 44 24 fc 00 48 8b 00 a8 08 74 0b 65 81 25 dd ce 6f 71 ff ff ff 7f c3 e8 ce e6 72 ff f4 c3 e8 c7 e6 72 ff fb f4 <c3> 0f 1f 44 00 00 53 e8 69 0e 82 ff 65 8b 35 83 64 6f 71 31 ff e8
	* [  +0.000001] RSP: 0018:ffffffff8f203eb0 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff12
	* [  +0.000002] RAX: ffffffff8e918b30 RBX: 0000000000000000 RCX: ffffffff8f253150
	* [  +0.000001] RDX: 000000000012167e RSI: 0000000000000000 RDI: 0000000000000001
	* [  +0.000001] RBP: 0000000000000000 R08: 00000066a1710248 R09: 0000006be2541d3e
	* [  +0.000001] R10: ffff9130ad802288 R11: 0000000000000000 R12: 0000000000000000
	* [  +0.000001] R13: ffffffff8f215780 R14: 00000000f6d76244 R15: 0000000000000000
	* [  +0.000002]  ? __sched_text_end+0x1/0x1
	* [  +0.000011]  default_idle+0x1b/0x2c
	* [  +0.000001]  do_idle+0xe5/0x216
	* [  +0.000003]  cpu_startup_entry+0x6f/0x71
	* [  +0.000003]  start_kernel+0x4f6/0x514
	* [  +0.000006]  secondary_startup_64+0xa4/0xb0
	* [  +0.000006] ---[ end trace 8aa9ce4b885e8e86 ]---
	* [ +25.977799] hrtimer: interrupt took 3356400 ns
	* [Mar10 19:08] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	* [Mar10 19:49] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	* 
	* ==> etcd [69efae781c0b] <==
	* 2021-03-10 21:27:26.554762 W | etcdserver: read-only range request "key:\"/registry/csidrivers/\" range_end:\"/registry/csidrivers0\" count_only:true " with result "range_response_count:0 size:5" took too long (191.3279ms) to execute
	* 2021-03-10 21:27:32.794492 W | etcdserver: request "header:<ID:11303041234760733094 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1149 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1056 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>" with result "size:16" took too long (208.7231ms) to execute
	* 2021-03-10 21:27:32.835432 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-node-lease\" " with result "range_response_count:1 size:271" took too long (148.466ms) to execute
	* 2021-03-10 21:27:32.871246 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:644" took too long (127.6732ms) to execute
	* 2021-03-10 21:27:33.510669 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (308.8694ms) to execute
	* 2021-03-10 21:27:33.512911 W | etcdserver: read-only range request "key:\"/registry/persistentvolumes/\" range_end:\"/registry/persistentvolumes0\" count_only:true " with result "range_response_count:0 size:5" took too long (274.5584ms) to execute
	* 2021-03-10 21:27:33.513631 W | etcdserver: request "header:<ID:11303041234760733105 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.17.0.9\" mod_revision:1147 > success:<request_put:<key:\"/registry/masterleases/172.17.0.9\" value_size:65 lease:2079669197905957292 >> failure:<request_range:<key:\"/registry/masterleases/172.17.0.9\" > >>" with result "size:16" took too long (198.1095ms) to execute
	* 2021-03-10 21:27:33.536823 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:27:35.719629 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:27:38.696808 W | etcdserver: read-only range request "key:\"/registry/certificatesigningrequests/\" range_end:\"/registry/certificatesigningrequests0\" count_only:true " with result "range_response_count:0 size:7" took too long (115.0471ms) to execute
	* 2021-03-10 21:27:40.368953 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:644" took too long (195.0307ms) to execute
	* 2021-03-10 21:27:41.062399 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1145" took too long (157.7882ms) to execute
	* 2021-03-10 21:27:45.117704 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:27:55.943222 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:28:04.536311 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (122.5056ms) to execute
	* 2021-03-10 21:28:04.536982 W | etcdserver: read-only range request "key:\"/registry/ranges/serviceips\" " with result "range_response_count:1 size:118" took too long (907.7178ms) to execute
	* 2021-03-10 21:28:05.803460 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:28:08.879516 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (131.4811ms) to execute
	* 2021-03-10 21:28:14.919037 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:28:15.135114 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (124.6446ms) to execute
	* 2021-03-10 21:28:15.137577 W | etcdserver: read-only range request "key:\"/registry/podsecuritypolicy/\" range_end:\"/registry/podsecuritypolicy0\" count_only:true " with result "range_response_count:0 size:5" took too long (158.3468ms) to execute
	* 2021-03-10 21:28:23.305017 W | etcdserver: read-only range request "key:\"/registry/events/\" range_end:\"/registry/events0\" " with result "range_response_count:81 size:71655" took too long (129.5518ms) to execute
	* 2021-03-10 21:28:23.316379 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1145" took too long (141.1216ms) to execute
	* 2021-03-10 21:28:25.398961 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2021-03-10 21:28:32.001826 W | etcdserver: request "header:<ID:11303041234760733324 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1187 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1056 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>" with result "size:16" took too long (135.6895ms) to execute
	* 
	* ==> kernel <==
	*  21:28:34 up  2:28,  0 users,  load average: 158.08, 145.10, 142.16
	* Linux default-k8s-different-port-20210310205202-6496 4.19.121-linuxkit #1 SMP Tue Dec 1 17:50:32 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [44043b6a8198] <==
	* I0310 21:27:26.836776       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* I0310 21:27:32.368331       1 trace.go:205] Trace[474014869]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.20.2 (linux/amd64) kubernetes/faecb19,client:127.0.0.1 (10-Mar-2021 21:27:31.835) (total time: 532ms):
	* Trace[474014869]: ---"About to write a response" 532ms (21:27:00.368)
	* Trace[474014869]: [532.879ms] [532.879ms] END
	* I0310 21:27:33.520881       1 trace.go:205] Trace[1361331873]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (10-Mar-2021 21:27:32.957) (total time: 563ms):
	* Trace[1361331873]: ---"Transaction prepared" 208ms (21:27:00.280)
	* Trace[1361331873]: ---"Transaction committed" 239ms (21:27:00.520)
	* Trace[1361331873]: [563.0446ms] [563.0446ms] END
	* I0310 21:27:40.970607       1 trace.go:205] Trace[1253262480]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (10-Mar-2021 21:27:40.415) (total time: 554ms):
	* Trace[1253262480]: ---"Transaction prepared" 383ms (21:27:00.876)
	* Trace[1253262480]: [554.6174ms] [554.6174ms] END
	* I0310 21:27:41.390329       1 trace.go:205] Trace[858453511]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,client:172.17.0.9 (10-Mar-2021 21:27:40.846) (total time: 541ms):
	* Trace[858453511]: ---"About to write a response" 541ms (21:27:00.390)
	* Trace[858453511]: [541.7329ms] [541.7329ms] END
	* I0310 21:28:10.223544       1 client.go:360] parsed scheme: "passthrough"
	* I0310 21:28:10.223700       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I0310 21:28:10.223741       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* I0310 21:28:23.544467       1 trace.go:205] Trace[334433536]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,client:172.17.0.9 (10-Mar-2021 21:28:23.034) (total time: 509ms):
	* Trace[334433536]: ---"About to write a response" 509ms (21:28:00.543)
	* Trace[334433536]: [509.4142ms] [509.4142ms] END
	* I0310 21:28:23.855747       1 trace.go:205] Trace[40488954]: "List etcd3" key:/events,resourceVersion:,resourceVersionMatch:,limit:0,continue: (10-Mar-2021 21:28:22.814) (total time: 1040ms):
	* Trace[40488954]: [1.0406758s] [1.0406758s] END
	* I0310 21:28:23.884530       1 trace.go:205] Trace[695360149]: "List" url:/api/v1/events,user-agent:kubectl/v1.20.2 (linux/amd64) kubernetes/faecb19,client:127.0.0.1 (10-Mar-2021 21:28:22.814) (total time: 1069ms):
	* Trace[695360149]: ---"Listing from storage done" 1040ms (21:28:00.855)
	* Trace[695360149]: [1.0695135s] [1.0695135s] END
	* 
	* ==> kube-controller-manager [92f2244695b6] <==
	* 	/usr/local/go/src/net/net.go:182 +0x8e
	* crypto/tls.(*atLeastReader).Read(0xc000d7c920, 0xc000daf8c0, 0x205, 0x205, 0x40, 0x45, 0xc000da9130)
	* 	/usr/local/go/src/crypto/tls/conn.go:779 +0x62
	* bytes.(*Buffer).ReadFrom(0xc00045d780, 0x4d9ed80, 0xc000d7c920, 0x40bd05, 0x3f475a0, 0x464b8a0)
	* 	/usr/local/go/src/bytes/buffer.go:204 +0xb1
	* crypto/tls.(*Conn).readFromUntil(0xc00045d500, 0x4da5040, 0xc00000e9d0, 0x5, 0xc00000e9d0, 0xc000da9238)
	* 	/usr/local/go/src/crypto/tls/conn.go:801 +0xf3
	* crypto/tls.(*Conn).readRecordOrCCS(0xc00045d500, 0xc000da9600, 0x6143d7, 0xc000dce480)
	* 	/usr/local/go/src/crypto/tls/conn.go:608 +0x115
	* crypto/tls.(*Conn).readRecord(...)
	* 	/usr/local/go/src/crypto/tls/conn.go:576
	* crypto/tls.(*Conn).readHandshake(0xc00045d500, 0xc000058000, 0xc000da9768, 0x48e91b, 0x48c4fa)
	* 	/usr/local/go/src/crypto/tls/conn.go:992 +0x6d
	* crypto/tls.(*serverHandshakeStateTLS13).readClientCertificate(0xc000da9aa0, 0x8e6, 0x0)
	* 	/usr/local/go/src/crypto/tls/handshake_server_tls13.go:770 +0x170
	* crypto/tls.(*serverHandshakeStateTLS13).handshake(0xc000da9aa0, 0xc000dac400, 0x0)
	* 	/usr/local/go/src/crypto/tls/handshake_server_tls13.go:71 +0x12a
	* crypto/tls.(*Conn).serverHandshake(0xc00045d500, 0xc000d4ad60, 0xf)
	* 	/usr/local/go/src/crypto/tls/handshake_server.go:50 +0xbc
	* crypto/tls.(*Conn).Handshake(0xc00045d500, 0x0, 0x0)
	* 	/usr/local/go/src/crypto/tls/conn.go:1362 +0xc9
	* net/http.(*conn).serve(0xc000da2320, 0x4e10da0, 0xc001044120)
	* 	/usr/local/go/src/net/http/server.go:1817 +0x1a5
	* created by net/http.(*Server).Serve
	* 	/usr/local/go/src/net/http/server.go:2969 +0x36c
	* 
	* ==> kube-controller-manager [bf37cfa32c85] <==
	* I0310 21:02:23.191614       1 shared_informer.go:247] Caches are synced for attach detach 
	* I0310 21:02:23.258138       1 shared_informer.go:247] Caches are synced for stateful set 
	* I0310 21:02:23.258194       1 shared_informer.go:247] Caches are synced for endpoint 
	* I0310 21:02:23.305391       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	* I0310 21:02:23.351277       1 shared_informer.go:247] Caches are synced for disruption 
	* I0310 21:02:23.374187       1 disruption.go:339] Sending events to api server.
	* I0310 21:02:23.374161       1 shared_informer.go:247] Caches are synced for deployment 
	* I0310 21:02:23.394635       1 shared_informer.go:247] Caches are synced for resource quota 
	* I0310 21:02:23.394684       1 shared_informer.go:247] Caches are synced for resource quota 
	* I0310 21:02:23.504250       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	* I0310 21:02:24.704508       1 range_allocator.go:373] Set node default-k8s-different-port-20210310205202-6496 PodCIDR to [10.244.0.0/24]
	* I0310 21:02:26.031533       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	* I0310 21:02:29.347666       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I0310 21:02:29.389687       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I0310 21:02:29.389720       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* I0310 21:02:32.693450       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	* E0310 21:02:33.459579       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	* I0310 21:02:34.596622       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-j2jg9"
	* I0310 21:02:36.319712       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-ghd59"
	* I0310 21:02:37.223826       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-dqrb4"
	* E0310 21:02:37.264023       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"61730e88-17b1-4e14-b6aa-8324d9c0be38", ResourceVersion:"289", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751006911, loc:(*time.Location)(0x6f31360)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0014008c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0014008e0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001400900), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0013614c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001400920), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001400940), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.2", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001400980)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0011588a0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00106f8f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0005dfdc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00000efc8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00106f958)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	* I0310 21:02:48.126395       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	* I0310 21:02:48.838406       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	* I0310 21:02:50.262753       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-ghd59"
	* I0310 21:03:48.218630       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	* 
	* ==> kube-proxy [6cc1ac0f0822] <==
	* Trace[294614200]: [2.6555067s] [2.6555067s] END
	* I0310 21:15:02.786797       1 trace.go:205] Trace[480409953]: "iptables restore" (10-Mar-2021 21:15:00.105) (total time: 2680ms):
	* Trace[480409953]: [2.6809324s] [2.6809324s] END
	* I0310 21:15:25.164398       1 trace.go:205] Trace[744657260]: "iptables restore" (10-Mar-2021 21:15:23.094) (total time: 2069ms):
	* Trace[744657260]: [2.0694799s] [2.0694799s] END
	* I0310 21:16:19.913067       1 trace.go:205] Trace[2050273892]: "iptables save" (10-Mar-2021 21:16:17.436) (total time: 2454ms):
	* Trace[2050273892]: [2.4541013s] [2.4541013s] END
	* I0310 21:16:23.921332       1 trace.go:205] Trace[926147253]: "iptables Monitor CANARY check" (10-Mar-2021 21:16:19.913) (total time: 3993ms):
	* Trace[926147253]: [3.9931075s] [3.9931075s] END
	* I0310 21:17:05.619174       1 trace.go:205] Trace[137995289]: "iptables save" (10-Mar-2021 21:17:00.777) (total time: 4841ms):
	* Trace[137995289]: [4.8418486s] [4.8418486s] END
	* I0310 21:17:08.257883       1 trace.go:205] Trace[830314942]: "iptables save" (10-Mar-2021 21:17:05.619) (total time: 2625ms):
	* Trace[830314942]: [2.6251377s] [2.6251377s] END
	* I0310 21:17:13.918923       1 trace.go:205] Trace[1714446172]: "iptables restore" (10-Mar-2021 21:17:08.304) (total time: 5618ms):
	* Trace[1714446172]: [5.6189044s] [5.6189044s] END
	* I0310 21:17:47.517170       1 trace.go:205] Trace[2066387782]: "iptables restore" (10-Mar-2021 21:17:44.317) (total time: 3199ms):
	* Trace[2066387782]: [3.199319s] [3.199319s] END
	* I0310 21:18:51.451943       1 trace.go:205] Trace[692697592]: "iptables Monitor CANARY check" (10-Mar-2021 21:18:47.227) (total time: 3881ms):
	* Trace[692697592]: [3.8815133s] [3.8815133s] END
	* I0310 21:21:21.772524       1 trace.go:205] Trace[1675863731]: "iptables Monitor CANARY check" (10-Mar-2021 21:21:17.226) (total time: 4545ms):
	* Trace[1675863731]: [4.5456984s] [4.5456984s] END
	* I0310 21:22:52.179734       1 trace.go:205] Trace[445590362]: "iptables Monitor CANARY check" (10-Mar-2021 21:22:47.325) (total time: 4834ms):
	* Trace[445590362]: [4.8343575s] [4.8343575s] END
	* I0310 21:27:23.478915       1 trace.go:205] Trace[51671675]: "iptables Monitor CANARY check" (10-Mar-2021 21:27:21.238) (total time: 2240ms):
	* Trace[51671675]: [2.2401963s] [2.2401963s] END
	* 
	* ==> kube-scheduler [cc170dc9a3a5] <==
	* E0310 21:00:55.467249       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E0310 21:00:55.496015       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E0310 21:00:55.518162       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E0310 21:00:55.997465       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E0310 21:00:56.099435       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E0310 21:00:56.403617       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E0310 21:00:56.953691       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E0310 21:01:02.276457       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E0310 21:01:02.730159       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E0310 21:01:02.737344       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E0310 21:01:04.083495       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E0310 21:01:04.099290       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E0310 21:01:04.109238       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E0310 21:01:04.756195       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E0310 21:01:04.793252       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E0310 21:01:05.070358       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E0310 21:01:05.637368       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E0310 21:01:07.793619       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E0310 21:01:07.977719       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E0310 21:01:19.616631       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* I0310 21:02:07.860900       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* http2: server: error reading preface from client 127.0.0.1:33206: read tcp 127.0.0.1:10259->127.0.0.1:33206: read: connection reset by peer
	* I0310 21:16:40.096685       1 trace.go:205] Trace[247781435]: "Scheduling" namespace:default,name:busybox (10-Mar-2021 21:16:39.969) (total time: 123ms):
	* Trace[247781435]: ---"Computing predicates done" 121ms (21:16:00.092)
	* Trace[247781435]: [123.0195ms] [123.0195ms] END
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-03-10 20:52:50 UTC, end at Wed 2021-03-10 21:29:03 UTC. --
	* Mar 10 21:20:14 default-k8s-different-port-20210310205202-6496 kubelet[3401]: W0310 21:20:14.072710    3401 pod_container_deletor.go:79] Container "0740d9d8e0c7eb6f5e404e682cef855a73b5d22bdcea215760d41bdbfdbae962" not found in pod's containers
	* Mar 10 21:20:20 default-k8s-different-port-20210310205202-6496 kubelet[3401]: E0310 21:20:20.101744    3401 remote_image.go:113] PullImage "busybox:1.28.4-glibc" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: TLS handshake timeout
	* Mar 10 21:20:20 default-k8s-different-port-20210310205202-6496 kubelet[3401]: E0310 21:20:20.190301    3401 kuberuntime_image.go:51] Pull image "busybox:1.28.4-glibc" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: TLS handshake timeout
	* Mar 10 21:20:20 default-k8s-different-port-20210310205202-6496 kubelet[3401]: E0310 21:20:20.331369    3401 kuberuntime_manager.go:829] container &Container{Name:busybox,Image:busybox:1.28.4-glibc,Command:[sleep 3600],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tldgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod busybox_default(ee642075-256f-4ee8-896d-23e79a3cd1a6): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: TLS handshake timeout
	* Mar 10 21:20:20 default-k8s-different-port-20210310205202-6496 kubelet[3401]: E0310 21:20:20.332038    3401 pod_workers.go:191] Error syncing pod ee642075-256f-4ee8-896d-23e79a3cd1a6 ("busybox_default(ee642075-256f-4ee8-896d-23e79a3cd1a6)"), skipping: failed to "StartContainer" for "busybox" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: TLS handshake timeout"
	* Mar 10 21:20:45 default-k8s-different-port-20210310205202-6496 kubelet[3401]: I0310 21:20:45.518208    3401 trace.go:205] Trace[629366232]: "iptables Monitor CANARY check" (10-Mar-2021 21:20:36.168) (total time: 9350ms):
	* Mar 10 21:20:45 default-k8s-different-port-20210310205202-6496 kubelet[3401]: Trace[629366232]: [9.350317s] [9.350317s] END
	* Mar 10 21:21:28 default-k8s-different-port-20210310205202-6496 kubelet[3401]: E0310 21:21:28.346636    3401 controller.go:187] failed to update lease, error: Put "https://control-plane.minikube.internal:8444/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/default-k8s-different-port-20210310205202-6496?timeout=10s": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	* Mar 10 21:21:31 default-k8s-different-port-20210310205202-6496 kubelet[3401]: E0310 21:21:31.560679    3401 controller.go:187] failed to update lease, error: Operation cannot be fulfilled on leases.coordination.k8s.io "default-k8s-different-port-20210310205202-6496": the object has been modified; please apply your changes to the latest version and try again
	* Mar 10 21:21:42 default-k8s-different-port-20210310205202-6496 kubelet[3401]: E0310 21:21:42.477994    3401 cadvisor_stats_provider.go:401] Partial failure issuing cadvisor.ContainerInfoV2: partial failures: ["/kubepods/besteffort/pod5750970b-b6e6-4283-839d-d9eaddeb5c46/b1ac8f2ee561da93436846ae5a4a8495eac5f48b211e22ea2a65fb90987f1b3a": RecentStats: unable to find data in memory cache]
	* Mar 10 21:22:03 default-k8s-different-port-20210310205202-6496 kubelet[3401]: E0310 21:22:03.717140    3401 cadvisor_stats_provider.go:401] Partial failure issuing cadvisor.ContainerInfoV2: partial failures: ["/docker/0a23063c9caf23247a91e5aecf6bc099b6cf7bb25b976b7b1cd7fb301a122b63/kubepods/besteffort/pod5750970b-b6e6-4283-839d-d9eaddeb5c46/b1ac8f2ee561da93436846ae5a4a8495eac5f48b211e22ea2a65fb90987f1b3a": RecentStats: unable to find data in memory cache]
	* Mar 10 21:22:03 default-k8s-different-port-20210310205202-6496 kubelet[3401]: I0310 21:22:03.958840    3401 scope.go:95] [topologymanager] RemoveContainer - Container ID: c4f9eb4e103c512fa7f0880b5778e4450fffb041bd1aa1e73f14c271ebf1d6d6
	* Mar 10 21:22:04 default-k8s-different-port-20210310205202-6496 kubelet[3401]: I0310 21:22:04.020587    3401 scope.go:95] [topologymanager] RemoveContainer - Container ID: b1ac8f2ee561da93436846ae5a4a8495eac5f48b211e22ea2a65fb90987f1b3a
	* Mar 10 21:22:40 default-k8s-different-port-20210310205202-6496 kubelet[3401]: I0310 21:22:40.729246    3401 trace.go:205] Trace[689233256]: "iptables Monitor CANARY check" (10-Mar-2021 21:22:34.398) (total time: 6330ms):
	* Mar 10 21:22:40 default-k8s-different-port-20210310205202-6496 kubelet[3401]: Trace[689233256]: [6.3309429s] [6.3309429s] END
	* Mar 10 21:23:02 default-k8s-different-port-20210310205202-6496 kubelet[3401]: W0310 21:23:02.840783    3401 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/busybox through plugin: invalid network status for
	* Mar 10 21:23:24 default-k8s-different-port-20210310205202-6496 kubelet[3401]: W0310 21:23:24.440210    3401 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	* Mar 10 21:23:24 default-k8s-different-port-20210310205202-6496 kubelet[3401]: W0310 21:23:24.466259    3401 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
	* Mar 10 21:23:37 default-k8s-different-port-20210310205202-6496 kubelet[3401]: W0310 21:23:37.867444    3401 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/busybox through plugin: invalid network status for
	* Mar 10 21:26:41 default-k8s-different-port-20210310205202-6496 kubelet[3401]: I0310 21:26:41.887683    3401 trace.go:205] Trace[1027375460]: "iptables Monitor CANARY check" (10-Mar-2021 21:26:34.298) (total time: 7588ms):
	* Mar 10 21:26:41 default-k8s-different-port-20210310205202-6496 kubelet[3401]: Trace[1027375460]: [7.5888571s] [7.5888571s] END
	* Mar 10 21:27:37 default-k8s-different-port-20210310205202-6496 kubelet[3401]: I0310 21:27:37.119904    3401 trace.go:205] Trace[1834566402]: "iptables Monitor CANARY check" (10-Mar-2021 21:27:34.218) (total time: 2901ms):
	* Mar 10 21:27:37 default-k8s-different-port-20210310205202-6496 kubelet[3401]: Trace[1834566402]: [2.9015611s] [2.9015611s] END
	* Mar 10 21:28:28 default-k8s-different-port-20210310205202-6496 kubelet[3401]: W0310 21:28:28.673872    3401 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	* Mar 10 21:28:28 default-k8s-different-port-20210310205202-6496 kubelet[3401]: W0310 21:28:28.675742    3401 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
	* 
	* ==> storage-provisioner [b1ac8f2ee561] <==
	* I0310 21:19:29.205469       1 storage_provisioner.go:115] Initializing the minikube storage provisioner...
	* I0310 21:19:34.831660       1 storage_provisioner.go:140] Storage provisioner initialized, now starting service!
	* I0310 21:19:34.837310       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
	* I0310 21:19:51.671120       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	* I0310 21:19:51.736026       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6c108f6b-3b35-40a4-8699-f723e5b7fdae", APIVersion:"v1", ResourceVersion:"916", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-different-port-20210310205202-6496_48a0dbaa-00ac-4c4f-bda6-bf215ce69226 became leader
	* I0310 21:19:51.740705       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20210310205202-6496_48a0dbaa-00ac-4c4f-bda6-bf215ce69226!
	* I0310 21:19:54.789719       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20210310205202-6496_48a0dbaa-00ac-4c4f-bda6-bf215ce69226!
	* I0310 21:21:24.970379       1 leaderelection.go:288] failed to renew lease kube-system/k8s.io-minikube-hostpath: failed to tryAcquireOrRenew context deadline exceeded
	* F0310 21:21:24.970473       1 controller.go:877] leaderelection lost
	* 
	* ==> storage-provisioner [d9ef8592e56c] <==
	* I0310 21:22:27.332180       1 storage_provisioner.go:115] Initializing the minikube storage provisioner...
	* I0310 21:22:28.363481       1 storage_provisioner.go:140] Storage provisioner initialized, now starting service!
	* I0310 21:22:28.363633       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
	* I0310 21:22:51.732341       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	* I0310 21:22:51.862141       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20210310205202-6496_86c78217-062d-4b0b-954c-3f969e81226b!
	* I0310 21:22:52.158460       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6c108f6b-3b35-40a4-8699-f723e5b7fdae", APIVersion:"v1", ResourceVersion:"997", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-different-port-20210310205202-6496_86c78217-062d-4b0b-954c-3f969e81226b became leader
	* I0310 21:23:00.488906       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20210310205202-6496_86c78217-062d-4b0b-954c-3f969e81226b!
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------|------------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| Command |                      Args                      |                    Profile                     |          User           | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------|------------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                             | disable-driver-mounts-20210310205156-6496      | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:51:57 GMT | Wed, 10 Mar 2021 20:52:02 GMT |
	|         | disable-driver-mounts-20210310205156-6496      |                                                |                         |         |                               |                               |
	| -p      | force-systemd-flag-20210310203447-6496         | force-systemd-flag-20210310203447-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:53:03 GMT | Wed, 10 Mar 2021 20:53:44 GMT |
	|         | ssh docker info --format                       |                                                |                         |         |                               |                               |
	|         |                                                |                                                |                         |         |                               |                               |
	| delete  | -p                                             | force-systemd-flag-20210310203447-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:54:07 GMT | Wed, 10 Mar 2021 20:54:36 GMT |
	|         | force-systemd-flag-20210310203447-6496         |                                                |                         |         |                               |                               |
	| stop    | -p                                             | old-k8s-version-20210310204459-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:02:19 GMT | Wed, 10 Mar 2021 21:02:40 GMT |
	|         | old-k8s-version-20210310204459-6496            |                                                |                         |         |                               |                               |
	|         | --alsologtostderr -v=3                         |                                                |                         |         |                               |                               |
	| addons  | enable dashboard -p                            | old-k8s-version-20210310204459-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:02:42 GMT | Wed, 10 Mar 2021 21:02:42 GMT |
	|         | old-k8s-version-20210310204459-6496            |                                                |                         |         |                               |                               |
	| -p      | embed-certs-20210310205017-6496                | embed-certs-20210310205017-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:07:05 GMT | Wed, 10 Mar 2021 21:08:33 GMT |
	|         | logs -n 25                                     |                                                |                         |         |                               |                               |
	| start   | -p                                             | stopped-upgrade-20210310201637-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 20:52:21 GMT | Wed, 10 Mar 2021 21:09:23 GMT |
	|         | stopped-upgrade-20210310201637-6496            |                                                |                         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                |                                                |                         |         |                               |                               |
	|         | -v=1 --driver=docker                           |                                                |                         |         |                               |                               |
	| logs    | -p                                             | stopped-upgrade-20210310201637-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:09:23 GMT | Wed, 10 Mar 2021 21:10:51 GMT |
	|         | stopped-upgrade-20210310201637-6496            |                                                |                         |         |                               |                               |
	| delete  | -p                                             | stopped-upgrade-20210310201637-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:10:52 GMT | Wed, 10 Mar 2021 21:11:13 GMT |
	|         | stopped-upgrade-20210310201637-6496            |                                                |                         |         |                               |                               |
	| delete  | -p                                             | running-upgrade-20210310201637-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:11:45 GMT | Wed, 10 Mar 2021 21:12:11 GMT |
	|         | running-upgrade-20210310201637-6496            |                                                |                         |         |                               |                               |
	| stop    | -p                                             | embed-certs-20210310205017-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:12:03 GMT | Wed, 10 Mar 2021 21:12:38 GMT |
	|         | embed-certs-20210310205017-6496                |                                                |                         |         |                               |                               |
	|         | --alsologtostderr -v=3                         |                                                |                         |         |                               |                               |
	| addons  | enable dashboard -p                            | embed-certs-20210310205017-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:12:40 GMT | Wed, 10 Mar 2021 21:12:41 GMT |
	|         | embed-certs-20210310205017-6496                |                                                |                         |         |                               |                               |
	| -p      | kubernetes-upgrade-20210310201637-6496         | kubernetes-upgrade-20210310201637-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:11:50 GMT | Wed, 10 Mar 2021 21:15:02 GMT |
	|         | logs -n 25                                     |                                                |                         |         |                               |                               |
	| delete  | -p                                             | kubernetes-upgrade-20210310201637-6496         | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:15:15 GMT | Wed, 10 Mar 2021 21:15:46 GMT |
	|         | kubernetes-upgrade-20210310201637-6496         |                                                |                         |         |                               |                               |
	| delete  | -p                                             | missing-upgrade-20210310201637-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:15:38 GMT | Wed, 10 Mar 2021 21:16:03 GMT |
	|         | missing-upgrade-20210310201637-6496            |                                                |                         |         |                               |                               |
	| -p      | default-k8s-different-port-20210310205202-6496 | default-k8s-different-port-20210310205202-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:12:03 GMT | Wed, 10 Mar 2021 21:16:15 GMT |
	|         | logs -n 25                                     |                                                |                         |         |                               |                               |
	| stop    | -p                                             | no-preload-20210310204947-6496                 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:15:57 GMT | Wed, 10 Mar 2021 21:16:31 GMT |
	|         | no-preload-20210310204947-6496                 |                                                |                         |         |                               |                               |
	|         | --alsologtostderr -v=3                         |                                                |                         |         |                               |                               |
	| addons  | enable dashboard -p                            | no-preload-20210310204947-6496                 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:16:33 GMT | Wed, 10 Mar 2021 21:16:34 GMT |
	|         | no-preload-20210310204947-6496                 |                                                |                         |         |                               |                               |
	| delete  | -p                                             | old-k8s-version-20210310204459-6496            | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:18:53 GMT | Wed, 10 Mar 2021 21:19:16 GMT |
	|         | old-k8s-version-20210310204459-6496            |                                                |                         |         |                               |                               |
	| delete  | -p                                             | no-preload-20210310204947-6496                 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:20:59 GMT | Wed, 10 Mar 2021 21:21:26 GMT |
	|         | no-preload-20210310204947-6496                 |                                                |                         |         |                               |                               |
	| -p      | embed-certs-20210310205017-6496                | embed-certs-20210310205017-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:20:59 GMT | Wed, 10 Mar 2021 21:24:36 GMT |
	|         | logs -n 25                                     |                                                |                         |         |                               |                               |
	| delete  | -p                                             | embed-certs-20210310205017-6496                | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:24:57 GMT | Wed, 10 Mar 2021 21:25:18 GMT |
	|         | embed-certs-20210310205017-6496                |                                                |                         |         |                               |                               |
	| -p      | default-k8s-different-port-20210310205202-6496 | default-k8s-different-port-20210310205202-6496 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:23:55 GMT | Wed, 10 Mar 2021 21:25:58 GMT |
	|         | logs -n 25                                     |                                                |                         |         |                               |                               |
	| -p      | newest-cni-20210310205436-6496                 | newest-cni-20210310205436-6496                 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:24:59 GMT | Wed, 10 Mar 2021 21:27:40 GMT |
	|         | logs -n 25                                     |                                                |                         |         |                               |                               |
	| delete  | -p                                             | newest-cni-20210310205436-6496                 | WINDOWS-SERVER-\jenkins | v1.18.1 | Wed, 10 Mar 2021 21:27:55 GMT | Wed, 10 Mar 2021 21:28:16 GMT |
	|         | newest-cni-20210310205436-6496                 |                                                |                         |         |                               |                               |
	|---------|------------------------------------------------|------------------------------------------------|-------------------------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/03/10 21:28:17
	* Running on machine: windows-server-1
	* Binary: Built with gc go1.16 for windows/amd64
	* Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	* I0310 21:28:17.454141    8404 out.go:239] Setting OutFile to fd 2924 ...
	* I0310 21:28:17.455133    8404 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 21:28:17.455133    8404 out.go:252] Setting ErrFile to fd 2496...
	* I0310 21:28:17.455133    8404 out.go:286] TERM=,COLORTERM=, which probably does not support color
	* I0310 21:28:17.474215    8404 out.go:246] Setting JSON to false
	* I0310 21:28:17.477147    8404 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":37163,"bootTime":1615374534,"procs":116,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	* W0310 21:28:17.477147    8404 start.go:116] gopshost.Virtualization returned error: not implemented yet
	* I0310 21:28:17.483010    8404 out.go:129] * [bridge-20210310212817-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	* I0310 21:28:13.983580    9020 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.2947341s)
	* I0310 21:28:13.998941    9020 ssh_runner.go:149] Run: sudo systemctl restart docker
	* I0310 21:28:18.047812   12868 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-controller-manager --format=: (4.7696888s)
	* I0310 21:28:18.047812   12868 logs.go:255] 1 containers: [c163207f6927]
	* I0310 21:28:18.047812   12868 logs.go:122] Gathering logs for describe nodes ...
	* I0310 21:28:18.047812   12868 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	* I0310 21:28:17.490306    8404 out.go:129]   - MINIKUBE_LOCATION=10722
	* I0310 21:28:17.495840    8404 driver.go:323] Setting default libvirt URI to qemu:///system
	* I0310 21:28:18.041482    8404 docker.go:119] docker version: linux-20.10.2
	* I0310 21:28:18.047812    8404 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 21:28:19.060122    8404 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0123141s)
	* I0310 21:28:19.061488    8404 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:73 SystemTime:2021-03-10 21:28:18.5748442 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 21:28:18.635504    7648 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (11.2050955s)
	* I0310 21:28:18.635904    7648 kubeadm.go:995] duration metric: took 42.2391198s to wait for elevateKubeSystemPrivileges.
	* I0310 21:28:18.636346    7648 kubeadm.go:387] StartCluster complete in 7m57.1812805s
	* I0310 21:28:18.636552    7648 settings.go:142] acquiring lock: {Name:mk153ab5d002fd4991700e22f3eda9a43ee295f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 21:28:18.637173    7648 settings.go:150] Updating kubeconfig:  C:\Users\jenkins/.kube/config
	* I0310 21:28:18.644567    7648 lock.go:36] WriteFile acquiring C:\Users\jenkins/.kube/config: {Name:mk815881434fcb75cb1a8bc7f2c24cf067660bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 21:28:19.791142    7648 kapi.go:233] deployment "coredns" in namespace "kube-system" and context "cilium-20210310211546-6496" rescaled to 1
	* I0310 21:28:19.791435    7648 start.go:203] Will wait 5m0s for node up to 
	* I0310 21:28:19.065113    8404 out.go:129] * Using the docker driver based on user configuration
	* I0310 21:28:19.065483    8404 start.go:276] selected driver: docker
	* I0310 21:28:19.065483    8404 start.go:718] validating driver "docker" against <nil>
	* I0310 21:28:19.065811    8404 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	* I0310 21:28:20.229933    8404 out.go:129] 
	* W0310 21:28:20.231002    8404 out.go:191] X Docker Desktop only has 20001MiB available, you may encounter application deployment failures.
	* W0310 21:28:20.231520    8404 out.go:191] * Suggestion: 
	* 
	*     1. Open the "Docker Desktop" menu by clicking the Docker icon in the system tray
	*     2. Click "Settings"
	*     3. Click "Resources"
	*     4. Increase "Memory" slider bar to 2.25 GB or higher
	*     5. Click "Apply & Restart"
	* W0310 21:28:20.231869    8404 out.go:191] * Documentation: https://docs.docker.com/docker-for-windows/#resources
	* I0310 21:28:19.791584    7648 addons.go:381] enableAddons start: toEnable=map[], additional=[]
	* I0310 21:28:19.793131    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120022529-1140 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140
	* I0310 21:28:19.793131    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115023213-8464 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464
	* I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210301195830-5700 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700
	* I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210128021318-232 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232
	* I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107002220-9088 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088
	* I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210225231842-5736 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736
	* I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210220004129-7452 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452
	* I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210303214129-4588 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588
	* I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210114204234-6692 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692
	* I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120175851-7432 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432
	* I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120214442-10992 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992
	* I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210126212539-5172 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172
	* I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219220622-3920 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920
	* I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107190945-8748 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748
	* I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304184021-4052 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052
	* I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210213143925-7440 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440
	* I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310191609-6496 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496
	* I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120231122-7024 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024
	* I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106002159-6856 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856
	* I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106215525-1984 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984
	* I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304002630-1156 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156
	* I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106011107-6492 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492
	* I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210306072141-12056 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056
	* I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310083645-5040 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040
	* I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210123004019-5372 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372
	* I0310 21:28:19.793478    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210112045103-7160 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160
	* I0310 21:28:19.793478    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210212145109-352 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352
	* I0310 21:28:19.793478    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210119220838-6552 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552
	* I0310 21:28:19.793299    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210105233232-2512 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512
	* I0310 21:28:19.793478    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115191024-3516 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516
	* I0310 21:28:19.793478    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210224014800-800 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800
	* I0310 21:28:19.793478    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210308233820-5396 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396
	* I0310 21:28:19.793643    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219145454-9520 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520
	* I0310 21:28:19.793478    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210309234032-4944 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944
	* I0310 21:28:19.809178    7648 out.go:129] * Verifying Kubernetes components...
	* I0310 21:28:19.812851    7648 addons.go:58] Setting default-storageclass=true in profile "cilium-20210310211546-6496"
	* I0310 21:28:19.812851    7648 addons.go:58] Setting storage-provisioner=true in profile "cilium-20210310211546-6496"
	* I0310 21:28:19.813288    7648 addons.go:284] enableOrDisableStorageClasses default-storageclass=true on "cilium-20210310211546-6496"
	* I0310 21:28:19.813288    7648 addons.go:134] Setting addon storage-provisioner=true in "cilium-20210310211546-6496"
	* W0310 21:28:19.813631    7648 addons.go:143] addon storage-provisioner should already be in state true
	* I0310 21:28:19.814146    7648 host.go:66] Checking if "cilium-20210310211546-6496" exists ...
	* I0310 21:28:20.208892    7648 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	* I0310 21:28:20.497723    7648 cli_runner.go:115] Run: docker container inspect cilium-20210310211546-6496 --format=
	* I0310 21:28:20.497723    7648 cli_runner.go:115] Run: docker container inspect cilium-20210310211546-6496 --format=
	* I0310 21:28:20.863475    7648 cache.go:93] acquiring lock: {Name:mk30e0addf8d941e729fce2e9e6e58f4831fa9bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:28:20.863475    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 exists
	* I0310 21:28:20.864479    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210115023213-8464" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210115023213-8464" took 1.0533736s
	* I0310 21:28:20.864479    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210115023213-8464 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464 succeeded
	* I0310 21:28:20.891987    7648 cache.go:93] acquiring lock: {Name:mkab31196e3bf71b9c1e6a1e38e57ec6fb030bbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:28:20.892546    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 exists
	* I0310 21:28:20.892546    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210220004129-7452" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210220004129-7452" took 1.081573s
	* I0310 21:28:20.892546    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210220004129-7452 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452 succeeded
	* I0310 21:28:20.973828    7648 cache.go:93] acquiring lock: {Name:mk17b3617b8bc7c68f0fe3347037485ee44000e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:28:20.974892    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 exists
	* I0310 21:28:20.974892    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210225231842-5736" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210225231842-5736" took 1.1644215s
	* I0310 21:28:20.974892    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210225231842-5736 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736 succeeded
	* I0310 21:28:21.026485    7648 cache.go:93] acquiring lock: {Name:mkf6f90f079186654799fde8101b48612aa6f339 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:28:21.027420    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 exists
	* I0310 21:28:21.028326    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210212145109-352" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210212145109-352" took 1.2129546s
	* I0310 21:28:21.028326    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210212145109-352 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352 succeeded
	* I0310 21:28:21.050049    7648 cache.go:93] acquiring lock: {Name:mk634154e9c95d6e5b156154f097cbabdedf9f3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:28:21.051052    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 exists
	* I0310 21:28:21.051052    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210301195830-5700" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210301195830-5700" took 1.2382055s
	* I0310 21:28:21.051052    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210301195830-5700 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700 succeeded
	* I0310 21:28:21.105056    7648 cache.go:93] acquiring lock: {Name:mk413751f23d1919a2f2162501025c6af3a2ad81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:28:21.105056    7648 cache.go:93] acquiring lock: {Name:mkfbc537176e4a7054a8ff78a35c4c45ad4889d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:28:21.105056    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 exists
	* I0310 21:28:21.106535    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210310191609-6496" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210310191609-6496" took 1.2952268s
	* I0310 21:28:21.106535    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210310191609-6496 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496 succeeded
	* I0310 21:28:21.107045    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 exists
	* I0310 21:28:21.107532    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210106002159-6856" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106002159-6856" took 1.2913925s
	* I0310 21:28:21.107532    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106002159-6856 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 succeeded
	* I0310 21:28:21.175747    7648 cache.go:93] acquiring lock: {Name:mk6e311fb193a5d30b249afa7255673dd7fc56b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:28:21.176816    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 exists
	* I0310 21:28:21.176816    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210107002220-9088" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210107002220-9088" took 1.3619951s
	* I0310 21:28:21.177705    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210107002220-9088 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 succeeded
	* I0310 21:28:21.198918    7648 cache.go:93] acquiring lock: {Name:mk5795abf13cc8b7192a417aee0e32dee2b0467c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:28:21.199496    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 exists
	* I0310 21:28:21.199946    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210126212539-5172" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210126212539-5172" took 1.382817s
	* I0310 21:28:21.200139    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210126212539-5172 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172 succeeded
	* I0310 21:28:21.239322    7648 cache.go:93] acquiring lock: {Name:mk6cdb668632330066d74bea74662e26e6c7633f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:28:21.239965    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 exists
	* I0310 21:28:21.240351    7648 cache.go:93] acquiring lock: {Name:mk67b81c694fa10d152b7bddece57d430edf9ebf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:28:21.240732    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 exists
	* I0310 21:28:21.241718    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210106215525-1984" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106215525-1984" took 1.4135546s
	* I0310 21:28:21.241718    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106215525-1984 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 succeeded
	* I0310 21:28:21.241718    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210308233820-5396" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210308233820-5396" took 1.4149947s
	* I0310 21:28:21.245744    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210308233820-5396 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396 succeeded
	* I0310 21:28:21.245744    7648 cache.go:93] acquiring lock: {Name:mkc9a1c11079e53fedb3439203deb8305be63b2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:28:21.245744    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 exists
	* I0310 21:28:21.248697    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210303214129-4588" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210303214129-4588" took 1.4333263s
	* I0310 21:28:21.248697    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210303214129-4588 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588 succeeded
	* I0310 21:28:21.264962    7648 cache.go:93] acquiring lock: {Name:mkfe8ccab311cf6d2666a7508a8e979857b9770b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:28:21.266121    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 exists
	* I0310 21:28:21.266461    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210219145454-9520" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210219145454-9520" took 1.4355182s
	* I0310 21:28:21.266608    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210219145454-9520 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520 succeeded
	* I0310 21:28:21.295668    7648 cache.go:93] acquiring lock: {Name:mk0c64ba734a0cdbeae55b08bb0b1b6723a680c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:28:21.303984    7648 cache.go:93] acquiring lock: {Name:mka2d29141752ca0c15ce625b99d3e259a454634 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:28:21.304752    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 exists
	* I0310 21:28:21.305040    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 exists
	* I0310 21:28:21.305040    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210105233232-2512" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210105233232-2512" took 1.4739441s
	* I0310 21:28:21.305040    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210105233232-2512 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 succeeded
	* I0310 21:28:21.305451    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210310083645-5040" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210310083645-5040" took 1.4702551s
	* I0310 21:28:21.305451    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210310083645-5040 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040 succeeded
	* I0310 21:28:21.317376    7648 cache.go:93] acquiring lock: {Name:mk1b277a131d0149dc1f34c6a5df09591c284c3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:28:21.317376    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 exists
	* I0310 21:28:21.318355    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210128021318-232" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210128021318-232" took 1.504457s
	* I0310 21:28:21.318355    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210128021318-232 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232 succeeded
	* I0310 21:28:21.324181    7648 cache.go:93] acquiring lock: {Name:mkb552f0ca2d9ea9965feba56885295e4020632a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:28:21.325950    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 exists
	* I0310 21:28:21.326255    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210106011107-6492" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210106011107-6492" took 1.4956528s
	* I0310 21:28:21.326255    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210106011107-6492 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 succeeded
	* I0310 21:28:21.349504    7648 cache.go:93] acquiring lock: {Name:mk9829358ec5b615719a34ef2b4c8c5314131bbf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:28:21.349638    7648 cache.go:93] acquiring lock: {Name:mk84b2a6095b735cf889c519b5874f080b2e195a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:28:21.350030    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 exists
	* I0310 21:28:21.350030    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210309234032-4944" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210309234032-4944" took 1.5157306s
	* I0310 21:28:21.350457    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210309234032-4944 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944 succeeded
	* I0310 21:28:21.350592    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 exists
	* I0310 21:28:21.350592    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210219220622-3920" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210219220622-3920" took 1.5238696s
	* I0310 21:28:21.350592    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210219220622-3920 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920 succeeded
	* I0310 21:28:21.372271    7648 cache.go:93] acquiring lock: {Name:mk3b31b5d9c66e58bae5a84d594af5a71c06fef6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:28:21.372980    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 exists
	* I0310 21:28:21.373359    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210114204234-6692" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210114204234-6692" took 1.5430907s
	* I0310 21:28:21.373359    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210114204234-6692 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692 succeeded
	* I0310 21:28:21.424965    7648 cache.go:93] acquiring lock: {Name:mk3f9eb5a6922e3da2b5e642fe1460b5c7a33453 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:28:21.425569    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 exists
	* I0310 21:28:21.425569    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210107190945-8748" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210107190945-8748" took 1.596691s
	* I0310 21:28:21.426211    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210107190945-8748 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 succeeded
	* I0310 21:28:21.443796    7648 cache.go:93] acquiring lock: {Name:mkd8dd26dee4471c50a16459e3e56a843fbe7183 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:28:21.444310    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 exists
	* I0310 21:28:21.444577    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210120231122-7024" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120231122-7024" took 1.631732s
	* I0310 21:28:21.444788    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120231122-7024 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024 succeeded
	* I0310 21:28:21.455540    7648 cache.go:93] acquiring lock: {Name:mkcc9db267470950a8bd1fd66660e4d7ce7fb11a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:28:21.456286    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 exists
	* I0310 21:28:21.456817    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210120175851-7432" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120175851-7432" took 1.625409s
	* I0310 21:28:21.456817    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120175851-7432 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432 succeeded
	* I0310 21:28:21.460711    7648 cache.go:93] acquiring lock: {Name:mkb0cb73f942a657cd3f168830d30cb3598567a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:28:21.460923    7648 cache.go:93] acquiring lock: {Name:mkf74fc1bdd437dc31195924ffc024252ed6282c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:28:21.461366    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 exists
	* I0310 21:28:21.461580    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 exists
	* I0310 21:28:21.461794    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210306072141-12056" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210306072141-12056" took 1.626242s
	* I0310 21:28:21.462024    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210306072141-12056 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056 succeeded
	* I0310 21:28:21.462691    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210304002630-1156" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210304002630-1156" took 1.6333687s
	* I0310 21:28:21.462691    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210304002630-1156 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156 succeeded
	* I0310 21:28:21.462911    7648 cache.go:93] acquiring lock: {Name:mk5aaf725ee95074b60d5acdb56999da11d0d967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:28:21.463336    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 exists
	* I0310 21:28:21.463550    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210213143925-7440" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210213143925-7440" took 1.6277182s
	* I0310 21:28:21.463550    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210213143925-7440 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440 succeeded
	* I0310 21:28:21.494775    7648 cache.go:93] acquiring lock: {Name:mk5d79a216b121a22277fa476959e69d0268a006 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:28:21.494993    7648 cache.go:93] acquiring lock: {Name:mkf96894dc732adcd1c856f98a56d65b2646f03e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:28:21.495482    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 exists
	* I0310 21:28:21.495699    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 exists
	* I0310 21:28:21.496087    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210224014800-800" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210224014800-800" took 1.6693645s
	* I0310 21:28:21.496290    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210224014800-800 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800 succeeded
	* I0310 21:28:21.496087    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210115191024-3516" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210115191024-3516" took 1.6640083s
	* I0310 21:28:21.496290    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210115191024-3516 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516 succeeded
	* I0310 21:28:21.511835    7648 cache.go:93] acquiring lock: {Name:mkad0f7b57f74c6c730129cb06800211b2e1dbab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:28:21.513325    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 exists
	* I0310 21:28:21.513325    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210120022529-1140" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120022529-1140" took 1.700044s
	* I0310 21:28:21.513750    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120022529-1140 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140 succeeded
	* I0310 21:28:21.515745    7648 cache.go:93] acquiring lock: {Name:mk74beba772a17b6c0792b37e1f3c84b8ae19a48 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:28:21.516322    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 exists
	* I0310 21:28:21.516623    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210119220838-6552" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210119220838-6552" took 1.6994954s
	* I0310 21:28:21.516989    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210119220838-6552 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552 succeeded
	* I0310 21:28:21.519352    7648 cache.go:93] acquiring lock: {Name:mkbc5485bf0e792523a58cf470a7622695547966 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:28:21.519809    7648 cache.go:93] acquiring lock: {Name:mk5de4935501776b790bd29801e913c817cce9cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:28:21.519809    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 exists
	* I0310 21:28:21.520419    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 exists
	* I0310 21:28:21.520419    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210304184021-4052" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210304184021-4052" took 1.6852243s
	* I0310 21:28:21.520648    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210304184021-4052 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052 succeeded
	* I0310 21:28:21.520648    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210123004019-5372" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210123004019-5372" took 1.6848169s
	* I0310 21:28:21.521508    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210123004019-5372 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372 succeeded
	* I0310 21:28:21.523222    7648 cache.go:93] acquiring lock: {Name:mk6a939d4adc5b1a82c643cd3a34748a52c3e47d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:28:21.524080    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 exists
	* I0310 21:28:21.524834    7648 cache.go:93] acquiring lock: {Name:mkd8c6f272dd5cb91af2d272705820baa75c5410 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:28:21.525056    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210112045103-7160" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210112045103-7160" took 1.7109168s
	* I0310 21:28:21.526382    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210112045103-7160 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 succeeded
	* I0310 21:28:21.525442    7648 cache.go:101] \\?\Volume{2649a8ec-5eec-4e29-9a61-c5b9938736e8}\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 exists
	* I0310 21:28:21.526973    7648 cache.go:82] cache image "minikube-local-cache-test:functional-20210120214442-10992" -> "C:\\Users\\jenkins\\.minikube\\cache\\images\\minikube-local-cache-test_functional-20210120214442-10992" took 1.6922581s
	* I0310 21:28:21.527211    7648 cache.go:66] save to tar file minikube-local-cache-test:functional-20210120214442-10992 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992 succeeded
	* I0310 21:28:21.527211    7648 cache.go:73] Successfully saved all images to host disk.
	* I0310 21:28:21.551952    7648 cli_runner.go:115] Run: docker container inspect cilium-20210310211546-6496 --format=
	* I0310 21:28:21.966115    7648 cli_runner.go:168] Completed: docker container inspect cilium-20210310211546-6496 --format=: (1.4683977s)
	* I0310 21:28:22.001084    7648 cli_runner.go:168] Completed: docker container inspect cilium-20210310211546-6496 --format=: (1.5031826s)
	* I0310 21:28:20.276960    8404 out.go:129] 
	* I0310 21:28:20.510062    8404 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 21:28:22.295184    8404 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.7851283s)
	* I0310 21:28:22.297684    8404 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:73 SystemTime:2021-03-10 21:28:21.7324737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 21:28:22.299867    8404 start_flags.go:253] no existing cluster config was found, will generate one from the flags 
	* I0310 21:28:22.302930    8404 start_flags.go:717] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	* I0310 21:28:22.304182    8404 cni.go:74] Creating CNI manager for "bridge"
	* I0310 21:28:22.304577    8404 start_flags.go:393] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	* I0310 21:28:22.305262    8404 start_flags.go:398] config:
	* {Name:bridge-20210310212817-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:bridge-20210310212817-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 21:28:22.311180    8404 out.go:129] * Starting control plane node bridge-20210310212817-6496 in cluster bridge-20210310212817-6496
	* I0310 21:28:23.055392    8404 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	* I0310 21:28:23.055753    8404 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	* I0310 21:28:23.056109    8404 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 21:28:23.056378    8404 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 21:28:23.056703    8404 cache.go:54] Caching tarball of preloaded images
	* I0310 21:28:23.056703    8404 preload.go:131] Found C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	* I0310 21:28:23.057121    8404 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on docker
	* I0310 21:28:23.057514    8404 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\bridge-20210310212817-6496\config.json ...
	* I0310 21:28:23.057514    8404 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\bridge-20210310212817-6496\config.json: {Name:mkf3da7fcce5a71fa371f0ede570b9d6e251d898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 21:28:23.075483    8404 cache.go:185] Successfully downloaded all kic artifacts
	* I0310 21:28:23.076381    8404 start.go:313] acquiring machines lock for bridge-20210310212817-6496: {Name:mkab762bb1cb286b28280dc4d674afca8cd539cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	* I0310 21:28:23.077010    8404 start.go:317] acquired machines lock for "bridge-20210310212817-6496" in 341.5µs
	* I0310 21:28:23.077198    8404 start.go:89] Provisioning new machine with config: &{Name:bridge-20210310212817-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:bridge-20210310212817-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
	* I0310 21:28:23.077470    8404 start.go:126] createHost starting for "" (driver="docker")
	* I0310 21:28:21.832144    9020 ssh_runner.go:189] Completed: sudo systemctl restart docker: (7.8328534s)
	* I0310 21:28:21.841638    9020 ssh_runner.go:149] Run: docker images --format :
	* I0310 21:28:23.329449    9020 ssh_runner.go:189] Completed: docker images --format :: (1.4878165s)
	* I0310 21:28:23.329449    9020 docker.go:423] Got preloaded images: -- stdout --
	* k8s.gcr.io/kube-proxy:v1.20.2
	* k8s.gcr.io/kube-controller-manager:v1.20.2
	* k8s.gcr.io/kube-apiserver:v1.20.2
	* k8s.gcr.io/kube-scheduler:v1.20.2
	* kubernetesui/dashboard:v2.1.0
	* gcr.io/k8s-minikube/storage-provisioner:v4
	* k8s.gcr.io/etcd:3.4.13-0
	* k8s.gcr.io/coredns:1.7.0
	* kubernetesui/metrics-scraper:v1.0.4
	* k8s.gcr.io/pause:3.2
	* 
	* -- /stdout --
	* I0310 21:28:23.329449    9020 cache_images.go:73] Images are preloaded, skipping loading
	* I0310 21:28:23.336859    9020 ssh_runner.go:149] Run: docker info --format 
	* I0310 21:28:22.018720    7648 out.go:129]   - Using image gcr.io/k8s-minikube/storage-provisioner:v4
	* I0310 21:28:22.028284    7648 addons.go:253] installing /etc/kubernetes/addons/storage-provisioner.yaml
	* I0310 21:28:22.028284    7648 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	* I0310 21:28:22.049708    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:28:22.258533    7648 ssh_runner.go:149] Run: docker images --format :
	* I0310 21:28:22.272536    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:28:22.535241    7648 ssh_runner.go:189] Completed: sudo systemctl is-active --quiet service kubelet: (2.3263576s)
	* I0310 21:28:22.551013    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:28:22.654920    7648 addons.go:134] Setting addon default-storageclass=true in "cilium-20210310211546-6496"
	* W0310 21:28:22.655355    7648 addons.go:143] addon default-storageclass should already be in state true
	* I0310 21:28:22.655908    7648 host.go:66] Checking if "cilium-20210310211546-6496" exists ...
	* I0310 21:28:22.678815    7648 cli_runner.go:115] Run: docker container inspect cilium-20210310211546-6496 --format=
	* I0310 21:28:22.824948    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:28:22.968631    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:28:23.331897    7648 pod_ready.go:36] extra waiting for kube-system core pods [kube-dns etcd kube-apiserver kube-controller-manager kube-proxy kube-scheduler] to be Ready ...
	* I0310 21:28:23.331897    7648 pod_ready.go:59] waiting 5m0s for pod with "kube-dns" label in "kube-system" namespace to be Ready ...
	* I0310 21:28:23.368194    7648 addons.go:253] installing /etc/kubernetes/addons/storageclass.yaml
	* I0310 21:28:23.368194    7648 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	* I0310 21:28:23.378006    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:28:24.055838    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:28:25.431107    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 21:28:23.089431    8404 out.go:150] * Creating docker container (CPUs=2, Memory=1800MB) ...
	* I0310 21:28:23.091076    8404 start.go:160] libmachine.API.Create for "bridge-20210310212817-6496" (driver="docker")
	* I0310 21:28:23.091363    8404 client.go:168] LocalClient.Create starting
	* I0310 21:28:23.092057    8404 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\ca.pem
	* I0310 21:28:23.092347    8404 main.go:121] libmachine: Decoding PEM data...
	* I0310 21:28:23.092347    8404 main.go:121] libmachine: Parsing certificate...
	* I0310 21:28:23.092905    8404 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\cert.pem
	* I0310 21:28:23.093192    8404 main.go:121] libmachine: Decoding PEM data...
	* I0310 21:28:23.093192    8404 main.go:121] libmachine: Parsing certificate...
	* I0310 21:28:23.114718    8404 cli_runner.go:115] Run: docker network inspect bridge-20210310212817-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	* W0310 21:28:23.750294    8404 cli_runner.go:162] docker network inspect bridge-20210310212817-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	* I0310 21:28:23.759480    8404 network_create.go:240] running [docker network inspect bridge-20210310212817-6496] to gather additional debugging logs...
	* I0310 21:28:23.759834    8404 cli_runner.go:115] Run: docker network inspect bridge-20210310212817-6496
	* W0310 21:28:24.354251    8404 cli_runner.go:162] docker network inspect bridge-20210310212817-6496 returned with exit code 1
	* I0310 21:28:24.354251    8404 network_create.go:243] error running [docker network inspect bridge-20210310212817-6496]: docker network inspect bridge-20210310212817-6496: exit status 1
	* stdout:
	* []
	* 
	* stderr:
	* Error: No such network: bridge-20210310212817-6496
	* I0310 21:28:24.354251    8404 network_create.go:245] output of [docker network inspect bridge-20210310212817-6496]: -- stdout --
	* []
	* 
	* -- /stdout --
	* ** stderr ** 
	* Error: No such network: bridge-20210310212817-6496
	* 
	* ** /stderr **
	* I0310 21:28:24.364515    8404 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	* I0310 21:28:25.053238    8404 network.go:193] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	* I0310 21:28:25.053432    8404 network_create.go:91] attempt to create network 192.168.49.0/24 with subnet: bridge-20210310212817-6496 and gateway 192.168.49.1 and MTU of 1500 ...
	* I0310 21:28:25.060213    8404 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20210310212817-6496
	* W0310 21:28:25.712306    8404 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20210310212817-6496 returned with exit code 1
	* W0310 21:28:25.712937    8404 out.go:191] ! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts
	* I0310 21:28:25.730648    8404 cli_runner.go:115] Run: docker ps -a --format 
	* I0310 21:28:26.363494    8404 cli_runner.go:115] Run: docker volume create bridge-20210310212817-6496 --label name.minikube.sigs.k8s.io=bridge-20210310212817-6496 --label created_by.minikube.sigs.k8s.io=true
	* I0310 21:28:26.985127    8404 oci.go:102] Successfully created a docker volume bridge-20210310212817-6496
	* I0310 21:28:26.990104    8404 cli_runner.go:115] Run: docker run --rm --name bridge-20210310212817-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-20210310212817-6496 --entrypoint /usr/bin/test -v bridge-20210310212817-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib
	* I0310 21:28:25.640989    9020 ssh_runner.go:189] Completed: docker info --format : (2.3041387s)
	* I0310 21:28:25.640989    9020 cni.go:74] Creating CNI manager for "kindnet"
	* I0310 21:28:25.640989    9020 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	* I0310 21:28:25.641973    9020 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.97 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-20210310212518-6496 NodeName:kindnet-20210310212518-6496 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	* I0310 21:28:25.641973    9020 kubeadm.go:154] kubeadm config:
	* apiVersion: kubeadm.k8s.io/v1beta2
	* kind: InitConfiguration
	* localAPIEndpoint:
	*   advertiseAddress: 192.168.49.97
	*   bindPort: 8443
	* bootstrapTokens:
	*   - groups:
	*       - system:bootstrappers:kubeadm:default-node-token
	*     ttl: 24h0m0s
	*     usages:
	*       - signing
	*       - authentication
	* nodeRegistration:
	*   criSocket: /var/run/dockershim.sock
	*   name: "kindnet-20210310212518-6496"
	*   kubeletExtraArgs:
	*     node-ip: 192.168.49.97
	*   taints: []
	* ---
	* apiVersion: kubeadm.k8s.io/v1beta2
	* kind: ClusterConfiguration
	* apiServer:
	*   certSANs: ["127.0.0.1", "localhost", "192.168.49.97"]
	*   extraArgs:
	*     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	* controllerManager:
	*   extraArgs:
	*     allocate-node-cidrs: "true"
	*     leader-elect: "false"
	* scheduler:
	*   extraArgs:
	*     leader-elect: "false"
	* certificatesDir: /var/lib/minikube/certs
	* clusterName: mk
	* controlPlaneEndpoint: control-plane.minikube.internal:8443
	* dns:
	*   type: CoreDNS
	* etcd:
	*   local:
	*     dataDir: /var/lib/minikube/etcd
	*     extraArgs:
	*       proxy-refresh-interval: "70000"
	* kubernetesVersion: v1.20.2
	* networking:
	*   dnsDomain: cluster.local
	*   podSubnet: "10.244.0.0/16"
	*   serviceSubnet: 10.96.0.0/12
	* ---
	* apiVersion: kubelet.config.k8s.io/v1beta1
	* kind: KubeletConfiguration
	* authentication:
	*   x509:
	*     clientCAFile: /var/lib/minikube/certs/ca.crt
	* cgroupDriver: cgroupfs
	* clusterDomain: "cluster.local"
	* # disable disk resource management by default
	* imageGCHighThresholdPercent: 100
	* evictionHard:
	*   nodefs.available: "0%"
	*   nodefs.inodesFree: "0%"
	*   imagefs.available: "0%"
	* failSwapOn: false
	* staticPodPath: /etc/kubernetes/manifests
	* ---
	* apiVersion: kubeproxy.config.k8s.io/v1alpha1
	* kind: KubeProxyConfiguration
	* clusterCIDR: "10.244.0.0/16"
	* metricsBindAddress: 0.0.0.0:10249
	* 
	* I0310 21:28:25.641973    9020 kubeadm.go:919] kubelet [Unit]
	* Wants=docker.socket
	* 
	* [Service]
	* ExecStart=
	* ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kindnet-20210310212518-6496 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.97
	* 
	* [Install]
	*  config:
	* {KubernetesVersion:v1.20.2 ClusterName:kindnet-20210310212518-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:}
	* I0310 21:28:25.650976    9020 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
	* I0310 21:28:25.734667    9020 binaries.go:44] Found k8s binaries, skipping transfer
	* I0310 21:28:25.744427    9020 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	* I0310 21:28:25.809199    9020 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	* I0310 21:28:25.988857    9020 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	* I0310 21:28:26.221007    9020 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1862 bytes)
	* I0310 21:28:26.420174    9020 ssh_runner.go:149] Run: grep 192.168.49.97	control-plane.minikube.internal$ /etc/hosts
	* I0310 21:28:26.450477    9020 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "192.168.49.97	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	* I0310 21:28:26.695060    9020 certs.go:52] Setting up C:\Users\jenkins\.minikube\profiles\kindnet-20210310212518-6496 for IP: 192.168.49.97
	* I0310 21:28:26.695791    9020 certs.go:171] skipping minikubeCA CA generation: C:\Users\jenkins\.minikube\ca.key
	* I0310 21:28:26.696179    9020 certs.go:171] skipping proxyClientCA CA generation: C:\Users\jenkins\.minikube\proxy-client-ca.key
	* I0310 21:28:26.696558    9020 certs.go:275] skipping minikube-user signed cert generation: C:\Users\jenkins\.minikube\profiles\kindnet-20210310212518-6496\client.key
	* I0310 21:28:26.696973    9020 certs.go:279] generating minikube signed cert: C:\Users\jenkins\.minikube\profiles\kindnet-20210310212518-6496\apiserver.key.b6188fac
	* I0310 21:28:26.696973    9020 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\kindnet-20210310212518-6496\apiserver.crt.b6188fac with IP's: [192.168.49.97 10.96.0.1 127.0.0.1 10.0.0.1]
	* I0310 21:28:26.990104    9020 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\kindnet-20210310212518-6496\apiserver.crt.b6188fac ...
	* I0310 21:28:26.990104    9020 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\kindnet-20210310212518-6496\apiserver.crt.b6188fac: {Name:mkd6267bd124d67754b39ddc65d651c49c7a4c8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 21:28:27.012346    9020 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\kindnet-20210310212518-6496\apiserver.key.b6188fac ...
	* I0310 21:28:27.012707    9020 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\kindnet-20210310212518-6496\apiserver.key.b6188fac: {Name:mkbd2a25d02763d6c7723b638ec574a431cdda16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 21:28:27.030041    9020 certs.go:290] copying C:\Users\jenkins\.minikube\profiles\kindnet-20210310212518-6496\apiserver.crt.b6188fac -> C:\Users\jenkins\.minikube\profiles\kindnet-20210310212518-6496\apiserver.crt
	* I0310 21:28:27.034455    9020 certs.go:294] copying C:\Users\jenkins\.minikube\profiles\kindnet-20210310212518-6496\apiserver.key.b6188fac -> C:\Users\jenkins\.minikube\profiles\kindnet-20210310212518-6496\apiserver.key
	* I0310 21:28:27.037762    9020 certs.go:279] generating aggregator signed cert: C:\Users\jenkins\.minikube\profiles\kindnet-20210310212518-6496\proxy-client.key
	* I0310 21:28:27.037762    9020 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\kindnet-20210310212518-6496\proxy-client.crt with IP's: []
	* I0310 21:28:27.379105    9020 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\kindnet-20210310212518-6496\proxy-client.crt ...
	* I0310 21:28:27.379105    9020 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\kindnet-20210310212518-6496\proxy-client.crt: {Name:mkfacc959702057f8fde995a62fb2714ef45ced0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 21:28:27.402107    9020 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\kindnet-20210310212518-6496\proxy-client.key ...
	* I0310 21:28:27.402641    9020 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\kindnet-20210310212518-6496\proxy-client.key: {Name:mkaf52ae98292dd7ec752406948b9dbf897adf58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	* I0310 21:28:27.411077    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992.pem (1338 bytes)
	* W0310 21:28:27.411077    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.411077    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140.pem (1338 bytes)
	* W0310 21:28:27.411077    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.411077    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156.pem (1338 bytes)
	* W0310 21:28:27.411077    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.411077    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056.pem (1338 bytes)
	* W0310 21:28:27.411077    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.411077    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476.pem (1338 bytes)
	* W0310 21:28:27.411077    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.417548    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728.pem (1338 bytes)
	* W0310 21:28:27.418083    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.418083    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984.pem (1338 bytes)
	* W0310 21:28:27.418403    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.418403    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232.pem (1338 bytes)
	* W0310 21:28:27.418914    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.419257    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512.pem (1338 bytes)
	* W0310 21:28:27.419446    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.419752    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056.pem (1338 bytes)
	* W0310 21:28:27.420240    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.420505    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516.pem (1338 bytes)
	* W0310 21:28:27.421080    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.421263    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352.pem (1338 bytes)
	* W0310 21:28:27.421713    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.421713    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920.pem (1338 bytes)
	* W0310 21:28:27.422186    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.422533    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052.pem (1338 bytes)
	* W0310 21:28:27.422656    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.423176    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452.pem (1338 bytes)
	* W0310 21:28:27.423461    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.423642    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588.pem (1338 bytes)
	* W0310 21:28:27.423896    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.423896    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944.pem (1338 bytes)
	* W0310 21:28:27.423896    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.423896    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040.pem (1338 bytes)
	* W0310 21:28:27.424647    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.424647    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172.pem (1338 bytes)
	* W0310 21:28:27.424647    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.424647    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372.pem (1338 bytes)
	* W0310 21:28:27.424647    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.425646    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396.pem (1338 bytes)
	* W0310 21:28:27.425646    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.425646    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700.pem (1338 bytes)
	* W0310 21:28:27.426644    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.426644    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736.pem (1338 bytes)
	* W0310 21:28:27.426644    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.426644    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368.pem (1338 bytes)
	* W0310 21:28:27.427642    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.427642    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492.pem (1338 bytes)
	* W0310 21:28:27.427642    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.427642    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496.pem (1338 bytes)
	* W0310 21:28:27.432929    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.433265    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552.pem (1338 bytes)
	* W0310 21:28:27.433265    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.433265    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692.pem (1338 bytes)
	* W0310 21:28:27.434202    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.434202    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856.pem (1338 bytes)
	* W0310 21:28:27.434202    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.434901    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024.pem (1338 bytes)
	* W0310 21:28:27.435343    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.435343    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160.pem (1338 bytes)
	* W0310 21:28:27.435343    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.435343    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432.pem (1338 bytes)
	* W0310 21:28:27.436878    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.437524    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440.pem (1338 bytes)
	* W0310 21:28:27.438440    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.438673    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452.pem (1338 bytes)
	* W0310 21:28:27.439141    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.439141    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800.pem (1338 bytes)
	* W0310 21:28:27.439141    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.439141    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464.pem (1338 bytes)
	* W0310 21:28:27.439141    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.439141    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748.pem (1338 bytes)
	* W0310 21:28:27.439141    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.439141    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088.pem (1338 bytes)
	* W0310 21:28:27.439141    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.439141    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520.pem (1338 bytes)
	* W0310 21:28:27.439141    9020 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520_empty.pem, impossibly tiny 0 bytes
	* I0310 21:28:27.439141    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca-key.pem (1679 bytes)
	* I0310 21:28:27.439141    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca.pem (1078 bytes)
	* I0310 21:28:27.439141    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\cert.pem (1123 bytes)
	* I0310 21:28:27.439141    9020 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\key.pem (1679 bytes)
	* I0310 21:28:27.454314    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\kindnet-20210310212518-6496\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	* I0310 21:28:27.832770    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\kindnet-20210310212518-6496\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	* I0310 21:28:28.259964    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\kindnet-20210310212518-6496\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	* I0310 21:28:28.492011    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\kindnet-20210310212518-6496\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	* I0310 21:28:28.137134    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 21:28:29.259370    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 21:28:30.619561    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 21:28:27.192991   22316 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_coredns --format=: (10.8957117s)
	* I0310 21:28:27.193742   22316 logs.go:255] 0 containers: []
	* W0310 21:28:27.194173   22316 logs.go:257] No container was found matching "coredns"
	* I0310 21:28:27.211606   22316 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format=
	* I0310 21:28:30.854831    8404 cli_runner.go:168] Completed: docker run --rm --name bridge-20210310212817-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-20210310212817-6496 --entrypoint /usr/bin/test -v bridge-20210310212817-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib: (3.8647417s)
	* I0310 21:28:30.854831    8404 oci.go:106] Successfully prepared a docker volume bridge-20210310212817-6496
	* I0310 21:28:30.854831    8404 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 21:28:30.855844    8404 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 21:28:30.855844    8404 kic.go:175] Starting extracting preloaded images to volume ...
	* I0310 21:28:30.865558    8404 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	* I0310 21:28:30.865558    8404 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-20210310212817-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir
	* W0310 21:28:31.550484    8404 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-20210310212817-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	* I0310 21:28:31.550484    8404 kic.go:182] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-20210310212817-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	* stdout:
	* 
	* stderr:
	* docker: Error response from daemon: status code not OK but 500: System.Exception: The notification platform is unavailable.
	* 
	*    at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)
	*    at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.WPF\PromptShareDirectory.cs:line 53
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__8.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 95
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 55
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.HttpApi\Controllers\FilesharingController.cs:line 21
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()
	* --- End of stack trace from previous location where exception was thrown ---
	*    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	*    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	*    at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()
	* CreateToastNotifier
	* Windows.UI, Version=255.255.255.255, Culture=neutral, PublicKeyToken=null, ContentType=WindowsRuntime
	* Windows.UI.Notifications.ToastNotificationManager
	* Windows.UI.Notifications.ToastNotifier CreateToastNotifier(System.String)
	* RestrictedDescription: The notification platform is unavailable.
	* See 'docker run --help'.
	* I0310 21:28:31.848367    8404 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:73 SystemTime:2021-03-10 21:28:31.4491496 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	* I0310 21:28:31.856724    8404 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	* I0310 21:28:28.868875    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	* I0310 21:28:29.115101    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	* I0310 21:28:29.596755    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	* I0310 21:28:29.966183    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	* I0310 21:28:30.240342    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7440.pem --> /usr/share/ca-certificates/7440.pem (1338 bytes)
	* I0310 21:28:30.528290    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7432.pem --> /usr/share/ca-certificates/7432.pem (1338 bytes)
	* I0310 21:28:30.992856    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4052.pem --> /usr/share/ca-certificates/4052.pem (1338 bytes)
	* I0310 21:28:31.472515    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5396.pem --> /usr/share/ca-certificates/5396.pem (1338 bytes)
	* I0310 21:28:31.757126    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4452.pem --> /usr/share/ca-certificates/4452.pem (1338 bytes)
	* I0310 21:28:32.102555    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5040.pem --> /usr/share/ca-certificates/5040.pem (1338 bytes)
	* I0310 21:28:32.518155    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9520.pem --> /usr/share/ca-certificates/9520.pem (1338 bytes)
	* I0310 21:28:32.752995    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\10992.pem --> /usr/share/ca-certificates/10992.pem (1338 bytes)
	* I0310 21:28:33.048306    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5736.pem --> /usr/share/ca-certificates/5736.pem (1338 bytes)
	* I0310 21:28:33.287199    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1140.pem --> /usr/share/ca-certificates/1140.pem (1338 bytes)
	* I0310 21:28:33.685674    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6692.pem --> /usr/share/ca-certificates/6692.pem (1338 bytes)
	* I0310 21:28:31.734666    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 21:28:32.648538    7648 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	* I0310 21:28:33.160365    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 21:28:33.538386    7648 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	* I0310 21:28:34.282900    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 21:28:35.643829    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 21:28:32.855005    8404 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname bridge-20210310212817-6496 --name bridge-20210310212817-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-20210310212817-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=bridge-20210310212817-6496 --volume bridge-20210310212817-6496:/var --security-opt apparmor=unconfined --memory=1800mb --memory-swap=1800mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
	* I0310 21:28:33.935611    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9088.pem --> /usr/share/ca-certificates/9088.pem (1338 bytes)
	* I0310 21:28:34.310700    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\12056.pem --> /usr/share/ca-certificates/12056.pem (1338 bytes)
	* I0310 21:28:34.929246    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6492.pem --> /usr/share/ca-certificates/6492.pem (1338 bytes)
	* I0310 21:28:35.322209    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\352.pem --> /usr/share/ca-certificates/352.pem (1338 bytes)
	* I0310 21:28:35.650379    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3056.pem --> /usr/share/ca-certificates/3056.pem (1338 bytes)
	* I0310 21:28:36.074610    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4944.pem --> /usr/share/ca-certificates/4944.pem (1338 bytes)
	* I0310 21:28:36.261876    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\800.pem --> /usr/share/ca-certificates/800.pem (1338 bytes)
	* I0310 21:28:36.515063    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1476.pem --> /usr/share/ca-certificates/1476.pem (1338 bytes)
	* I0310 21:28:36.695319    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\232.pem --> /usr/share/ca-certificates/232.pem (1338 bytes)
	* I0310 21:28:36.957396    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1728.pem --> /usr/share/ca-certificates/1728.pem (1338 bytes)
	* I0310 21:28:37.339369    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5700.pem --> /usr/share/ca-certificates/5700.pem (1338 bytes)
	* I0310 21:28:37.608675    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	* I0310 21:28:37.853644    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6368.pem --> /usr/share/ca-certificates/6368.pem (1338 bytes)
	* I0310 21:28:38.225404    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6552.pem --> /usr/share/ca-certificates/6552.pem (1338 bytes)
	* I0310 21:28:38.626531    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6496.pem --> /usr/share/ca-certificates/6496.pem (1338 bytes)
	* I0310 21:28:37.140021    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 21:28:40.061841    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000b1b8d0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 21:28:41.221772    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000f52160}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 21:28:36.841716   22316 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-scheduler --format=: (9.629928s)
	* I0310 21:28:36.841716   22316 logs.go:255] 1 containers: [1f40f04d70b6]
	* I0310 21:28:36.849449   22316 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format=
	* I0310 21:28:37.635813    8404 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname bridge-20210310212817-6496 --name bridge-20210310212817-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-20210310212817-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=bridge-20210310212817-6496 --volume bridge-20210310212817-6496:/var --security-opt apparmor=unconfined --memory=1800mb --memory-swap=1800mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e: (4.7808255s)
	* I0310 21:28:37.650899    8404 cli_runner.go:115] Run: docker container inspect bridge-20210310212817-6496 --format=
	* I0310 21:28:38.273052    8404 cli_runner.go:115] Run: docker container inspect bridge-20210310212817-6496 --format=
	* I0310 21:28:38.881459    8404 cli_runner.go:115] Run: docker exec bridge-20210310212817-6496 stat /var/lib/dpkg/alternatives/iptables
	* I0310 21:28:39.954754    8404 cli_runner.go:168] Completed: docker exec bridge-20210310212817-6496 stat /var/lib/dpkg/alternatives/iptables: (1.0732987s)
	* I0310 21:28:39.954754    8404 oci.go:278] the created container "bridge-20210310212817-6496" has a running status.
	* I0310 21:28:39.954754    8404 kic.go:206] Creating ssh key for kic: C:\Users\jenkins\.minikube\machines\bridge-20210310212817-6496\id_rsa...
	* I0310 21:28:40.166449    8404 kic_runner.go:188] docker (temp): C:\Users\jenkins\.minikube\machines\bridge-20210310212817-6496\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	* I0310 21:28:41.196617    8404 cli_runner.go:115] Run: docker container inspect bridge-20210310212817-6496 --format=
	* I0310 21:28:41.793830    8404 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	* I0310 21:28:41.793830    8404 kic_runner.go:115] Args: [docker exec --privileged bridge-20210310212817-6496 chown docker:docker /home/docker/.ssh/authorized_keys]
	* I0310 21:28:39.049069    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4588.pem --> /usr/share/ca-certificates/4588.pem (1338 bytes)
	* I0310 21:28:39.340943    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\2512.pem --> /usr/share/ca-certificates/2512.pem (1338 bytes)
	* I0310 21:28:39.603522    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3516.pem --> /usr/share/ca-certificates/3516.pem (1338 bytes)
	* I0310 21:28:39.982936    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1984.pem --> /usr/share/ca-certificates/1984.pem (1338 bytes)
	* I0310 21:28:40.297490    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8748.pem --> /usr/share/ca-certificates/8748.pem (1338 bytes)
	* I0310 21:28:40.549362    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7024.pem --> /usr/share/ca-certificates/7024.pem (1338 bytes)
	* I0310 21:28:40.979439    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7160.pem --> /usr/share/ca-certificates/7160.pem (1338 bytes)
	* I0310 21:28:41.288710    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5172.pem --> /usr/share/ca-certificates/5172.pem (1338 bytes)
	* I0310 21:28:41.676745    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1156.pem --> /usr/share/ca-certificates/1156.pem (1338 bytes)
	* I0310 21:28:42.087930    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3920.pem --> /usr/share/ca-certificates/3920.pem (1338 bytes)
	* I0310 21:28:42.448187    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6856.pem --> /usr/share/ca-certificates/6856.pem (1338 bytes)
	* I0310 21:28:42.885439    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5372.pem --> /usr/share/ca-certificates/5372.pem (1338 bytes)
	* I0310 21:28:43.269262    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8464.pem --> /usr/share/ca-certificates/8464.pem (1338 bytes)
	* I0310 21:28:43.666110    9020 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	* I0310 21:28:42.304185    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0010a1610}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 21:28:44.142855    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00130f3b0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 21:28:45.313438    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001597bf0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 21:28:44.938954   22316 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-proxy --format=: (8.0893686s)
	* I0310 21:28:44.938954   22316 logs.go:255] 0 containers: []
	* W0310 21:28:44.938954   22316 logs.go:257] No container was found matching "kube-proxy"
	* I0310 21:28:44.948793   22316 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format=
	* I0310 21:28:43.019598    8404 kic_runner.go:124] Done: [docker exec --privileged bridge-20210310212817-6496 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.2252299s)
	* I0310 21:28:43.022083    8404 kic.go:240] ensuring only current user has permissions to key file located at : C:\Users\jenkins\.minikube\machines\bridge-20210310212817-6496\id_rsa...
	* I0310 21:28:43.839967    8404 cli_runner.go:115] Run: docker container inspect bridge-20210310212817-6496 --format=
	* I0310 21:28:44.423064    8404 machine.go:88] provisioning docker machine ...
	* I0310 21:28:44.423711    8404 ubuntu.go:169] provisioning hostname "bridge-20210310212817-6496"
	* I0310 21:28:44.429450    8404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210310212817-6496
	* I0310 21:28:45.083502    8404 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:28:45.092670    8404 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55218 <nil> <nil>}
	* I0310 21:28:45.092955    8404 main.go:121] libmachine: About to run SSH command:
	* sudo hostname bridge-20210310212817-6496 && echo "bridge-20210310212817-6496" | sudo tee /etc/hostname
	* I0310 21:28:45.101871    8404 main.go:121] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	* I0310 21:28:43.964263    9020 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	* I0310 21:28:44.124761    9020 ssh_runner.go:149] Run: openssl version
	* I0310 21:28:44.214631    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7024.pem && ln -fs /usr/share/ca-certificates/7024.pem /etc/ssl/certs/7024.pem"
	* I0310 21:28:44.337360    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7024.pem
	* I0310 21:28:44.388868    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 23:11 /usr/share/ca-certificates/7024.pem
	* I0310 21:28:44.392807    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7024.pem
	* I0310 21:28:44.469839    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7024.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:44.560792    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7160.pem && ln -fs /usr/share/ca-certificates/7160.pem /etc/ssl/certs/7160.pem"
	* I0310 21:28:44.651580    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7160.pem
	* I0310 21:28:44.688158    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 12 04:51 /usr/share/ca-certificates/7160.pem
	* I0310 21:28:44.697904    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7160.pem
	* I0310 21:28:44.757209    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7160.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:44.855100    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5172.pem && ln -fs /usr/share/ca-certificates/5172.pem /etc/ssl/certs/5172.pem"
	* I0310 21:28:44.969314    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5172.pem
	* I0310 21:28:45.025060    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 26 21:25 /usr/share/ca-certificates/5172.pem
	* I0310 21:28:45.025060    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5172.pem
	* I0310 21:28:45.115852    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5172.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:45.200953    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1156.pem && ln -fs /usr/share/ca-certificates/1156.pem /etc/ssl/certs/1156.pem"
	* I0310 21:28:45.369533    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1156.pem
	* I0310 21:28:45.403247    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 00:26 /usr/share/ca-certificates/1156.pem
	* I0310 21:28:45.428479    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1156.pem
	* I0310 21:28:45.508317    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1156.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:45.593122    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3920.pem && ln -fs /usr/share/ca-certificates/3920.pem /etc/ssl/certs/3920.pem"
	* I0310 21:28:45.687288    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3920.pem
	* I0310 21:28:45.721604    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 22:06 /usr/share/ca-certificates/3920.pem
	* I0310 21:28:45.727676    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3920.pem
	* I0310 21:28:45.814464    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3920.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:45.904533    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6856.pem && ln -fs /usr/share/ca-certificates/6856.pem /etc/ssl/certs/6856.pem"
	* I0310 21:28:46.030257    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6856.pem
	* I0310 21:28:46.144552    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 00:21 /usr/share/ca-certificates/6856.pem
	* I0310 21:28:46.154801    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6856.pem
	* I0310 21:28:46.214831    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6856.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:46.329211    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5372.pem && ln -fs /usr/share/ca-certificates/5372.pem /etc/ssl/certs/5372.pem"
	* I0310 21:28:46.513200    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5372.pem
	* I0310 21:28:46.561797    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 23 00:40 /usr/share/ca-certificates/5372.pem
	* I0310 21:28:46.570688    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5372.pem
	* I0310 21:28:46.640061    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5372.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:46.750874    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8464.pem && ln -fs /usr/share/ca-certificates/8464.pem /etc/ssl/certs/8464.pem"
	* I0310 21:28:46.850674    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8464.pem
	* I0310 21:28:46.889293    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:32 /usr/share/ca-certificates/8464.pem
	* I0310 21:28:46.910416    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8464.pem
	* I0310 21:28:46.980701    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8464.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:47.108696    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	* I0310 21:28:47.244585    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	* I0310 21:28:47.283176    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Jan  5 23:33 /usr/share/ca-certificates/minikubeCA.pem
	* I0310 21:28:47.286116    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	* I0310 21:28:47.346873    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	* I0310 21:28:47.433614    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9520.pem && ln -fs /usr/share/ca-certificates/9520.pem /etc/ssl/certs/9520.pem"
	* I0310 21:28:47.507028    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9520.pem
	* I0310 21:28:47.547054    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 14:54 /usr/share/ca-certificates/9520.pem
	* I0310 21:28:47.559389    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9520.pem
	* I0310 21:28:47.607515    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9520.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:47.729079    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10992.pem && ln -fs /usr/share/ca-certificates/10992.pem /etc/ssl/certs/10992.pem"
	* I0310 21:28:47.854645    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/10992.pem
	* I0310 21:28:47.895669    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 21:44 /usr/share/ca-certificates/10992.pem
	* I0310 21:28:47.913171    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10992.pem
	* I0310 21:28:48.004904    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10992.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:48.107293    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7440.pem && ln -fs /usr/share/ca-certificates/7440.pem /etc/ssl/certs/7440.pem"
	* I0310 21:28:48.194084    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7440.pem
	* I0310 21:28:48.231818    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 13 14:39 /usr/share/ca-certificates/7440.pem
	* I0310 21:28:48.254820    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7440.pem
	* I0310 21:28:48.330411    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7440.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:48.422227    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7432.pem && ln -fs /usr/share/ca-certificates/7432.pem /etc/ssl/certs/7432.pem"
	* I0310 21:28:48.509000    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7432.pem
	* I0310 21:28:48.541755    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 17:58 /usr/share/ca-certificates/7432.pem
	* I0310 21:28:48.545648    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7432.pem
	* I0310 21:28:48.641783    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7432.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:48.729875    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4052.pem && ln -fs /usr/share/ca-certificates/4052.pem /etc/ssl/certs/4052.pem"
	* I0310 21:28:46.388229    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00171e510}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 21:28:47.743692    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000e906c0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 21:28:49.279854    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000597ab0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 21:28:49.163814    8404 main.go:121] libmachine: SSH cmd err, output: <nil>: bridge-20210310212817-6496
	* 
	* I0310 21:28:49.172452    8404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210310212817-6496
	* I0310 21:28:49.768668    8404 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:28:49.769433    8404 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55218 <nil> <nil>}
	* I0310 21:28:49.769676    8404 main.go:121] libmachine: About to run SSH command:
	* 
	* 		if ! grep -xq '.*\sbridge-20210310212817-6496' /etc/hosts; then
	* 			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	* 				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-20210310212817-6496/g' /etc/hosts;
	* 			else 
	* 				echo '127.0.1.1 bridge-20210310212817-6496' | sudo tee -a /etc/hosts; 
	* 			fi
	* 		fi
	* I0310 21:28:50.722968    8404 main.go:121] libmachine: SSH cmd err, output: <nil>: 
	* I0310 21:28:50.722968    8404 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	* I0310 21:28:50.722968    8404 ubuntu.go:177] setting up certificates
	* I0310 21:28:50.722968    8404 provision.go:83] configureAuth start
	* I0310 21:28:50.724694    8404 cli_runner.go:115] Run: docker container inspect -f "" bridge-20210310212817-6496
	* I0310 21:28:51.377370    8404 provision.go:137] copyHostCerts
	* I0310 21:28:51.378964    8404 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	* I0310 21:28:51.378964    8404 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	* I0310 21:28:51.379397    8404 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	* I0310 21:28:51.382379    8404 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	* I0310 21:28:51.383364    8404 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	* I0310 21:28:51.383364    8404 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	* I0310 21:28:51.386383    8404 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	* I0310 21:28:51.386383    8404 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	* I0310 21:28:51.386383    8404 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	* I0310 21:28:51.390092    8404 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.bridge-20210310212817-6496 san=[172.17.0.4 127.0.0.1 localhost 127.0.0.1 minikube bridge-20210310212817-6496]
	* I0310 21:28:51.683788    8404 provision.go:165] copyRemoteCerts
	* I0310 21:28:51.687078    8404 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	* I0310 21:28:51.702378    8404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210310212817-6496
	* I0310 21:28:48.836728    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4052.pem
	* I0310 21:28:48.863652    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 18:40 /usr/share/ca-certificates/4052.pem
	* I0310 21:28:48.877691    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4052.pem
	* I0310 21:28:48.967225    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4052.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:49.052419    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5396.pem && ln -fs /usr/share/ca-certificates/5396.pem /etc/ssl/certs/5396.pem"
	* I0310 21:28:49.135043    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5396.pem
	* I0310 21:28:49.237349    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  8 23:38 /usr/share/ca-certificates/5396.pem
	* I0310 21:28:49.248936    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5396.pem
	* I0310 21:28:49.307958    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5396.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:49.509631    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4452.pem && ln -fs /usr/share/ca-certificates/4452.pem /etc/ssl/certs/4452.pem"
	* I0310 21:28:49.619673    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4452.pem
	* I0310 21:28:49.664879    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 22 23:48 /usr/share/ca-certificates/4452.pem
	* I0310 21:28:49.674177    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4452.pem
	* I0310 21:28:49.745827    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4452.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:49.818681    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5040.pem && ln -fs /usr/share/ca-certificates/5040.pem /etc/ssl/certs/5040.pem"
	* I0310 21:28:49.933415    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5040.pem
	* I0310 21:28:49.968978    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 08:36 /usr/share/ca-certificates/5040.pem
	* I0310 21:28:49.980569    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5040.pem
	* I0310 21:28:50.044277    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5040.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:50.130657    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5736.pem && ln -fs /usr/share/ca-certificates/5736.pem /etc/ssl/certs/5736.pem"
	* I0310 21:28:50.184512    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5736.pem
	* I0310 21:28:50.224581    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 25 23:18 /usr/share/ca-certificates/5736.pem
	* I0310 21:28:50.236955    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5736.pem
	* I0310 21:28:50.277486    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5736.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:50.426520    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1140.pem && ln -fs /usr/share/ca-certificates/1140.pem /etc/ssl/certs/1140.pem"
	* I0310 21:28:50.488236    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1140.pem
	* I0310 21:28:50.522275    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 02:25 /usr/share/ca-certificates/1140.pem
	* I0310 21:28:50.537379    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1140.pem
	* I0310 21:28:50.594339    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1140.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:50.708246    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4944.pem && ln -fs /usr/share/ca-certificates/4944.pem /etc/ssl/certs/4944.pem"
	* I0310 21:28:50.818657    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4944.pem
	* I0310 21:28:50.884078    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  9 23:40 /usr/share/ca-certificates/4944.pem
	* I0310 21:28:50.895544    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4944.pem
	* I0310 21:28:50.981584    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4944.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:51.103981    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6692.pem && ln -fs /usr/share/ca-certificates/6692.pem /etc/ssl/certs/6692.pem"
	* I0310 21:28:51.253901    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6692.pem
	* I0310 21:28:51.289332    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 20:42 /usr/share/ca-certificates/6692.pem
	* I0310 21:28:51.302854    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6692.pem
	* I0310 21:28:51.369243    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6692.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:51.738664    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9088.pem && ln -fs /usr/share/ca-certificates/9088.pem /etc/ssl/certs/9088.pem"
	* I0310 21:28:51.912365    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9088.pem
	* I0310 21:28:51.938081    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 00:22 /usr/share/ca-certificates/9088.pem
	* I0310 21:28:51.957134    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9088.pem
	* I0310 21:28:52.019943    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9088.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:52.164222    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12056.pem && ln -fs /usr/share/ca-certificates/12056.pem /etc/ssl/certs/12056.pem"
	* I0310 21:28:52.256647    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/12056.pem
	* I0310 21:28:52.367830    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  6 07:21 /usr/share/ca-certificates/12056.pem
	* I0310 21:28:52.378840    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12056.pem
	* I0310 21:28:52.424563    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12056.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:52.516296    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6492.pem && ln -fs /usr/share/ca-certificates/6492.pem /etc/ssl/certs/6492.pem"
	* I0310 21:28:52.607078    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6492.pem
	* I0310 21:28:52.647778    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 01:11 /usr/share/ca-certificates/6492.pem
	* I0310 21:28:52.657755    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6492.pem
	* I0310 21:28:52.707656    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6492.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:52.840929    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/352.pem && ln -fs /usr/share/ca-certificates/352.pem /etc/ssl/certs/352.pem"
	* I0310 21:28:52.937635    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/352.pem
	* I0310 21:28:52.963645    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 12 14:51 /usr/share/ca-certificates/352.pem
	* I0310 21:28:52.969231    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/352.pem
	* I0310 21:28:53.023868    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/352.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:53.104323    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3056.pem && ln -fs /usr/share/ca-certificates/3056.pem /etc/ssl/certs/3056.pem"
	* I0310 21:28:53.221286    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3056.pem
	* I0310 21:28:53.257510    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:11 /usr/share/ca-certificates/3056.pem
	* I0310 21:28:53.272061    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3056.pem
	* I0310 21:28:53.336987    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3056.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:53.422275    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/800.pem && ln -fs /usr/share/ca-certificates/800.pem /etc/ssl/certs/800.pem"
	* I0310 21:28:53.512858    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/800.pem
	* I0310 21:28:53.557545    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 24 01:48 /usr/share/ca-certificates/800.pem
	* I0310 21:28:53.568992    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/800.pem
	* I0310 21:28:53.624808    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/800.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:53.735622    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1476.pem && ln -fs /usr/share/ca-certificates/1476.pem /etc/ssl/certs/1476.pem"
	* I0310 21:28:53.531498   12868 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (35.4838184s)
	* I0310 21:28:53.537728   12868 logs.go:122] Gathering logs for kube-apiserver [cc2004a03eb1] ...
	* I0310 21:28:53.538061   12868 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 cc2004a03eb1"
	* I0310 21:28:51.877458    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000ef4560}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 21:28:53.350826    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0011a1f00}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 21:28:54.711655    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001349c30}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 21:28:56.114794    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0012b3e70}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 21:28:53.291611   22316 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kubernetes-dashboard --format=: (8.3426386s)
	* I0310 21:28:53.291921   22316 logs.go:255] 0 containers: []
	* W0310 21:28:53.291921   22316 logs.go:257] No container was found matching "kubernetes-dashboard"
	* I0310 21:28:53.306419   22316 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format=
	* I0310 21:28:52.312998    8404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55218 SSHKeyPath:C:\Users\jenkins\.minikube\machines\bridge-20210310212817-6496\id_rsa Username:docker}
	* I0310 21:28:52.725448    8404 ssh_runner.go:189] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.0383737s)
	* I0310 21:28:52.726412    8404 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1253 bytes)
	* I0310 21:28:53.328551    8404 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	* I0310 21:28:53.810854    8404 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	* I0310 21:28:54.559932    8404 provision.go:86] duration metric: configureAuth took 3.8369784s
	* I0310 21:28:54.560067    8404 ubuntu.go:193] setting minikube options for container-runtime
	* I0310 21:28:54.568339    8404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210310212817-6496
	* I0310 21:28:55.211592    8404 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:28:55.211956    8404 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55218 <nil> <nil>}
	* I0310 21:28:55.211956    8404 main.go:121] libmachine: About to run SSH command:
	* df --output=fstype / | tail -n 1
	* I0310 21:28:55.783068    8404 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	* 
	* I0310 21:28:55.783068    8404 ubuntu.go:71] root file system type: overlay
	* I0310 21:28:55.783068    8404 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	* I0310 21:28:55.798766    8404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210310212817-6496
	* I0310 21:28:56.458892    8404 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:28:56.459460    8404 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55218 <nil> <nil>}
	* I0310 21:28:56.459460    8404 main.go:121] libmachine: About to run SSH command:
	* sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP \$MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* " | sudo tee /lib/systemd/system/docker.service.new
	* I0310 21:28:53.851891    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1476.pem
	* I0310 21:28:53.873416    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:50 /usr/share/ca-certificates/1476.pem
	* I0310 21:28:53.874686    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1476.pem
	* I0310 21:28:53.934340    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1476.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:53.997052    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/232.pem && ln -fs /usr/share/ca-certificates/232.pem /etc/ssl/certs/232.pem"
	* I0310 21:28:54.123989    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/232.pem
	* I0310 21:28:54.152762    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 28 02:13 /usr/share/ca-certificates/232.pem
	* I0310 21:28:54.161380    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/232.pem
	* I0310 21:28:54.237333    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/232.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:54.351384    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1728.pem && ln -fs /usr/share/ca-certificates/1728.pem /etc/ssl/certs/1728.pem"
	* I0310 21:28:54.440656    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1728.pem
	* I0310 21:28:54.464316    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 20:57 /usr/share/ca-certificates/1728.pem
	* I0310 21:28:54.485005    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1728.pem
	* I0310 21:28:54.531832    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:54.643568    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5700.pem && ln -fs /usr/share/ca-certificates/5700.pem /etc/ssl/certs/5700.pem"
	* I0310 21:28:54.741288    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5700.pem
	* I0310 21:28:54.790666    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  1 19:58 /usr/share/ca-certificates/5700.pem
	* I0310 21:28:54.810338    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5700.pem
	* I0310 21:28:54.919693    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5700.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:55.018001    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7452.pem && ln -fs /usr/share/ca-certificates/7452.pem /etc/ssl/certs/7452.pem"
	* I0310 21:28:55.120435    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7452.pem
	* I0310 21:28:55.160060    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 20 00:41 /usr/share/ca-certificates/7452.pem
	* I0310 21:28:55.173963    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7452.pem
	* I0310 21:28:55.255187    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7452.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:55.336943    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6368.pem && ln -fs /usr/share/ca-certificates/6368.pem /etc/ssl/certs/6368.pem"
	* I0310 21:28:55.428222    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6368.pem
	* I0310 21:28:55.459353    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 19:11 /usr/share/ca-certificates/6368.pem
	* I0310 21:28:55.463414    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6368.pem
	* I0310 21:28:55.604670    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6368.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:55.697362    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8748.pem && ln -fs /usr/share/ca-certificates/8748.pem /etc/ssl/certs/8748.pem"
	* I0310 21:28:55.818929    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8748.pem
	* I0310 21:28:55.849281    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 19:09 /usr/share/ca-certificates/8748.pem
	* I0310 21:28:55.860393    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8748.pem
	* I0310 21:28:55.949362    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8748.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:56.077727    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6552.pem && ln -fs /usr/share/ca-certificates/6552.pem /etc/ssl/certs/6552.pem"
	* I0310 21:28:56.177731    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6552.pem
	* I0310 21:28:56.218676    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 19 22:08 /usr/share/ca-certificates/6552.pem
	* I0310 21:28:56.242416    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6552.pem
	* I0310 21:28:56.296070    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6552.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:56.395647    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6496.pem && ln -fs /usr/share/ca-certificates/6496.pem /etc/ssl/certs/6496.pem"
	* I0310 21:28:56.554482    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6496.pem
	* I0310 21:28:56.601877    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 19:16 /usr/share/ca-certificates/6496.pem
	* I0310 21:28:56.625427    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6496.pem
	* I0310 21:28:56.760871    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6496.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:56.902935    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4588.pem && ln -fs /usr/share/ca-certificates/4588.pem /etc/ssl/certs/4588.pem"
	* I0310 21:28:57.065395    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4588.pem
	* I0310 21:28:57.100029    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  3 21:41 /usr/share/ca-certificates/4588.pem
	* I0310 21:28:57.112666    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4588.pem
	* I0310 21:28:57.166856    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4588.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:57.243947    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2512.pem && ln -fs /usr/share/ca-certificates/2512.pem /etc/ssl/certs/2512.pem"
	* I0310 21:28:57.354542    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2512.pem
	* I0310 21:28:57.396392    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:32 /usr/share/ca-certificates/2512.pem
	* I0310 21:28:57.406836    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2512.pem
	* I0310 21:28:57.505354    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2512.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:57.589934    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3516.pem && ln -fs /usr/share/ca-certificates/3516.pem /etc/ssl/certs/3516.pem"
	* I0310 21:28:57.750240    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3516.pem
	* I0310 21:28:57.807987    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 19:10 /usr/share/ca-certificates/3516.pem
	* I0310 21:28:57.818963    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3516.pem
	* I0310 21:28:57.896390    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3516.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:57.991792    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1984.pem && ln -fs /usr/share/ca-certificates/1984.pem /etc/ssl/certs/1984.pem"
	* I0310 21:28:58.105036    9020 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1984.pem
	* I0310 21:28:58.144987    9020 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 21:55 /usr/share/ca-certificates/1984.pem
	* I0310 21:28:58.154391    9020 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1984.pem
	* I0310 21:28:58.214090    9020 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1984.pem /etc/ssl/certs/51391683.0"
	* I0310 21:28:58.290912    9020 kubeadm.go:385] StartCluster: {Name:kindnet-20210310212518-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:kindnet-20210310212518-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.97 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	* I0310 21:28:58.292895    9020 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format=
	* I0310 21:28:57.304422    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0015fbb80}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 21:28:58.373379    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001767090}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 21:29:00.355329    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000e809b0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 21:29:00.234308   22316 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_storage-provisioner --format=: (6.9279134s)
	* I0310 21:29:00.234308   22316 logs.go:255] 0 containers: []
	* W0310 21:29:00.234308   22316 logs.go:257] No container was found matching "storage-provisioner"
	* I0310 21:29:00.243752   22316 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format=
	* I0310 21:28:57.189937    8404 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	* Description=Docker Application Container Engine
	* Documentation=https://docs.docker.com
	* BindsTo=containerd.service
	* After=network-online.target firewalld.service containerd.service
	* Wants=network-online.target
	* Requires=docker.socket
	* StartLimitBurst=3
	* StartLimitIntervalSec=60
	* 
	* [Service]
	* Type=notify
	* Restart=on-failure
	* 
	* 
	* 
	* # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* # The base configuration already specifies an 'ExecStart=...' command. The first directive
	* # here is to clear out that command inherited from the base configuration. Without this,
	* # the command from the base configuration and the command specified here are treated as
	* # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* # will catch this invalid input and refuse to start the service with an error like:
	* #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* 
	* # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* ExecStart=
	* ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* ExecReload=/bin/kill -s HUP $MAINPID
	* 
	* # Having non-zero Limit*s causes performance problems due to accounting overhead
	* # in the kernel. We recommend using cgroups to do container-local accounting.
	* LimitNOFILE=infinity
	* LimitNPROC=infinity
	* LimitCORE=infinity
	* 
	* # Uncomment TasksMax if your systemd version supports it.
	* # Only systemd 226 and above support this version.
	* TasksMax=infinity
	* TimeoutStartSec=0
	* 
	* # set delegate yes so that systemd does not reset the cgroups of docker containers
	* Delegate=yes
	* 
	* # kill only the docker process, not all processes in the cgroup
	* KillMode=process
	* 
	* [Install]
	* WantedBy=multi-user.target
	* 
	* I0310 21:28:57.202951    8404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210310212817-6496
	* I0310 21:28:57.840511    8404 main.go:121] libmachine: Using SSH client type: native
	* I0310 21:28:57.840511    8404 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55218 <nil> <nil>}
	* I0310 21:28:57.840511    8404 main.go:121] libmachine: About to run SSH command:
	* sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	* I0310 21:28:58.925410    9020 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	* I0310 21:28:59.020586    9020 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	* I0310 21:28:59.128150    9020 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	* I0310 21:28:59.138042    9020 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	* I0310 21:28:59.293743    9020 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	* stdout:
	* 
	* stderr:
	* ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	* I0310 21:28:59.293743    9020 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	* I0310 21:29:01.094450   12868 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 cc2004a03eb1": (7.5560726s)
	* I0310 21:29:01.122510   12868 logs.go:122] Gathering logs for Docker ...
	* I0310 21:29:01.122510   12868 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	* I0310 21:29:02.121344   12868 logs.go:122] Gathering logs for container status ...
	* I0310 21:29:02.121344   12868 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	* I0310 21:29:01.625639    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0012e0ff0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 21:29:03.112935    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000d90b00}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 21:29:04.324221    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0011ac2c0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 21:29:06.235285    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000f58e10}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 21:29:03.568487   22316 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-controller-manager --format=: (3.3247464s)
	* I0310 21:29:03.569010   22316 logs.go:255] 1 containers: [69707ea57db5]
	* I0310 21:29:03.569010   22316 logs.go:122] Gathering logs for etcd [e9fcf0291799] ...
	* I0310 21:29:03.569010   22316 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 e9fcf0291799"
	* I0310 21:29:06.228059   22316 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 e9fcf0291799": (2.6590572s)
	* I0310 21:29:06.255715   22316 logs.go:122] Gathering logs for kube-scheduler [1f40f04d70b6] ...
	* I0310 21:29:06.256016   22316 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 1f40f04d70b6"
	* I0310 21:29:06.252243   12868 ssh_runner.go:189] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (4.1309132s)
	* I0310 21:29:06.261201   12868 logs.go:122] Gathering logs for kubelet ...
	* I0310 21:29:06.261201   12868 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	* I0310 21:29:07.175951    7648 ssh_runner.go:189] Completed: docker images --format :: (44.9175801s)
	* I0310 21:29:07.176476    7648 docker.go:423] Got preloaded images: -- stdout --
	* k8s.gcr.io/kube-proxy:v1.20.2
	* k8s.gcr.io/kube-controller-manager:v1.20.2
	* k8s.gcr.io/kube-apiserver:v1.20.2
	* k8s.gcr.io/kube-scheduler:v1.20.2
	* kubernetesui/dashboard:v2.1.0
	* gcr.io/k8s-minikube/storage-provisioner:v4
	* k8s.gcr.io/etcd:3.4.13-0
	* k8s.gcr.io/coredns:1.7.0
	* kubernetesui/metrics-scraper:v1.0.4
	* k8s.gcr.io/pause:3.2
	* 
	* -- /stdout --
	* I0310 21:29:07.176476    7648 docker.go:429] minikube-local-cache-test:functional-20210105233232-2512 wasn't preloaded
	* I0310 21:29:07.176476    7648 cache_images.go:76] LoadImages start: [minikube-local-cache-test:functional-20210105233232-2512 minikube-local-cache-test:functional-20210115023213-8464 minikube-local-cache-test:functional-20210120022529-1140 minikube-local-cache-test:functional-20210128021318-232 minikube-local-cache-test:functional-20210107002220-9088 minikube-local-cache-test:functional-20210220004129-7452 minikube-local-cache-test:functional-20210303214129-4588 minikube-local-cache-test:functional-20210219220622-3920 minikube-local-cache-test:functional-20210304184021-4052 minikube-local-cache-test:functional-20210106002159-6856 minikube-local-cache-test:functional-20210106011107-6492 minikube-local-cache-test:functional-20210123004019-5372 minikube-local-cache-test:functional-20210212145109-352 minikube-local-cache-test:functional-20210112045103-7160 minikube-local-cache-test:functional-20210119220838-6552 minikube-local-cache-test:functional-20210301195830-5700 minikube-local-cache-test:functional-20210304002630-1156 minikube-local-cache-test:functional-20210306072141-12056 minikube-local-cache-test:functional-20210310083645-5040 minikube-local-cache-test:functional-20210106215525-1984 minikube-local-cache-test:functional-20210115191024-3516 minikube-local-cache-test:functional-20210219145454-9520 minikube-local-cache-test:functional-20210224014800-800 minikube-local-cache-test:functional-20210308233820-5396 minikube-local-cache-test:functional-20210309234032-4944 minikube-local-cache-test:functional-20210114204234-6692 minikube-local-cache-test:functional-20210120175851-7432 minikube-local-cache-test:functional-20210120214442-10992 minikube-local-cache-test:functional-20210126212539-5172 minikube-local-cache-test:functional-20210107190945-8748 minikube-local-cache-test:functional-20210213143925-7440 minikube-local-cache-test:functional-20210310191609-6496 minikube-local-cache-test:functional-20210120231122-7024 minikube-local-cache-test:functional-20210225231842-5736]
	* I0310 21:29:07.247655    7648 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210308233820-5396
	* I0310 21:29:07.255269    7648 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210120022529-1140
	* I0310 21:29:07.290948    7648 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210303214129-4588
	* I0310 21:29:07.299085    7648 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210120175851-7432
	* I0310 21:29:07.301088    7648 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210219145454-9520
	* I0310 21:29:07.311489    7648 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210115023213-8464
	* I0310 21:29:07.318482    7648 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210306072141-12056
	* I0310 21:29:07.339838    7648 image.go:168] retrieving image: minikube-local-cache-test:functional-20210105233232-2512
	* I0310 21:29:07.339838    7648 image.go:168] retrieving image: minikube-local-cache-test:functional-20210112045103-7160
	* I0310 21:29:07.381821    7648 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210119220838-6552
	* I0310 21:29:07.414796    7648 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210112045103-7160: Error response from daemon: reference does not exist
	* I0310 21:29:07.438906    7648 image.go:168] retrieving image: minikube-local-cache-test:functional-20210106215525-1984
	* I0310 21:29:07.443742    7648 image.go:168] retrieving image: minikube-local-cache-test:functional-20210106002159-6856
	* I0310 21:29:07.477911    7648 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210114204234-6692
	* I0310 21:29:07.481914    7648 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210219220622-3920
	* I0310 21:29:07.482920    7648 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210304002630-1156
	* I0310 21:29:07.508226    7648 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210212145109-352
	* I0310 21:29:07.520813    7648 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210106002159-6856: Error response from daemon: reference does not exist
	* I0310 21:29:07.529517    7648 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210120214442-10992
	* I0310 21:29:07.538874    7648 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210213143925-7440
	* I0310 21:29:07.564257    7648 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210115191024-3516
	* I0310 21:29:07.564257    7648 image.go:168] retrieving image: minikube-local-cache-test:functional-20210107190945-8748
	* I0310 21:29:07.582641    7648 image.go:168] retrieving image: minikube-local-cache-test:functional-20210106011107-6492
	* I0310 21:29:07.610659    7648 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210304184021-4052
	* I0310 21:29:07.620307    7648 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210310191609-6496
	* I0310 21:29:07.620307    7648 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210224014800-800
	* W0310 21:29:07.625730    7648 image.go:185] authn lookup for minikube-local-cache-test:functional-20210112045103-7160 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	* I0310 21:29:07.638172    7648 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210107190945-8748: Error response from daemon: reference does not exist
	* I0310 21:29:07.663188    7648 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210301195830-5700
	* W0310 21:29:07.679587    7648 image.go:185] authn lookup for minikube-local-cache-test:functional-20210106002159-6856 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	* I0310 21:29:07.697751    7648 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210120231122-7024
	* I0310 21:29:07.726814    7648 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210220004129-7452
	* I0310 21:29:07.754744    7648 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210106011107-6492: Error response from daemon: reference does not exist
	* I0310 21:29:07.760100    7648 image.go:168] retrieving image: minikube-local-cache-test:functional-20210107002220-9088
	* I0310 21:29:07.762688    7648 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210123004019-5372
	* I0310 21:29:07.762688    7648 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210310083645-5040
	* I0310 21:29:07.773934    7648 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210225231842-5736
	* I0310 21:29:07.808531    7648 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210107002220-9088: Error response from daemon: reference does not exist
	* I0310 21:29:07.818075    7648 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210106215525-1984: Error response from daemon: reference does not exist
	* I0310 21:29:07.819365    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0016a0860}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 21:29:07.828903    7648 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210126212539-5172
	* W0310 21:29:07.832926    7648 image.go:185] authn lookup for minikube-local-cache-test:functional-20210107190945-8748 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	* I0310 21:29:07.851899    7648 image.go:176] daemon lookup for minikube-local-cache-test:functional-20210105233232-2512: Error response from daemon: reference does not exist
	* I0310 21:29:07.862363    7648 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210309234032-4944
	* I0310 21:29:07.873411    7648 ssh_runner.go:149] Run: docker image inspect --format  minikube-local-cache-test:functional-20210128021318-232
	* W0310 21:29:07.925009    7648 image.go:185] authn lookup for minikube-local-cache-test:functional-20210106011107-6492 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	* I0310 21:29:07.957446    7648 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210106002159-6856 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210106002159-6856: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	* I0310 21:29:07.957656    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210106002159-6856" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210106002159-6856
	* I0310 21:29:07.957656    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106002159-6856 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856
	* I0310 21:29:07.957656    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856
	* I0310 21:29:07.973596    7648 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210112045103-7160 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210112045103-7160: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	* I0310 21:29:07.973596    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210112045103-7160" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210112045103-7160
	* I0310 21:29:07.973596    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210112045103-7160 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160
	* I0310 21:29:07.973596    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160
	* I0310 21:29:07.977472    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856
	* I0310 21:29:07.985449    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160
	* W0310 21:29:07.993431    7648 image.go:185] authn lookup for minikube-local-cache-test:functional-20210107002220-9088 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	* W0310 21:29:08.011621    7648 image.go:185] authn lookup for minikube-local-cache-test:functional-20210106215525-1984 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	* I0310 21:29:08.011621    7648 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210107190945-8748 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210107190945-8748: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	* I0310 21:29:08.012422    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210107190945-8748" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210107190945-8748
	* I0310 21:29:08.012422    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107190945-8748 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748
	* I0310 21:29:08.012422    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748
	* W0310 21:29:08.019510    7648 image.go:185] authn lookup for minikube-local-cache-test:functional-20210105233232-2512 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	* I0310 21:29:08.021502    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748
	* I0310 21:29:08.102733    7648 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210106011107-6492 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210106011107-6492: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	* I0310 21:29:08.102733    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210106011107-6492" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210106011107-6492
	* I0310 21:29:08.102733    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106011107-6492 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492
	* I0310 21:29:08.102733    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492
	* I0310 21:29:08.111748    7648 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210107002220-9088 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210107002220-9088: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	* I0310 21:29:08.111748    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210107002220-9088" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210107002220-9088
	* I0310 21:29:08.111748    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210107002220-9088 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088
	* I0310 21:29:08.111748    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088
	* I0310 21:29:08.122034    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492
	* I0310 21:29:08.122034    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088
	* I0310 21:29:08.148352    7648 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210106215525-1984 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210106215525-1984: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	* I0310 21:29:08.148652    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210106215525-1984" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210106215525-1984
	* I0310 21:29:08.149109    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210106215525-1984 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984
	* I0310 21:29:08.149109    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984
	* I0310 21:29:08.165622    7648 image.go:74] error retrieve Image minikube-local-cache-test:functional-20210105233232-2512 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210105233232-2512: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	* I0310 21:29:08.165622    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210105233232-2512" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210105233232-2512
	* I0310 21:29:08.165622    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210105233232-2512 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512
	* I0310 21:29:08.165622    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512
	* I0310 21:29:08.169242    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984
	* I0310 21:29:08.176244    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512
	* I0310 21:29:09.567317    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc000d2f450}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* W0310 21:29:10.051982    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 21:29:10.051982    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210224014800-800" needs transfer: "minikube-local-cache-test:functional-20210224014800-800" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:29:10.051982    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210224014800-800 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800
	* I0310 21:29:10.051982    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210224014800-800
	* W0310 21:29:10.051982    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:29:10.051982    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 21:29:10.060493    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210304002630-1156" needs transfer: "minikube-local-cache-test:functional-20210304002630-1156" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:29:10.060493    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210304184021-4052" needs transfer: "minikube-local-cache-test:functional-20210304184021-4052" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* W0310 21:29:10.060493    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 21:29:10.060493    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304184021-4052 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052
	* I0310 21:29:10.060493    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304184021-4052
	* I0310 21:29:10.060493    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210304002630-1156 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156
	* I0310 21:29:10.060690    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210304002630-1156
	* W0310 21:29:10.060690    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 21:29:10.060690    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210115191024-3516" needs transfer: "minikube-local-cache-test:functional-20210115191024-3516" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* W0310 21:29:10.060493    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:29:10.051982    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 21:29:10.060690    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115191024-3516 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516
	* I0310 21:29:10.061176    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115191024-3516
	* I0310 21:29:10.060493    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210120214442-10992" needs transfer: "minikube-local-cache-test:functional-20210120214442-10992" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:29:10.061176    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120214442-10992 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992
	* I0310 21:29:10.061632    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120214442-10992
	* I0310 21:29:10.060690    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210213143925-7440" needs transfer: "minikube-local-cache-test:functional-20210213143925-7440" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* W0310 21:29:10.060493    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 21:29:10.062091    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210212145109-352" needs transfer: "minikube-local-cache-test:functional-20210212145109-352" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:29:10.060690    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210310191609-6496" needs transfer: "minikube-local-cache-test:functional-20210310191609-6496" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:29:10.062091    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210212145109-352 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352
	* I0310 21:29:10.062091    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210212145109-352
	* I0310 21:29:10.062370    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310191609-6496 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496
	* I0310 21:29:10.062091    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210213143925-7440 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440
	* I0310 21:29:10.062370    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210213143925-7440
	* I0310 21:29:10.062370    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310191609-6496
	* I0310 21:29:10.179063    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115191024-3516
	* I0310 21:29:10.193864    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210224014800-800
	* I0310 21:29:10.196851    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:29:10.197950    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304184021-4052
	* I0310 21:29:10.198469    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210304002630-1156
	* I0310 21:29:10.212681    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:29:10.216143    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:29:10.222593    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:29:10.231927    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210212145109-352
	* I0310 21:29:10.237399    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310191609-6496
	* I0310 21:29:10.244314    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210213143925-7440
	* I0310 21:29:10.245969    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120214442-10992
	* I0310 21:29:10.271257    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:29:10.272296    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:29:10.274661    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:29:10.279212    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:29:10.852741    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc0018aaeb0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 21:29:11.106054    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:29:11.111331    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:29:11.132864    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:29:11.225802    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:29:11.237782    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:29:11.238475    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.0253617s)
	* I0310 21:29:11.238475    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:29:11.240376    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.0177872s)
	* I0310 21:29:11.240671    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:29:11.262814    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:29:08.023502    8404 main.go:121] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-01-29 14:31:32.000000000 +0000
	* +++ /lib/systemd/system/docker.service.new	2021-03-10 21:28:57.181175000 +0000
	* @@ -1,30 +1,32 @@
	*  [Unit]
	*  Description=Docker Application Container Engine
	*  Documentation=https://docs.docker.com
	* +BindsTo=containerd.service
	*  After=network-online.target firewalld.service containerd.service
	*  Wants=network-online.target
	* -Requires=docker.socket containerd.service
	* +Requires=docker.socket
	* +StartLimitBurst=3
	* +StartLimitIntervalSec=60
	*  
	*  [Service]
	*  Type=notify
	* -# the default is not to use systemd for cgroups because the delegate issues still
	* -# exists and systemd currently does not support the cgroup feature set required
	* -# for containers run by docker
	* -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	* -ExecReload=/bin/kill -s HUP $MAINPID
	* -TimeoutSec=0
	* -RestartSec=2
	* -Restart=always
	* -
	* -# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	* -# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	* -# to make them work for either version of systemd.
	* -StartLimitBurst=3
	* +Restart=on-failure
	*  
	* -# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	* -# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	* -# this option work for either version of systemd.
	* -StartLimitInterval=60s
	* +
	* +
	* +# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	* +# The base configuration already specifies an 'ExecStart=...' command. The first directive
	* +# here is to clear out that command inherited from the base configuration. Without this,
	* +# the command from the base configuration and the command specified here are treated as
	* +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	* +# will catch this invalid input and refuse to start the service with an error like:
	* +#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	* +
	* +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	* +# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	* +ExecStart=
	* +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	* +ExecReload=/bin/kill -s HUP $MAINPID
	*  
	*  # Having non-zero Limit*s causes performance problems due to accounting overhead
	*  # in the kernel. We recommend using cgroups to do container-local accounting.
	* @@ -32,16 +34,16 @@
	*  LimitNPROC=infinity
	*  LimitCORE=infinity
	*  
	* -# Comment TasksMax if your systemd version does not support it.
	* -# Only systemd 226 and above support this option.
	* +# Uncomment TasksMax if your systemd version supports it.
	* +# Only systemd 226 and above support this version.
	*  TasksMax=infinity
	* +TimeoutStartSec=0
	*  
	*  # set delegate yes so that systemd does not reset the cgroups of docker containers
	*  Delegate=yes
	*  
	*  # kill only the docker process, not all processes in the cgroup
	*  KillMode=process
	* -OOMScoreAdjust=-500
	*  
	*  [Install]
	*  WantedBy=multi-user.target
	* Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	* Executing: /lib/systemd/systemd-sysv-install enable docker
	* 
	* I0310 21:29:08.023502    8404 machine.go:91] provisioned docker machine in 23.600521s
	* I0310 21:29:08.023502    8404 client.go:171] LocalClient.Create took 44.9323022s
	* I0310 21:29:08.023502    8404 start.go:168] duration metric: libmachine.API.Create for "bridge-20210310212817-6496" took 44.9325888s
	* I0310 21:29:08.023502    8404 start.go:267] post-start starting for "bridge-20210310212817-6496" (driver="docker")
	* I0310 21:29:08.023502    8404 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	* I0310 21:29:08.040824    8404 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	* I0310 21:29:08.047824    8404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210310212817-6496
	* I0310 21:29:08.680721    8404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55218 SSHKeyPath:C:\Users\jenkins\.minikube\machines\bridge-20210310212817-6496\id_rsa Username:docker}
	* I0310 21:29:09.046513    8404 ssh_runner.go:149] Run: cat /etc/os-release
	* I0310 21:29:09.072541    8404 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	* I0310 21:29:09.073741    8404 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	* I0310 21:29:09.073741    8404 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	* I0310 21:29:09.073741    8404 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	* I0310 21:29:09.073741    8404 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	* I0310 21:29:09.073741    8404 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	* I0310 21:29:09.073741    8404 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	* I0310 21:29:09.087670    8404 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	* I0310 21:29:09.103516    8404 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	* I0310 21:29:09.170243    8404 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	* I0310 21:29:09.340139    8404 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	* I0310 21:29:09.684263    8404 start.go:270] post-start completed in 1.6607662s
	* I0310 21:29:09.718725    8404 cli_runner.go:115] Run: docker container inspect -f "" bridge-20210310212817-6496
	* I0310 21:29:10.510669    8404 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\bridge-20210310212817-6496\config.json ...
	* I0310 21:29:10.581920    8404 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	* I0310 21:29:10.594910    8404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210310212817-6496
	* I0310 21:29:11.341568    8404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55218 SSHKeyPath:C:\Users\jenkins\.minikube\machines\bridge-20210310212817-6496\id_rsa Username:docker}
	* I0310 21:29:11.764969    8404 ssh_runner.go:189] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.183053s)
	* I0310 21:29:11.765504    8404 start.go:129] duration metric: createHost completed in 48.6878847s
	* I0310 21:29:11.765504    8404 start.go:80] releasing machines lock for "bridge-20210310212817-6496", held for 48.6884811s
	* I0310 21:29:11.776061    8404 cli_runner.go:115] Run: docker container inspect -f "" bridge-20210310212817-6496
	* I0310 21:29:13.523966    9020 out.go:150]   - Generating certificates and keys ...
	* I0310 21:29:09.148516   12868 ssh_runner.go:189] Completed: /bin/bash -c "sudo journalctl -u kubelet -n 400": (2.8873241s)
	* I0310 21:29:09.208030   12868 logs.go:122] Gathering logs for dmesg ...
	* I0310 21:29:09.208030   12868 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	* I0310 21:29:11.222431   12868 ssh_runner.go:189] Completed: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400": (2.0144085s)
	* I0310 21:29:11.225802   12868 logs.go:122] Gathering logs for etcd [e2b3a62f4f6c] ...
	* I0310 21:29:11.225802   12868 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 e2b3a62f4f6c"
	* W0310 21:29:12.218820    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:29:12.218820    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:29:12.219037    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:29:12.219037    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 21:29:12.219037    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160: NewSession: ssh: rejected: connect failed (open failed)
	* I0310 21:29:12.219037    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210225231842-5736" needs transfer: "minikube-local-cache-test:functional-20210225231842-5736" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* W0310 21:29:12.219037    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 21:29:12.219037    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210225231842-5736 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736
	* I0310 21:29:12.219037    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210225231842-5736
	* I0310 21:29:12.219037    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492: NewSession: ssh: rejected: connect failed (open failed)
	* I0310 21:29:12.219037    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210112045103-7160 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210112045103-7160 (4096 bytes)
	* I0310 21:29:12.219037    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106011107-6492 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106011107-6492 (4096 bytes)
	* W0310 21:29:12.219037    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 21:29:12.219303    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210128021318-232" needs transfer: "minikube-local-cache-test:functional-20210128021318-232" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:29:12.219303    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210128021318-232 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232
	* I0310 21:29:12.219303    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210128021318-232
	* W0310 21:29:12.219303    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 21:29:12.219303    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856: NewSession: ssh: rejected: connect failed (open failed)
	* I0310 21:29:12.219523    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106002159-6856 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106002159-6856 (4096 bytes)
	* I0310 21:29:12.219037    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512: NewSession: ssh: rejected: connect failed (open failed)
	* I0310 21:29:12.221131    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210105233232-2512 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210105233232-2512 (4096 bytes)
	* I0310 21:29:12.219037    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210310083645-5040" needs transfer: "minikube-local-cache-test:functional-20210310083645-5040" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* W0310 21:29:12.219037    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:29:12.218820    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:29:12.218820    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:29:12.218820    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:29:12.218820    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* W0310 21:29:12.221131    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 21:29:12.221357    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088: NewSession: ssh: rejected: connect failed (open failed)
	* W0310 21:29:12.221357    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 21:29:12.221357    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107002220-9088 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210107002220-9088 (4096 bytes)
	* I0310 21:29:12.221357    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210309234032-4944" needs transfer: "minikube-local-cache-test:functional-20210309234032-4944" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:29:12.221357    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210309234032-4944 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944
	* I0310 21:29:12.221357    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210309234032-4944
	* W0310 21:29:12.218820    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 21:29:12.221679    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210301195830-5700" needs transfer: "minikube-local-cache-test:functional-20210301195830-5700" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:29:12.221679    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210301195830-5700 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700
	* I0310 21:29:12.221679    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748: NewSession: ssh: rejected: connect failed (open failed)
	* I0310 21:29:12.221679    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210107190945-8748 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210107190945-8748 (4096 bytes)
	* W0310 21:29:12.221131    7648 ssh_runner.go:83] session error, resetting client: ssh: rejected: connect failed (open failed)
	* I0310 21:29:12.222371    7648 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984: NewSession: ssh: rejected: connect failed (open failed)
	* I0310 21:29:12.222647    7648 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210106215525-1984 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210106215525-1984 (4096 bytes)
	* I0310 21:29:12.221679    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210301195830-5700
	* I0310 21:29:12.221679    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210310083645-5040 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040
	* I0310 21:29:12.223634    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210310083645-5040
	* I0310 21:29:12.223982    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210120231122-7024" needs transfer: "minikube-local-cache-test:functional-20210120231122-7024" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:29:12.224075    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120231122-7024 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024
	* I0310 21:29:12.224335    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120231122-7024
	* I0310 21:29:12.225305    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210123004019-5372" needs transfer: "minikube-local-cache-test:functional-20210123004019-5372" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:29:12.225305    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210123004019-5372 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372
	* I0310 21:29:12.225305    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210123004019-5372
	* I0310 21:29:12.225305    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210220004129-7452" needs transfer: "minikube-local-cache-test:functional-20210220004129-7452" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:29:12.225305    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210220004129-7452 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452
	* I0310 21:29:12.225305    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210220004129-7452
	* I0310 21:29:12.225768    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210126212539-5172" needs transfer: "minikube-local-cache-test:functional-20210126212539-5172" does not exist at hash "sha256:3124ee2c11ad89161c4ef8b91314a0e3bcd64c842e8c75039ff0df529b2ab61b" in container runtime
	* I0310 21:29:12.225768    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210126212539-5172 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172
	* I0310 21:29:12.226128    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210126212539-5172
	* I0310 21:29:12.391924    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:29:12.416122    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:29:12.441889    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210225231842-5736
	* I0310 21:29:12.442983    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210128021318-232
	* I0310 21:29:12.450184    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:29:12.469003    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:29:12.470012    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:29:12.472269    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210309234032-4944
	* I0310 21:29:12.473442    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210310083645-5040
	* I0310 21:29:12.478917    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:29:12.482708    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:29:12.482708    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:29:12.482708    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210220004129-7452
	* I0310 21:29:12.489532    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:29:12.494515    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:29:12.494515    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:29:12.512965    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210123004019-5372
	* I0310 21:29:12.523392    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:29:12.531282    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210301195830-5700
	* I0310 21:29:12.532240    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120231122-7024
	* I0310 21:29:12.536332    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210126212539-5172
	* I0310 21:29:12.586607    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:29:12.593200    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:29:12.601819    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:29:12.614808    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:29:13.364764    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00025f1e0}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 21:29:13.664797    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.1858841s)
	* I0310 21:29:13.665215    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:29:13.676365    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2073662s)
	* I0310 21:29:13.676365    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:29:13.692939    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2423118s)
	* I0310 21:29:13.693435    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:29:13.706955    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.224251s)
	* I0310 21:29:13.706955    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:29:13.710366    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2208376s)
	* I0310 21:29:13.710877    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:29:13.752581    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.3364631s)
	* I0310 21:29:13.752581    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:29:13.783528    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.26014s)
	* I0310 21:29:13.783928    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:29:13.798563    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2053673s)
	* I0310 21:29:13.798830    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:29:13.847147    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.3522633s)
	* I0310 21:29:13.847147    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.3644439s)
	* I0310 21:29:13.847147    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:29:13.847147    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:29:13.847147    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.4552282s)
	* I0310 21:29:13.848379    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:29:13.895821    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.4258137s)
	* I0310 21:29:13.895821    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:29:13.909373    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2945693s)
	* I0310 21:29:13.910097    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:29:13.920958    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.3343558s)
	* I0310 21:29:13.921784    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:29:13.960114    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.4652305s)
	* I0310 21:29:13.961080    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:29:14.057432    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.4556187s)
	* I0310 21:29:14.057820    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:29:16.078502    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00123c780}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 21:29:12.537539    8404 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	* I0310 21:29:12.564019    8404 ssh_runner.go:149] Run: systemctl --version
	* I0310 21:29:12.607883    8404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210310212817-6496
	* I0310 21:29:12.617936    8404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210310212817-6496
	* I0310 21:29:13.830768    8404 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210310212817-6496: (1.2126032s)
	* I0310 21:29:13.831018    8404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55218 SSHKeyPath:C:\Users\jenkins\.minikube\machines\bridge-20210310212817-6496\id_rsa Username:docker}
	* I0310 21:29:14.036549    8404 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210310212817-6496: (1.419165s)
	* I0310 21:29:14.036972    8404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55218 SSHKeyPath:C:\Users\jenkins\.minikube\machines\bridge-20210310212817-6496\id_rsa Username:docker}
	* I0310 21:29:14.306078    8404 ssh_runner.go:189] Completed: systemctl --version: (1.7417308s)
	* I0310 21:29:14.315866    8404 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	* I0310 21:29:14.471363    8404 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	* I0310 21:29:14.823912    8404 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (2.2860599s)
	* I0310 21:29:14.823912    8404 cruntime.go:206] skipping containerd shutdown because we are bound to it
	* I0310 21:29:14.836116    8404 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	* I0310 21:29:14.915069    8404 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	* image-endpoint: unix:///var/run/dockershim.sock
	* " | sudo tee /etc/crictl.yaml"
	* I0310 21:29:15.142132    8404 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	* I0310 21:29:15.249537    8404 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	* I0310 21:29:16.256410    8404 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.0068771s)
	* I0310 21:29:16.278557    8404 ssh_runner.go:149] Run: sudo systemctl start docker
	* I0310 21:29:16.382503    8404 ssh_runner.go:149] Run: docker version --format 
	* I0310 21:29:17.645345    8404 ssh_runner.go:189] Completed: docker version --format : (1.262847s)
	* I0310 21:29:17.175614    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001785970}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 21:29:17.177938    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210303214129-4588" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 21:29:17.178108    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210303214129-4588 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588
	* I0310 21:29:17.178108    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210303214129-4588
	* I0310 21:29:17.180645    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210120022529-1140" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 21:29:17.180645    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120022529-1140 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140
	* I0310 21:29:17.180645    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120022529-1140
	* I0310 21:29:17.180645    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210306072141-12056" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 21:29:17.180645    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210306072141-12056 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056
	* I0310 21:29:17.180645    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210306072141-12056
	* I0310 21:29:17.180645    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210114204234-6692" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 21:29:17.180645    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210114204234-6692 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692
	* I0310 21:29:17.180645    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210114204234-6692
	* I0310 21:29:17.180645    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210219220622-3920" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 21:29:17.180645    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219220622-3920 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920
	* I0310 21:29:17.180645    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219220622-3920
	* I0310 21:29:17.180645    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210308233820-5396" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 21:29:17.180645    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210308233820-5396 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396
	* I0310 21:29:17.180645    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210308233820-5396
	* I0310 21:29:17.182625    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210219145454-9520" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 21:29:17.182625    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210219145454-9520 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520
	* I0310 21:29:17.182625    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210219145454-9520
	* I0310 21:29:17.182625    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210115023213-8464" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 21:29:17.182625    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210115023213-8464 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464
	* I0310 21:29:17.182625    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210115023213-8464
	* I0310 21:29:17.183941    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210120175851-7432" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 21:29:17.183941    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210120175851-7432 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432
	* I0310 21:29:17.183941    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210120175851-7432
	* I0310 21:29:17.189694    7648 cache_images.go:104] "minikube-local-cache-test:functional-20210119220838-6552" needs transfer: needs transfer timed out in 10.000000 seconds
	* I0310 21:29:17.189694    7648 localpath.go:146] windows sanitize: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test:functional-20210119220838-6552 -> C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552
	* I0310 21:29:17.189694    7648 cache_images.go:237] Loading image from cache: C:\Users\jenkins\.minikube\cache\images\minikube-local-cache-test_functional-20210119220838-6552
	* I0310 21:29:17.218621    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210303214129-4588
	* I0310 21:29:17.263615    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120022529-1140
	* I0310 21:29:17.277604    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210306072141-12056
	* I0310 21:29:17.278608    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219220622-3920
	* I0310 21:29:17.282614    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210114204234-6692
	* I0310 21:29:17.303499    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210219145454-9520
	* I0310 21:29:17.304677    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:29:17.310571    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210120175851-7432
	* I0310 21:29:17.310571    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210308233820-5396
	* I0310 21:29:17.310571    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210119220838-6552
	* I0310 21:29:17.311498    7648 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210115023213-8464
	* I0310 21:29:17.312495    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:29:17.353686    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:29:17.357459    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:29:17.363637    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:29:17.398564    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:29:17.400613    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:29:17.401567    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:29:17.402557    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:29:17.405567    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	* I0310 21:29:18.456760    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.1520876s)
	* I0310 21:29:18.457243    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:29:18.457920    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.1454284s)
	* I0310 21:29:18.458224    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:29:18.566607    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.1640537s)
	* I0310 21:29:18.567024    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:29:18.568617    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.168008s)
	* I0310 21:29:18.568869    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:29:18.599483    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2009233s)
	* I0310 21:29:18.599855    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:29:18.620002    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2661034s)
	* I0310 21:29:18.620155    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:29:18.635235    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2336719s)
	* I0310 21:29:18.635519    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:29:18.647351    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2835677s)
	* I0310 21:29:18.647351    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:29:18.662304    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2567412s)
	* I0310 21:29:18.662304    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:29:18.663296    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.3054878s)
	* I0310 21:29:18.663296    7648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55188 SSHKeyPath:C:\Users\jenkins\.minikube\machines\cilium-20210310211546-6496\id_rsa Username:docker}
	* I0310 21:29:19.438143    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc001089920}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 21:29:20.840856    7648 pod_ready.go:102] pod "coredns-74ff55c5b-rh4j9" in "kube-system" namespace is not Running: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:28:01 +0000 GMT Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-10 21:27:21 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.2 PodIP: PodIPs:[] StartTime:2021-03-10 21:28:01 +0000 GMT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/coredns:1.7.0 ImageID: ContainerID: Started:0xc00035d150}] QOSClass:Burstable EphemeralContainerStatuses:[]}
	* I0310 21:29:17.205650   22316 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 1f40f04d70b6": (10.9496715s)
	* I0310 21:29:17.231614   22316 logs.go:122] Gathering logs for Docker ...
	* I0310 21:29:17.231614   22316 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	* I0310 21:29:19.255764   22316 ssh_runner.go:189] Completed: /bin/bash -c "sudo journalctl -u docker -n 400": (2.0241573s)
	* I0310 21:29:19.265019   22316 logs.go:122] Gathering logs for container status ...
	* I0310 21:29:19.265204   22316 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	* I0310 21:29:20.441413   13364 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force": (1m24.5191686s)
	* I0310 21:29:20.455920   13364 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	* I0310 21:29:20.625916   13364 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format=
	* I0310 21:29:21.493676   13364 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	* I0310 21:29:21.508041   13364 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	* I0310 21:29:21.774904   13364 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	* stdout:
	* 
	* stderr:
	* ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	* ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	* I0310 21:29:21.775204   13364 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	* I0310 21:29:17.650253    8404 out.go:150] * Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	* I0310 21:29:17.664088    8404 cli_runner.go:115] Run: docker exec -t bridge-20210310212817-6496 dig +short host.docker.internal
	* I0310 21:29:20.688173    8404 cli_runner.go:168] Completed: docker exec -t bridge-20210310212817-6496 dig +short host.docker.internal: (3.024095s)
	* I0310 21:29:20.688353    8404 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	* I0310 21:29:20.700978    8404 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	* I0310 21:29:20.761757    8404 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	* I0310 21:29:20.994646    8404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" bridge-20210310212817-6496
	* I0310 21:29:21.688759    8404 localpath.go:92] copying C:\Users\jenkins\.minikube\client.crt -> C:\Users\jenkins\.minikube\profiles\bridge-20210310212817-6496\client.crt
	* I0310 21:29:21.703409    8404 localpath.go:117] copying C:\Users\jenkins\.minikube\client.key -> C:\Users\jenkins\.minikube\profiles\bridge-20210310212817-6496\client.key
	* I0310 21:29:21.708245    8404 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	* I0310 21:29:21.709663    8404 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	* I0310 21:29:21.719744    8404 ssh_runner.go:149] Run: docker images --format :

                                                
                                                
-- /stdout --
** stderr ** 
	E0310 21:28:33.790663    6224 out.go:340] unable to execute * 2021-03-10 21:27:32.794492 W | etcdserver: request "header:<ID:11303041234760733094 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1149 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1056 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>" with result "size:16" took too long (208.7231ms) to execute
	: html/template:* 2021-03-10 21:27:32.794492 W | etcdserver: request "header:<ID:11303041234760733094 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1149 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1056 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>" with result "size:16" took too long (208.7231ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 21:28:33.841491    6224 out.go:340] unable to execute * 2021-03-10 21:27:33.513631 W | etcdserver: request "header:<ID:11303041234760733105 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.17.0.9\" mod_revision:1147 > success:<request_put:<key:\"/registry/masterleases/172.17.0.9\" value_size:65 lease:2079669197905957292 >> failure:<request_range:<key:\"/registry/masterleases/172.17.0.9\" > >>" with result "size:16" took too long (198.1095ms) to execute
	: html/template:* 2021-03-10 21:27:33.513631 W | etcdserver: request "header:<ID:11303041234760733105 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.17.0.9\" mod_revision:1147 > success:<request_put:<key:\"/registry/masterleases/172.17.0.9\" value_size:65 lease:2079669197905957292 >> failure:<request_range:<key:\"/registry/masterleases/172.17.0.9\" > >>" with result "size:16" took too long (198.1095ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 21:28:34.003498    6224 out.go:340] unable to execute * 2021-03-10 21:28:32.001826 W | etcdserver: request "header:<ID:11303041234760733324 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1187 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1056 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>" with result "size:16" took too long (135.6895ms) to execute
	: html/template:* 2021-03-10 21:28:32.001826 W | etcdserver: request "header:<ID:11303041234760733324 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1187 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1056 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>" with result "size:16" took too long (135.6895ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E0310 21:29:15.459409    6224 out.go:335] unable to parse "* I0310 21:28:18.047812    8404 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 21:28:18.047812    8404 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 21:29:15.466404    6224 out.go:335] unable to parse "* I0310 21:28:19.060122    8404 cli_runner.go:168] Completed: docker system info --format \"{{json .}}\": (1.0123141s)\n": template: * I0310 21:28:19.060122    8404 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0123141s)
	:1: function "json" not defined - returning raw string.
	E0310 21:29:16.386781    6224 out.go:335] unable to parse "* I0310 21:28:20.510062    8404 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 21:28:20.510062    8404 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 21:29:16.393806    6224 out.go:335] unable to parse "* I0310 21:28:22.295184    8404 cli_runner.go:168] Completed: docker system info --format \"{{json .}}\": (1.7851283s)\n": template: * I0310 21:28:22.295184    8404 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.7851283s)
	:1: function "json" not defined - returning raw string.
	E0310 21:29:16.613852    6224 out.go:340] unable to execute * I0310 21:28:22.049708    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:28:22.049708    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:28:22.049708    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:16.626240    6224 out.go:340] unable to execute * I0310 21:28:22.272536    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:28:22.272536    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:28:22.272536    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:16.640478    6224 out.go:340] unable to execute * I0310 21:28:22.551013    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:28:22.551013    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:28:22.551013    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"8443/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "8443/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:16.683093    6224 out.go:340] unable to execute * I0310 21:28:23.378006    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:28:23.378006    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:28:23.378006    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:16.728095    6224 out.go:340] unable to execute * I0310 21:28:23.114718    8404 cli_runner.go:115] Run: docker network inspect bridge-20210310212817-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	: template: * I0310 21:28:23.114718    8404 cli_runner.go:115] Run: docker network inspect bridge-20210310212817-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	:1:282: executing "* I0310 21:28:23.114718    8404 cli_runner.go:115] Run: docker network inspect bridge-20210310212817-6496 --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\"\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:16.734091    6224 out.go:340] unable to execute * W0310 21:28:23.750294    8404 cli_runner.go:162] docker network inspect bridge-20210310212817-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	: template: * W0310 21:28:23.750294    8404 cli_runner.go:162] docker network inspect bridge-20210310212817-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	:1:277: executing "* W0310 21:28:23.750294    8404 cli_runner.go:162] docker network inspect bridge-20210310212817-6496 --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\" returned with exit code 1\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:16.816649    6224 out.go:340] unable to execute * I0310 21:28:24.364515    8404 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	: template: * I0310 21:28:24.364515    8404 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	:1:262: executing "* I0310 21:28:24.364515    8404 cli_runner.go:115] Run: docker network inspect bridge --format \"{\"Name\": \"{{.Name}}\",\"Driver\": \"{{.Driver}}\",\"Subnet\": \"{{range .IPAM.Config}}{{.Subnet}}{{end}}\",\"Gateway\": \"{{range .IPAM.Config}}{{.Gateway}}{{end}}\",\"MTU\": {{if (index .Options \"com.docker.network.driver.mtu\")}}{{(index .Options \"com.docker.network.driver.mtu\")}}{{else}}0{{end}}, \"ContainerIPs\": [{{range $k,$v := .Containers }}\"{{$v.IPv4Address}}\",{{end}}]}\"\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:17.985610    6224 out.go:335] unable to parse "* I0310 21:28:30.865558    8404 cli_runner.go:115] Run: docker system info --format \"{{json .}}\"\n": template: * I0310 21:28:30.865558    8404 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	:1: function "json" not defined - returning raw string.
	E0310 21:29:18.290267    6224 out.go:335] unable to parse "* I0310 21:28:31.856724    8404 cli_runner.go:115] Run: docker info --format \"'{{json .SecurityOptions}}'\"\n": template: * I0310 21:28:31.856724    8404 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	:1: function "json" not defined - returning raw string.
	E0310 21:29:18.715975    6224 out.go:340] unable to execute * I0310 21:28:44.429450    8404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210310212817-6496
	: template: * I0310 21:28:44.429450    8404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210310212817-6496
	:1:96: executing "* I0310 21:28:44.429450    8404 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" bridge-20210310212817-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:18.725321    6224 out.go:335] unable to parse "* I0310 21:28:45.092670    8404 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55218 <nil> <nil>}\n": template: * I0310 21:28:45.092670    8404 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55218 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:29:18.997665    6224 out.go:340] unable to execute * I0310 21:28:49.172452    8404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210310212817-6496
	: template: * I0310 21:28:49.172452    8404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210310212817-6496
	:1:96: executing "* I0310 21:28:49.172452    8404 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" bridge-20210310212817-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:19.016457    6224 out.go:335] unable to parse "* I0310 21:28:49.769433    8404 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55218 <nil> <nil>}\n": template: * I0310 21:28:49.769433    8404 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55218 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:29:19.108774    6224 out.go:340] unable to execute * I0310 21:28:51.702378    8404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210310212817-6496
	: template: * I0310 21:28:51.702378    8404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210310212817-6496
	:1:96: executing "* I0310 21:28:51.702378    8404 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" bridge-20210310212817-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:19.492898    6224 out.go:340] unable to execute * I0310 21:28:54.568339    8404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210310212817-6496
	: template: * I0310 21:28:54.568339    8404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210310212817-6496
	:1:96: executing "* I0310 21:28:54.568339    8404 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" bridge-20210310212817-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:19.504891    6224 out.go:335] unable to parse "* I0310 21:28:55.211956    8404 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55218 <nil> <nil>}\n": template: * I0310 21:28:55.211956    8404 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55218 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:29:19.539701    6224 out.go:340] unable to execute * I0310 21:28:55.798766    8404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210310212817-6496
	: template: * I0310 21:28:55.798766    8404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210310212817-6496
	:1:96: executing "* I0310 21:28:55.798766    8404 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" bridge-20210310212817-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:19.550893    6224 out.go:335] unable to parse "* I0310 21:28:56.459460    8404 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55218 <nil> <nil>}\n": template: * I0310 21:28:56.459460    8404 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55218 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:29:20.357353    6224 out.go:340] unable to execute * I0310 21:28:57.202951    8404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210310212817-6496
	: template: * I0310 21:28:57.202951    8404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210310212817-6496
	:1:96: executing "* I0310 21:28:57.202951    8404 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" bridge-20210310212817-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:20.369243    6224 out.go:335] unable to parse "* I0310 21:28:57.840511    8404 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55218 <nil> <nil>}\n": template: * I0310 21:28:57.840511    8404 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55218 <nil> <nil>}
	:1: unexpected "{" in command - returning raw string.
	E0310 21:29:21.123419    6224 out.go:340] unable to execute * I0310 21:29:10.196851    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:29:10.196851    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:29:10.196851    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:21.136596    6224 out.go:340] unable to execute * I0310 21:29:10.212681    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:29:10.212681    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:29:10.212681    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:21.143172    6224 out.go:340] unable to execute * I0310 21:29:10.216143    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:29:10.216143    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:29:10.216143    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:21.151101    6224 out.go:340] unable to execute * I0310 21:29:10.222593    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:29:10.222593    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:29:10.222593    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:21.183456    6224 out.go:340] unable to execute * I0310 21:29:10.271257    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:29:10.271257    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:29:10.271257    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:21.190219    6224 out.go:340] unable to execute * I0310 21:29:10.272296    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:29:10.272296    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:29:10.272296    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:21.197200    6224 out.go:340] unable to execute * I0310 21:29:10.274661    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:29:10.274661    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:29:10.274661    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:21.205818    6224 out.go:340] unable to execute * I0310 21:29:10.279212    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:29:10.279212    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:29:10.279212    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:21.236378    6224 out.go:340] unable to execute * I0310 21:29:11.238475    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.0253617s)
	: template: * I0310 21:29:11.238475    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.0253617s)
	:1:102: executing "* I0310 21:29:11.238475    7648 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496: (1.0253617s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:21.245385    6224 out.go:340] unable to execute * I0310 21:29:11.240376    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.0177872s)
	: template: * I0310 21:29:11.240376    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.0177872s)
	:1:102: executing "* I0310 21:29:11.240376    7648 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496: (1.0177872s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:21.616737    6224 out.go:340] unable to execute * I0310 21:29:08.047824    8404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210310212817-6496
	: template: * I0310 21:29:08.047824    8404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210310212817-6496
	:1:96: executing "* I0310 21:29:08.047824    8404 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" bridge-20210310212817-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:21.701456    6224 out.go:340] unable to execute * I0310 21:29:10.594910    8404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210310212817-6496
	: template: * I0310 21:29:10.594910    8404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210310212817-6496
	:1:96: executing "* I0310 21:29:10.594910    8404 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" bridge-20210310212817-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:21.943464    6224 out.go:340] unable to execute * I0310 21:29:12.391924    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:29:12.391924    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:29:12.391924    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:21.951383    6224 out.go:340] unable to execute * I0310 21:29:12.416122    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:29:12.416122    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:29:12.416122    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:21.965592    6224 out.go:340] unable to execute * I0310 21:29:12.450184    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:29:12.450184    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:29:12.450184    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:21.975396    6224 out.go:340] unable to execute * I0310 21:29:12.469003    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:29:12.469003    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:29:12.469003    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:21.985553    6224 out.go:340] unable to execute * I0310 21:29:12.470012    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:29:12.470012    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:29:12.470012    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.011155    6224 out.go:340] unable to execute * I0310 21:29:12.478917    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:29:12.478917    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:29:12.478917    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.017533    6224 out.go:340] unable to execute * I0310 21:29:12.482708    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:29:12.482708    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:29:12.482708    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.023517    6224 out.go:340] unable to execute * I0310 21:29:12.482708    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:29:12.482708    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:29:12.482708    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.031515    6224 out.go:340] unable to execute * I0310 21:29:12.489532    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:29:12.489532    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:29:12.489532    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.038656    6224 out.go:340] unable to execute * I0310 21:29:12.494515    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:29:12.494515    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:29:12.494515    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.045195    6224 out.go:340] unable to execute * I0310 21:29:12.494515    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:29:12.494515    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:29:12.494515    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.053914    6224 out.go:340] unable to execute * I0310 21:29:12.523392    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:29:12.523392    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:29:12.523392    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.081078    6224 out.go:340] unable to execute * I0310 21:29:12.586607    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:29:12.586607    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:29:12.586607    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.095244    6224 out.go:340] unable to execute * I0310 21:29:12.593200    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:29:12.593200    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:29:12.593200    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.113578    6224 out.go:340] unable to execute * I0310 21:29:12.601819    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:29:12.601819    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:29:12.601819    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.125137    6224 out.go:340] unable to execute * I0310 21:29:12.614808    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:29:12.614808    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:29:12.614808    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.138944    6224 out.go:340] unable to execute * I0310 21:29:13.664797    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.1858841s)
	: template: * I0310 21:29:13.664797    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.1858841s)
	:1:102: executing "* I0310 21:29:13.664797    7648 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496: (1.1858841s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.157687    6224 out.go:340] unable to execute * I0310 21:29:13.676365    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2073662s)
	: template: * I0310 21:29:13.676365    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2073662s)
	:1:102: executing "* I0310 21:29:13.676365    7648 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496: (1.2073662s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.176081    6224 out.go:340] unable to execute * I0310 21:29:13.692939    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2423118s)
	: template: * I0310 21:29:13.692939    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2423118s)
	:1:102: executing "* I0310 21:29:13.692939    7648 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496: (1.2423118s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.203493    6224 out.go:340] unable to execute * I0310 21:29:13.706955    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.224251s)
	: template: * I0310 21:29:13.706955    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.224251s)
	:1:102: executing "* I0310 21:29:13.706955    7648 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496: (1.224251s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.211803    6224 out.go:340] unable to execute * I0310 21:29:13.710366    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2208376s)
	: template: * I0310 21:29:13.710366    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2208376s)
	:1:102: executing "* I0310 21:29:13.710366    7648 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496: (1.2208376s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.220829    6224 out.go:340] unable to execute * I0310 21:29:13.752581    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.3364631s)
	: template: * I0310 21:29:13.752581    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.3364631s)
	:1:102: executing "* I0310 21:29:13.752581    7648 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496: (1.3364631s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.230834    6224 out.go:340] unable to execute * I0310 21:29:13.783528    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.26014s)
	: template: * I0310 21:29:13.783528    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.26014s)
	:1:102: executing "* I0310 21:29:13.783528    7648 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496: (1.26014s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.248140    6224 out.go:340] unable to execute * I0310 21:29:13.798563    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2053673s)
	: template: * I0310 21:29:13.798563    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2053673s)
	:1:102: executing "* I0310 21:29:13.798563    7648 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496: (1.2053673s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.255093    6224 out.go:340] unable to execute * I0310 21:29:13.847147    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.3522633s)
	: template: * I0310 21:29:13.847147    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.3522633s)
	:1:102: executing "* I0310 21:29:13.847147    7648 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496: (1.3522633s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.261099    6224 out.go:340] unable to execute * I0310 21:29:13.847147    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.3644439s)
	: template: * I0310 21:29:13.847147    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.3644439s)
	:1:102: executing "* I0310 21:29:13.847147    7648 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496: (1.3644439s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.271555    6224 out.go:340] unable to execute * I0310 21:29:13.847147    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.4552282s)
	: template: * I0310 21:29:13.847147    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.4552282s)
	:1:102: executing "* I0310 21:29:13.847147    7648 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496: (1.4552282s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.279092    6224 out.go:340] unable to execute * I0310 21:29:13.895821    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.4258137s)
	: template: * I0310 21:29:13.895821    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.4258137s)
	:1:102: executing "* I0310 21:29:13.895821    7648 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496: (1.4258137s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.287096    6224 out.go:340] unable to execute * I0310 21:29:13.909373    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2945693s)
	: template: * I0310 21:29:13.909373    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2945693s)
	:1:102: executing "* I0310 21:29:13.909373    7648 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496: (1.2945693s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.298888    6224 out.go:340] unable to execute * I0310 21:29:13.920958    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.3343558s)
	: template: * I0310 21:29:13.920958    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.3343558s)
	:1:102: executing "* I0310 21:29:13.920958    7648 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496: (1.3343558s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.308539    6224 out.go:340] unable to execute * I0310 21:29:13.960114    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.4652305s)
	: template: * I0310 21:29:13.960114    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.4652305s)
	:1:102: executing "* I0310 21:29:13.960114    7648 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496: (1.4652305s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.324652    6224 out.go:340] unable to execute * I0310 21:29:14.057432    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.4556187s)
	: template: * I0310 21:29:14.057432    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.4556187s)
	:1:102: executing "* I0310 21:29:14.057432    7648 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496: (1.4556187s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.359323    6224 out.go:340] unable to execute * I0310 21:29:12.607883    8404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210310212817-6496
	: template: * I0310 21:29:12.607883    8404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210310212817-6496
	:1:96: executing "* I0310 21:29:12.607883    8404 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" bridge-20210310212817-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.371252    6224 out.go:340] unable to execute * I0310 21:29:12.617936    8404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210310212817-6496
	: template: * I0310 21:29:12.617936    8404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210310212817-6496
	:1:96: executing "* I0310 21:29:12.617936    8404 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" bridge-20210310212817-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.377250    6224 out.go:340] unable to execute * I0310 21:29:13.830768    8404 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210310212817-6496: (1.2126032s)
	: template: * I0310 21:29:13.830768    8404 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210310212817-6496: (1.2126032s)
	:1:102: executing "* I0310 21:29:13.830768    8404 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" bridge-20210310212817-6496: (1.2126032s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.392969    6224 out.go:340] unable to execute * I0310 21:29:14.036549    8404 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210310212817-6496: (1.419165s)
	: template: * I0310 21:29:14.036549    8404 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20210310212817-6496: (1.419165s)
	:1:102: executing "* I0310 21:29:14.036549    8404 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" bridge-20210310212817-6496: (1.419165s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.592367    6224 out.go:340] unable to execute * I0310 21:29:17.304677    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:29:17.304677    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:29:17.304677    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.615144    6224 out.go:340] unable to execute * I0310 21:29:17.312495    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:29:17.312495    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:29:17.312495    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.621165    6224 out.go:340] unable to execute * I0310 21:29:17.353686    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:29:17.353686    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:29:17.353686    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.627479    6224 out.go:340] unable to execute * I0310 21:29:17.357459    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:29:17.357459    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:29:17.357459    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.634205    6224 out.go:340] unable to execute * I0310 21:29:17.363637    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:29:17.363637    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:29:17.363637    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.642943    6224 out.go:340] unable to execute * I0310 21:29:17.398564    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:29:17.398564    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:29:17.398564    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.651261    6224 out.go:340] unable to execute * I0310 21:29:17.400613    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:29:17.400613    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:29:17.400613    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.659289    6224 out.go:340] unable to execute * I0310 21:29:17.401567    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:29:17.401567    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:29:17.401567    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.667909    6224 out.go:340] unable to execute * I0310 21:29:17.402557    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:29:17.402557    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:29:17.402557    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.675203    6224 out.go:340] unable to execute * I0310 21:29:17.405567    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	: template: * I0310 21:29:17.405567    7648 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496
	:1:96: executing "* I0310 21:29:17.405567    7648 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.686655    6224 out.go:340] unable to execute * I0310 21:29:18.456760    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.1520876s)
	: template: * I0310 21:29:18.456760    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.1520876s)
	:1:102: executing "* I0310 21:29:18.456760    7648 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496: (1.1520876s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.698587    6224 out.go:340] unable to execute * I0310 21:29:18.457920    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.1454284s)
	: template: * I0310 21:29:18.457920    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.1454284s)
	:1:102: executing "* I0310 21:29:18.457920    7648 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496: (1.1454284s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.717475    6224 out.go:340] unable to execute * I0310 21:29:18.566607    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.1640537s)
	: template: * I0310 21:29:18.566607    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.1640537s)
	:1:102: executing "* I0310 21:29:18.566607    7648 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496: (1.1640537s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.727489    6224 out.go:340] unable to execute * I0310 21:29:18.568617    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.168008s)
	: template: * I0310 21:29:18.568617    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.168008s)
	:1:102: executing "* I0310 21:29:18.568617    7648 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496: (1.168008s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.738481    6224 out.go:340] unable to execute * I0310 21:29:18.599483    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2009233s)
	: template: * I0310 21:29:18.599483    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2009233s)
	:1:102: executing "* I0310 21:29:18.599483    7648 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496: (1.2009233s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.766402    6224 out.go:340] unable to execute * I0310 21:29:18.620002    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2661034s)
	: template: * I0310 21:29:18.620002    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2661034s)
	:1:102: executing "* I0310 21:29:18.620002    7648 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496: (1.2661034s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.774859    6224 out.go:340] unable to execute * I0310 21:29:18.635235    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2336719s)
	: template: * I0310 21:29:18.635235    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2336719s)
	:1:102: executing "* I0310 21:29:18.635235    7648 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496: (1.2336719s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.783189    6224 out.go:340] unable to execute * I0310 21:29:18.647351    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2835677s)
	: template: * I0310 21:29:18.647351    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2835677s)
	:1:102: executing "* I0310 21:29:18.647351    7648 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496: (1.2835677s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.795645    6224 out.go:340] unable to execute * I0310 21:29:18.662304    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2567412s)
	: template: * I0310 21:29:18.662304    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.2567412s)
	:1:102: executing "* I0310 21:29:18.662304    7648 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496: (1.2567412s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.801761    6224 out.go:340] unable to execute * I0310 21:29:18.663296    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.3054878s)
	: template: * I0310 21:29:18.663296    7648 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210310211546-6496: (1.3054878s)
	:1:102: executing "* I0310 21:29:18.663296    7648 cli_runner.go:168] Completed: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'\" cilium-20210310211546-6496: (1.3054878s)\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
	E0310 21:29:22.917935    6224 out.go:340] unable to execute * I0310 21:29:20.994646    8404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" bridge-20210310212817-6496
	: template: * I0310 21:29:20.994646    8404 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" bridge-20210310212817-6496
	:1:96: executing "* I0310 21:29:20.994646    8404 cli_runner.go:115] Run: docker container inspect -f \"'{{(index (index .NetworkSettings.Ports \"8443/tcp\") 0).HostPort}}'\" bridge-20210310212817-6496\n" at <index .NetworkSettings.Ports "8443/tcp">: error calling index: index of untyped nil - returning raw string.

** /stderr **
helpers_test.go:250: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20210310205202-6496 -n default-k8s-different-port-20210310205202-6496

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
helpers_test.go:250: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20210310205202-6496 -n default-k8s-different-port-20210310205202-6496: (30.5637479s)
helpers_test.go:257: (dbg) Run:  kubectl --context default-k8s-different-port-20210310205202-6496 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:257: (dbg) Done: kubectl --context default-k8s-different-port-20210310205202-6496 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: (8.9856708s)
helpers_test.go:263: non-running pods: 
helpers_test.go:265: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: describe non-running pods <======
helpers_test.go:268: (dbg) Run:  kubectl --context default-k8s-different-port-20210310205202-6496 describe pod 
helpers_test.go:268: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20210310205202-6496 describe pod : exit status 1 (219.4522ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:270: kubectl --context default-k8s-different-port-20210310205202-6496 describe pod : exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/DeployApp (808.61s)
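The `describe pod` failure above is a post-mortem artifact: the field-selector query returned no non-running pod names, so the harness invoked `kubectl describe pod` with an empty name argument, which kubectl rejects with "resource name may not be empty". A hedged sketch of guarding that step (`describeArgs` is a hypothetical helper, not the harness's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// describeArgs builds the kubectl argument list for the post-mortem
// "describe non-running pods" step. When the jsonpath query returned
// nothing, it signals the caller to skip the kubectl call entirely
// rather than pass an empty name argument.
func describeArgs(podNames string) ([]string, bool) {
	names := strings.Fields(podNames) // drops empty/whitespace-only input
	if len(names) == 0 {
		return nil, false
	}
	return append([]string{"describe", "pod"}, names...), true
}

func main() {
	if args, ok := describeArgs(""); !ok {
		fmt.Println("no non-running pods; skipping kubectl describe")
	} else {
		fmt.Println(args)
	}
}
```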

TestNetworkPlugins/group/custom-weave/Start (983.86s)

=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-weave-20210310211916-6496 --memory=1800 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata\weavenet.yaml --driver=docker

=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:80: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p custom-weave-20210310211916-6496 --memory=1800 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata\weavenet.yaml --driver=docker: exit status 109 (16m22.843254s)

-- stdout --
	* [custom-weave-20210310211916-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - MINIKUBE_LOCATION=10722
	* Using the docker driver based on user configuration
	
	
	* Starting control plane node custom-weave-20210310211916-6496 in cluster custom-weave-20210310211916-6496
	* Creating docker container (CPUs=2, Memory=1800MB) ...
	* Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0310 21:19:17.185426   13364 out.go:239] Setting OutFile to fd 3016 ...
	I0310 21:19:17.186416   13364 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 21:19:17.186416   13364 out.go:252] Setting ErrFile to fd 2968...
	I0310 21:19:17.186416   13364 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 21:19:17.201428   13364 out.go:246] Setting JSON to false
	I0310 21:19:17.208505   13364 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":36623,"bootTime":1615374534,"procs":114,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0310 21:19:17.209348   13364 start.go:116] gopshost.Virtualization returned error: not implemented yet
	I0310 21:19:17.611459   13364 out.go:129] * [custom-weave-20210310211916-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0310 21:19:17.666878   13364 out.go:129]   - MINIKUBE_LOCATION=10722
	I0310 21:19:17.672236   13364 driver.go:323] Setting default libvirt URI to qemu:///system
	I0310 21:19:18.452811   13364 docker.go:119] docker version: linux-20.10.2
	I0310 21:19:18.466511   13364 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 21:19:19.515678   13364 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0489584s)
	I0310 21:19:19.516849   13364 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:73 SystemTime:2021-03-10 21:19:19.0331326 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://ind
ex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[]
ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 21:19:19.748915   13364 out.go:129] * Using the docker driver based on user configuration
	I0310 21:19:19.749572   13364 start.go:276] selected driver: docker
	I0310 21:19:19.749892   13364 start.go:718] validating driver "docker" against <nil>
	I0310 21:19:19.749892   13364 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0310 21:19:21.690035   13364 out.go:129] 
	W0310 21:19:21.690910   13364 out.go:191] X Docker Desktop only has 20001MiB available, you may encounter application deployment failures.
	X Docker Desktop only has 20001MiB available, you may encounter application deployment failures.
	W0310 21:19:21.691851   13364 out.go:191] * Suggestion: 
	
	    1. Open the "Docker Desktop" menu by clicking the Docker icon in the system tray
	    2. Click "Settings"
	    3. Click "Resources"
	    4. Increase "Memory" slider bar to 2.25 GB or higher
	    5. Click "Apply & Restart"
	* Suggestion: 
	
	    1. Open the "Docker Desktop" menu by clicking the Docker icon in the system tray
	    2. Click "Settings"
	    3. Click "Resources"
	    4. Increase "Memory" slider bar to 2.25 GB or higher
	    5. Click "Apply & Restart"
	W0310 21:19:21.691851   13364 out.go:191] * Documentation: https://docs.docker.com/docker-for-windows/#resources
	* Documentation: https://docs.docker.com/docker-for-windows/#resources
	I0310 21:19:21.694252   13364 out.go:129] 
	I0310 21:19:21.718817   13364 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 21:19:22.642994   13364 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:73 SystemTime:2021-03-10 21:19:22.2236182 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://ind
ex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[]
ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 21:19:22.642994   13364 start_flags.go:253] no existing cluster config was found, will generate one from the flags 
	I0310 21:19:22.644019   13364 start_flags.go:717] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0310 21:19:22.644019   13364 cni.go:74] Creating CNI manager for "testdata\\weavenet.yaml"
	I0310 21:19:22.644019   13364 start_flags.go:393] Found "testdata\\weavenet.yaml" CNI - setting NetworkPlugin=cni
	I0310 21:19:22.644019   13364 start_flags.go:398] config:
	{Name:custom-weave-20210310211916-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:custom-weave-20210310211916-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 21:19:22.648329   13364 out.go:129] * Starting control plane node custom-weave-20210310211916-6496 in cluster custom-weave-20210310211916-6496
	I0310 21:19:23.284628   13364 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull
	I0310 21:19:23.284628   13364 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull
	I0310 21:19:23.286039   13364 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	I0310 21:19:23.286039   13364 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	I0310 21:19:23.286039   13364 cache.go:54] Caching tarball of preloaded images
	I0310 21:19:23.286039   13364 preload.go:131] Found C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0310 21:19:23.286772   13364 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on docker
	I0310 21:19:23.286772   13364 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\config.json ...
	I0310 21:19:23.286772   13364 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\config.json: {Name:mkc476d656886ec8725c6298ca7e5a7f8fe30c95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:19:23.301273   13364 cache.go:185] Successfully downloaded all kic artifacts
	I0310 21:19:23.301273   13364 start.go:313] acquiring machines lock for custom-weave-20210310211916-6496: {Name:mk446b3e268c75dc4305737c572edd1081e0a5b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0310 21:19:23.301273   13364 start.go:317] acquired machines lock for "custom-weave-20210310211916-6496" in 0s
	I0310 21:19:23.301273   13364 start.go:89] Provisioning new machine with config: &{Name:custom-weave-20210310211916-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:custom-weave-20210310211916-6496 Namespace:default APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
	I0310 21:19:23.309815   13364 start.go:126] createHost starting for "" (driver="docker")
	I0310 21:19:23.324305   13364 out.go:150] * Creating docker container (CPUs=2, Memory=1800MB) ...
	I0310 21:19:23.326940   13364 start.go:160] libmachine.API.Create for "custom-weave-20210310211916-6496" (driver="docker")
	I0310 21:19:23.326940   13364 client.go:168] LocalClient.Create starting
	I0310 21:19:23.326940   13364 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\ca.pem
	I0310 21:19:23.326940   13364 main.go:121] libmachine: Decoding PEM data...
	I0310 21:19:23.326940   13364 main.go:121] libmachine: Parsing certificate...
	I0310 21:19:23.326940   13364 main.go:121] libmachine: Reading certificate data from C:\Users\jenkins\.minikube\certs\cert.pem
	I0310 21:19:23.326940   13364 main.go:121] libmachine: Decoding PEM data...
	I0310 21:19:23.326940   13364 main.go:121] libmachine: Parsing certificate...
	I0310 21:19:23.355621   13364 cli_runner.go:115] Run: docker network inspect custom-weave-20210310211916-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0310 21:19:23.945979   13364 cli_runner.go:162] docker network inspect custom-weave-20210310211916-6496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0310 21:19:23.963064   13364 network_create.go:240] running [docker network inspect custom-weave-20210310211916-6496] to gather additional debugging logs...
	I0310 21:19:23.963064   13364 cli_runner.go:115] Run: docker network inspect custom-weave-20210310211916-6496
	W0310 21:19:24.531820   13364 cli_runner.go:162] docker network inspect custom-weave-20210310211916-6496 returned with exit code 1
	I0310 21:19:24.531820   13364 network_create.go:243] error running [docker network inspect custom-weave-20210310211916-6496]: docker network inspect custom-weave-20210310211916-6496: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: custom-weave-20210310211916-6496
	I0310 21:19:24.531820   13364 network_create.go:245] output of [docker network inspect custom-weave-20210310211916-6496]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: custom-weave-20210310211916-6496
	
	** /stderr **
	I0310 21:19:24.545208   13364 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0310 21:19:25.236662   13364 network.go:193] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0310 21:19:25.237110   13364 network_create.go:91] attempt to create network 192.168.49.0/24 with subnet: custom-weave-20210310211916-6496 and gateway 192.168.49.1 and MTU of 1500 ...
	I0310 21:19:25.243983   13364 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20210310211916-6496
	W0310 21:19:25.780916   13364 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20210310211916-6496 returned with exit code 1
	W0310 21:19:25.782158   13364 out.go:191] ! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts
	! Unable to create dedicated network, this might result in cluster IP change after restart: failed to create network after 20 attempts
	I0310 21:19:25.801237   13364 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0310 21:19:26.390354   13364 cli_runner.go:115] Run: docker volume create custom-weave-20210310211916-6496 --label name.minikube.sigs.k8s.io=custom-weave-20210310211916-6496 --label created_by.minikube.sigs.k8s.io=true
	I0310 21:19:27.003490   13364 oci.go:102] Successfully created a docker volume custom-weave-20210310211916-6496
	I0310 21:19:27.015739   13364 cli_runner.go:115] Run: docker run --rm --name custom-weave-20210310211916-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20210310211916-6496 --entrypoint /usr/bin/test -v custom-weave-20210310211916-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib
	I0310 21:19:31.907812   13364 cli_runner.go:168] Completed: docker run --rm --name custom-weave-20210310211916-6496-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20210310211916-6496 --entrypoint /usr/bin/test -v custom-weave-20210310211916-6496:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib: (4.8919644s)
	I0310 21:19:31.907920   13364 oci.go:106] Successfully prepared a docker volume custom-weave-20210310211916-6496
	I0310 21:19:31.908325   13364 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	I0310 21:19:31.909054   13364 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	I0310 21:19:31.909189   13364 kic.go:175] Starting extracting preloaded images to volume ...
	I0310 21:19:31.919197   13364 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 21:19:31.919975   13364 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20210310211916-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir
	W0310 21:19:32.633517   13364 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20210310211916-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
	I0310 21:19:32.633517   13364 kic.go:182] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20210310211916-6496:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: status code not OK but 500: (garbled binary-serialized .NET System.Exception payload)
	The notification platform is unavailable.
	
	   at Windows.UI.Notifications.ToastNotificationManager.CreateToastNotifier(String applicationId)
	   at Docker.WPF.PromptShareDirectory.<PromptUserAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.WPF\PromptShareDirectory.cs:line 53
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__8.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 95
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__6.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.ApiServices\Mounting\FileSharing.cs:line 55
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\workspaces\PR-15138\src\github.com\docker\pinata\win\src\Docker.HttpApi\Controllers\FilesharingController.cs:line 21
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()
	--- End of stack trace from previous location where exception was thrown ---
	   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
	   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
	   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()
	[remainder of serialized exception, non-text bytes removed: ExceptionMethod CreateToastNotifier; Windows.UI, Version=255.255.255.255, Culture=neutral, PublicKeyToken=null, ContentType=WindowsRuntime; Windows.UI.Notifications.ToastNotificationManager; Windows.UI.Notifications.ToastNotifier CreateToastNotifier(System.String); RestrictedDescription: The notification platform is unavailable.]
	See 'docker run --help'.
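The `docker run` above exits 125, which the Docker CLI reserves for failures in the CLI/daemon itself (here the Windows file-sharing prompt failing with HTTP 500) rather than in the containerized command; minikube logs the error and carries on without the preloaded volume. A minimal sketch of that status check, where `failing_run` is a hypothetical stand-in for the rejected `docker run`:

```shell
# stand-in for the `docker run` the daemon rejected with HTTP 500
failing_run() { return 125; }

failing_run
status=$?
# 125 = docker CLI/daemon failed before the container command ever ran
if [ "$status" -eq 125 ]; then
  echo "daemon-side failure (exit 125); skipping preload extraction"
fi
```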
	I0310 21:19:32.931561   13364 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.0123655s)
	I0310 21:19:32.931891   13364 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:73 SystemTime:2021-03-10 21:19:32.4721638 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 21:19:32.941872   13364 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0310 21:19:33.895863   13364 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20210310211916-6496 --name custom-weave-20210310211916-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20210310211916-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20210310211916-6496 --volume custom-weave-20210310211916-6496:/var --security-opt apparmor=unconfined --memory=1800mb --memory-swap=1800mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e
	I0310 21:19:38.229689   13364 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20210310211916-6496 --name custom-weave-20210310211916-6496 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20210310211916-6496 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20210310211916-6496 --volume custom-weave-20210310211916-6496:/var --security-opt apparmor=unconfined --memory=1800mb --memory-swap=1800mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e: (4.3335497s)
	I0310 21:19:38.241967   13364 cli_runner.go:115] Run: docker container inspect custom-weave-20210310211916-6496 --format={{.State.Running}}
	I0310 21:19:38.858868   13364 cli_runner.go:115] Run: docker container inspect custom-weave-20210310211916-6496 --format={{.State.Status}}
	I0310 21:19:39.421271   13364 cli_runner.go:115] Run: docker exec custom-weave-20210310211916-6496 stat /var/lib/dpkg/alternatives/iptables
	I0310 21:19:40.485437   13364 cli_runner.go:168] Completed: docker exec custom-weave-20210310211916-6496 stat /var/lib/dpkg/alternatives/iptables: (1.064168s)
	I0310 21:19:40.485437   13364 oci.go:278] the created container "custom-weave-20210310211916-6496" has a running status.
	I0310 21:19:40.486283   13364 kic.go:206] Creating ssh key for kic: C:\Users\jenkins\.minikube\machines\custom-weave-20210310211916-6496\id_rsa...
	I0310 21:19:40.889920   13364 kic_runner.go:188] docker (temp): C:\Users\jenkins\.minikube\machines\custom-weave-20210310211916-6496\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0310 21:19:42.581972   13364 cli_runner.go:115] Run: docker container inspect custom-weave-20210310211916-6496 --format={{.State.Status}}
	I0310 21:19:43.189145   13364 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0310 21:19:43.189252   13364 kic_runner.go:115] Args: [docker exec --privileged custom-weave-20210310211916-6496 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0310 21:19:44.841461   13364 kic_runner.go:124] Done: [docker exec --privileged custom-weave-20210310211916-6496 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.6522103s)
	I0310 21:19:44.841461   13364 kic.go:240] ensuring only current user has permissions to key file located at : C:\Users\jenkins\.minikube\machines\custom-weave-20210310211916-6496\id_rsa...
	I0310 21:19:45.703061   13364 cli_runner.go:115] Run: docker container inspect custom-weave-20210310211916-6496 --format={{.State.Status}}
	I0310 21:19:46.275015   13364 machine.go:88] provisioning docker machine ...
	I0310 21:19:46.275015   13364 ubuntu.go:169] provisioning hostname "custom-weave-20210310211916-6496"
	I0310 21:19:46.284451   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	I0310 21:19:46.882800   13364 main.go:121] libmachine: Using SSH client type: native
	I0310 21:19:46.883929   13364 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55203 <nil> <nil>}
	I0310 21:19:46.883929   13364 main.go:121] libmachine: About to run SSH command:
	sudo hostname custom-weave-20210310211916-6496 && echo "custom-weave-20210310211916-6496" | sudo tee /etc/hostname
	I0310 21:19:47.997393   13364 main.go:121] libmachine: SSH cmd err, output: <nil>: custom-weave-20210310211916-6496
	
	I0310 21:19:48.009317   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	I0310 21:19:48.742115   13364 main.go:121] libmachine: Using SSH client type: native
	I0310 21:19:48.743072   13364 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55203 <nil> <nil>}
	I0310 21:19:48.743072   13364 main.go:121] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-weave-20210310211916-6496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-weave-20210310211916-6496/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-weave-20210310211916-6496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0310 21:19:49.600213   13364 main.go:121] libmachine: SSH cmd err, output: <nil>: 
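The SSH command above is minikube's hostname fix-up for /etc/hosts: rewrite an existing 127.0.1.1 entry if present, otherwise append one. The same logic can be sketched locally against a temp file (no sudo; the sample hosts content is made up):

```shell
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$hosts"
name=custom-weave-20210310211916-6496

# mirror the provisioner: rewrite an existing 127.0.1.1 line, else append one
if ! grep -q "[[:space:]]$name\$" "$hosts"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$hosts"; then
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $name/" "$hosts"
  else
    echo "127.0.1.1 $name" >> "$hosts"
  fi
fi
cat "$hosts"
```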
	I0310 21:19:49.600213   13364 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins\.minikube CaCertPath:C:\Users\jenkins\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins\.minikube}
	I0310 21:19:49.600213   13364 ubuntu.go:177] setting up certificates
	I0310 21:19:49.600213   13364 provision.go:83] configureAuth start
	I0310 21:19:49.615307   13364 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20210310211916-6496
	I0310 21:19:50.312485   13364 provision.go:137] copyHostCerts
	I0310 21:19:50.313336   13364 exec_runner.go:145] found C:\Users\jenkins\.minikube/ca.pem, removing ...
	I0310 21:19:50.313518   13364 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\ca.pem
	I0310 21:19:50.313955   13364 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\ca.pem --> C:\Users\jenkins\.minikube/ca.pem (1078 bytes)
	I0310 21:19:50.317952   13364 exec_runner.go:145] found C:\Users\jenkins\.minikube/cert.pem, removing ...
	I0310 21:19:50.318097   13364 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\cert.pem
	I0310 21:19:50.318897   13364 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\cert.pem --> C:\Users\jenkins\.minikube/cert.pem (1123 bytes)
	I0310 21:19:50.324526   13364 exec_runner.go:145] found C:\Users\jenkins\.minikube/key.pem, removing ...
	I0310 21:19:50.324526   13364 exec_runner.go:190] rm: C:\Users\jenkins\.minikube\key.pem
	I0310 21:19:50.324869   13364 exec_runner.go:152] cp: C:\Users\jenkins\.minikube\certs\key.pem --> C:\Users\jenkins\.minikube/key.pem (1679 bytes)
	I0310 21:19:50.326695   13364 provision.go:111] generating server cert: C:\Users\jenkins\.minikube\machines\server.pem ca-key=C:\Users\jenkins\.minikube\certs\ca.pem private-key=C:\Users\jenkins\.minikube\certs\ca-key.pem org=jenkins.custom-weave-20210310211916-6496 san=[172.17.0.3 127.0.0.1 localhost 127.0.0.1 minikube custom-weave-20210310211916-6496]
	I0310 21:19:50.473545   13364 provision.go:165] copyRemoteCerts
	I0310 21:19:50.491383   13364 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0310 21:19:50.498028   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	I0310 21:19:51.157275   13364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55203 SSHKeyPath:C:\Users\jenkins\.minikube\machines\custom-weave-20210310211916-6496\id_rsa Username:docker}
	I0310 21:19:51.810069   13364 ssh_runner.go:189] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.3186869s)
	I0310 21:19:51.810747   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server.pem --> /etc/docker/server.pem (1269 bytes)
	I0310 21:19:52.426896   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0310 21:19:52.758745   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0310 21:19:53.246446   13364 provision.go:86] duration metric: configureAuth took 3.6462374s
	I0310 21:19:53.246968   13364 ubuntu.go:193] setting minikube options for container-runtime
	I0310 21:19:53.259429   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	I0310 21:19:53.877314   13364 main.go:121] libmachine: Using SSH client type: native
	I0310 21:19:53.877988   13364 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55203 <nil> <nil>}
	I0310 21:19:53.878264   13364 main.go:121] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0310 21:19:54.493154   13364 main.go:121] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0310 21:19:54.493154   13364 ubuntu.go:71] root file system type: overlay
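The `df --output=fstype / | tail -n 1` probe above is how the provisioner learns the root filesystem type (here `overlay`, since it runs inside the kic container); on any Linux host the same probe simply reports that host's root fs:

```shell
# same probe as the provisioner: filesystem type mounted at /
fstype=$(df --output=fstype / | tail -n 1)
echo "root filesystem type: $fstype"
```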
	I0310 21:19:54.493392   13364 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
	I0310 21:19:54.503924   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	I0310 21:19:55.120891   13364 main.go:121] libmachine: Using SSH client type: native
	I0310 21:19:55.120891   13364 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55203 <nil> <nil>}
	I0310 21:19:55.120891   13364 main.go:121] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0310 21:19:55.747601   13364 main.go:121] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0310 21:19:55.757611   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	I0310 21:19:56.382177   13364 main.go:121] libmachine: Using SSH client type: native
	I0310 21:19:56.383184   13364 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x149c4a0] 0x149c460 <nil>  [] 0s} 127.0.0.1 55203 <nil> <nil>}
	I0310 21:19:56.383184   13364 main.go:121] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0310 21:20:06.996088   13364 main.go:121] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-01-29 14:31:32.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-03-10 21:19:55.737608000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
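The command issued at 21:19:56 above implements a "write .new, diff, swap and restart only on change" update of docker.service, including the empty `ExecStart=` line that clears the inherited command before setting the new one. A minimal sketch of that pattern with plain temp files in place of the real systemd paths (no sudo or systemctl):

```shell
unit=$(mktemp); new=$(mktemp)
echo 'ExecStart=/usr/bin/dockerd -H fd://' > "$unit"
printf 'ExecStart=\nExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376\n' > "$new"

# only swap (and, in the real flow, daemon-reload + restart) when the files differ
if ! diff -u "$unit" "$new"; then
  mv "$new" "$unit"
  echo "unit changed: would run daemon-reload and restart docker"
fi
grep -c '^ExecStart=' "$unit"
```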
	
	I0310 21:20:06.996088   13364 machine.go:91] provisioned docker machine in 20.7210994s
	I0310 21:20:06.996088   13364 client.go:171] LocalClient.Create took 43.6692034s
	I0310 21:20:06.996475   13364 start.go:168] duration metric: libmachine.API.Create for "custom-weave-20210310211916-6496" took 43.6695903s
	I0310 21:20:06.996475   13364 start.go:267] post-start starting for "custom-weave-20210310211916-6496" (driver="docker")
	I0310 21:20:06.996475   13364 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0310 21:20:07.012309   13364 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0310 21:20:07.020843   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	I0310 21:20:07.619668   13364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55203 SSHKeyPath:C:\Users\jenkins\.minikube\machines\custom-weave-20210310211916-6496\id_rsa Username:docker}
	I0310 21:20:08.103502   13364 ssh_runner.go:189] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.0911938s)
	I0310 21:20:08.119158   13364 ssh_runner.go:149] Run: cat /etc/os-release
	I0310 21:20:08.155271   13364 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0310 21:20:08.155768   13364 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0310 21:20:08.155768   13364 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0310 21:20:08.155768   13364 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0310 21:20:08.155768   13364 filesync.go:118] Scanning C:\Users\jenkins\.minikube\addons for local assets ...
	I0310 21:20:08.156219   13364 filesync.go:118] Scanning C:\Users\jenkins\.minikube\files for local assets ...
	I0310 21:20:08.159672   13364 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts -> hosts in /etc/test/nested/copy/2512
	I0310 21:20:08.160847   13364 filesync.go:141] local asset: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts -> hosts in /etc/test/nested/copy/4452
	I0310 21:20:08.178599   13364 ssh_runner.go:149] Run: sudo mkdir -p /etc/test/nested/copy/2512 /etc/test/nested/copy/4452
	I0310 21:20:08.266485   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\2512\hosts --> /etc/test/nested/copy/2512/hosts (40 bytes)
	I0310 21:20:08.478653   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\files\etc\test\nested\copy\4452\hosts --> /etc/test/nested/copy/4452/hosts (40 bytes)
	I0310 21:20:08.759287   13364 start.go:270] post-start completed in 1.7628145s
	I0310 21:20:08.788795   13364 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20210310211916-6496
	I0310 21:20:09.392585   13364 profile.go:148] Saving config to C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\config.json ...
	I0310 21:20:09.433218   13364 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0310 21:20:09.442030   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	I0310 21:20:10.035498   13364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55203 SSHKeyPath:C:\Users\jenkins\.minikube\machines\custom-weave-20210310211916-6496\id_rsa Username:docker}
	I0310 21:20:10.473507   13364 ssh_runner.go:189] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.0390943s)
	I0310 21:20:10.473507   13364 start.go:129] duration metric: createHost completed in 47.1637532s
	I0310 21:20:10.473507   13364 start.go:80] releasing machines lock for "custom-weave-20210310211916-6496", held for 47.1722949s
	I0310 21:20:10.482408   13364 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20210310211916-6496
	I0310 21:20:11.132975   13364 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0310 21:20:11.141675   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	I0310 21:20:11.144345   13364 ssh_runner.go:149] Run: systemctl --version
	I0310 21:20:11.163006   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	I0310 21:20:11.850599   13364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55203 SSHKeyPath:C:\Users\jenkins\.minikube\machines\custom-weave-20210310211916-6496\id_rsa Username:docker}
	I0310 21:20:11.867049   13364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55203 SSHKeyPath:C:\Users\jenkins\.minikube\machines\custom-weave-20210310211916-6496\id_rsa Username:docker}
	I0310 21:20:12.387518   13364 ssh_runner.go:189] Completed: systemctl --version: (1.2431751s)
	I0310 21:20:12.397950   13364 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0310 21:20:12.783932   13364 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.6506415s)
	I0310 21:20:12.799375   13364 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 21:20:12.980397   13364 cruntime.go:206] skipping containerd shutdown because we are bound to it
	I0310 21:20:12.993217   13364 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0310 21:20:13.101480   13364 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0310 21:20:13.425122   13364 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0310 21:20:13.541328   13364 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0310 21:20:14.720508   13364 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.1791807s)
	I0310 21:20:14.730788   13364 ssh_runner.go:149] Run: sudo systemctl start docker
	I0310 21:20:14.918712   13364 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0310 21:20:16.108983   13364 ssh_runner.go:189] Completed: docker version --format {{.Server.Version}}: (1.1902732s)
	I0310 21:20:16.113566   13364 out.go:150] * Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
	I0310 21:20:16.127580   13364 cli_runner.go:115] Run: docker exec -t custom-weave-20210310211916-6496 dig +short host.docker.internal
	I0310 21:20:17.476228   13364 cli_runner.go:168] Completed: docker exec -t custom-weave-20210310211916-6496 dig +short host.docker.internal: (1.3486498s)
	I0310 21:20:17.476699   13364 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
	I0310 21:20:17.491933   13364 ssh_runner.go:149] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0310 21:20:17.524899   13364 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0310 21:20:17.675510   13364 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" custom-weave-20210310211916-6496
	I0310 21:20:18.411673   13364 localpath.go:92] copying C:\Users\jenkins\.minikube\client.crt -> C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\client.crt
	I0310 21:20:18.418579   13364 localpath.go:117] copying C:\Users\jenkins\.minikube\client.key -> C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\client.key
	I0310 21:20:18.426036   13364 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
	I0310 21:20:18.426036   13364 preload.go:105] Found local preload: C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
	I0310 21:20:18.434630   13364 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 21:20:19.415084   13364 docker.go:423] Got preloaded images: 
	I0310 21:20:19.415084   13364 docker.go:429] k8s.gcr.io/kube-proxy:v1.20.2 wasn't preloaded
	I0310 21:20:19.428922   13364 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0310 21:20:19.542898   13364 ssh_runner.go:149] Run: which lz4
	I0310 21:20:19.667989   13364 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0310 21:20:19.710149   13364 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0310 21:20:19.710469   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (515083977 bytes)
	I0310 21:21:31.223845   13364 docker.go:388] Took 71.570323 seconds to copy over tarball
	I0310 21:21:31.244793   13364 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0310 21:22:10.790555   13364 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (39.5455445s)
	I0310 21:22:10.791081   13364 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0310 21:22:12.320483   13364 ssh_runner.go:149] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0310 21:22:12.375854   13364 ssh_runner.go:316] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3125 bytes)
	I0310 21:22:12.480425   13364 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0310 21:22:14.010615   13364 ssh_runner.go:189] Completed: sudo systemctl daemon-reload: (1.5301924s)
	I0310 21:22:14.029519   13364 ssh_runner.go:149] Run: sudo systemctl restart docker
	I0310 21:22:18.780250   13364 ssh_runner.go:189] Completed: sudo systemctl restart docker: (4.7507375s)
	I0310 21:22:18.788940   13364 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0310 21:22:19.490965   13364 docker.go:423] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.20.2
	k8s.gcr.io/kube-apiserver:v1.20.2
	k8s.gcr.io/kube-controller-manager:v1.20.2
	k8s.gcr.io/kube-scheduler:v1.20.2
	kubernetesui/dashboard:v2.1.0
	gcr.io/k8s-minikube/storage-provisioner:v4
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0310 21:22:19.490965   13364 cache_images.go:73] Images are preloaded, skipping loading
	I0310 21:22:19.500276   13364 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0310 21:22:21.033611   13364 ssh_runner.go:189] Completed: docker info --format {{.CgroupDriver}}: (1.5333366s)
	I0310 21:22:21.033611   13364 cni.go:74] Creating CNI manager for "testdata\\weavenet.yaml"
	I0310 21:22:21.033611   13364 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0310 21:22:21.033611   13364 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.3 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-weave-20210310211916-6496 NodeName:custom-weave-20210310211916-6496 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0310 21:22:21.033611   13364 kubeadm.go:154] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.0.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "custom-weave-20210310211916-6496"
	  kubeletExtraArgs:
	    node-ip: 172.17.0.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.0.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	
	I0310 21:22:21.033611   13364 kubeadm.go:919] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=custom-weave-20210310211916-6496 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=172.17.0.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.2 ClusterName:custom-weave-20210310211916-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\weavenet.yaml NodeIP: NodePort:8443 NodeName:}
	I0310 21:22:21.046453   13364 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
	I0310 21:22:21.124212   13364 binaries.go:44] Found k8s binaries, skipping transfer
	I0310 21:22:21.137213   13364 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0310 21:22:21.190296   13364 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0310 21:22:21.461113   13364 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0310 21:22:21.681233   13364 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1858 bytes)
	I0310 21:22:21.968648   13364 ssh_runner.go:149] Run: grep 172.17.0.3	control-plane.minikube.internal$ /etc/hosts
	I0310 21:22:22.007975   13364 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "172.17.0.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0310 21:22:22.153827   13364 certs.go:52] Setting up C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496 for IP: 172.17.0.3
	I0310 21:22:22.154726   13364 certs.go:171] skipping minikubeCA CA generation: C:\Users\jenkins\.minikube\ca.key
	I0310 21:22:22.154944   13364 certs.go:171] skipping proxyClientCA CA generation: C:\Users\jenkins\.minikube\proxy-client-ca.key
	I0310 21:22:22.155575   13364 certs.go:275] skipping minikube-user signed cert generation: C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\client.key
	I0310 21:22:22.155575   13364 certs.go:279] generating minikube signed cert: C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\apiserver.key.0f3e66d0
	I0310 21:22:22.155575   13364 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\apiserver.crt.0f3e66d0 with IP's: [172.17.0.3 10.96.0.1 127.0.0.1 10.0.0.1]
	I0310 21:22:22.330533   13364 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\apiserver.crt.0f3e66d0 ...
	I0310 21:22:22.330533   13364 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\apiserver.crt.0f3e66d0: {Name:mk5acae756c7ccf08a5abecb3d42de42a2545e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:22:22.343482   13364 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\apiserver.key.0f3e66d0 ...
	I0310 21:22:22.343482   13364 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\apiserver.key.0f3e66d0: {Name:mke141e6877e8cb0d6dc54cf3f0c258a20436c9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:22:22.357477   13364 certs.go:290] copying C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\apiserver.crt.0f3e66d0 -> C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\apiserver.crt
	I0310 21:22:22.366446   13364 certs.go:294] copying C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\apiserver.key.0f3e66d0 -> C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\apiserver.key
	I0310 21:22:22.368498   13364 certs.go:279] generating aggregator signed cert: C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\proxy-client.key
	I0310 21:22:22.368498   13364 crypto.go:69] Generating cert C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\proxy-client.crt with IP's: []
	I0310 21:22:22.767034   13364 crypto.go:157] Writing cert to C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\proxy-client.crt ...
	I0310 21:22:22.768057   13364 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\proxy-client.crt: {Name:mk50e1c1e41ae2b5998e732bb36f4a98e1150878 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:22:22.782763   13364 crypto.go:165] Writing key to C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\proxy-client.key ...
	I0310 21:22:22.782763   13364 lock.go:36] WriteFile acquiring C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\proxy-client.key: {Name:mkfae7f926198d4da9748a25691dc664331e7799 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0310 21:22:22.796137   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992.pem (1338 bytes)
	W0310 21:22:22.797050   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\10992_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.797050   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140.pem (1338 bytes)
	W0310 21:22:22.797050   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1140_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.797050   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156.pem (1338 bytes)
	W0310 21:22:22.797050   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1156_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.797050   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056.pem (1338 bytes)
	W0310 21:22:22.798139   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\12056_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.798139   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476.pem (1338 bytes)
	W0310 21:22:22.798139   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1476_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.798139   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728.pem (1338 bytes)
	W0310 21:22:22.799063   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1728_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.799063   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984.pem (1338 bytes)
	W0310 21:22:22.799063   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\1984_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.799063   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232.pem (1338 bytes)
	W0310 21:22:22.799063   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\232_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.799063   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512.pem (1338 bytes)
	W0310 21:22:22.800075   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\2512_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.800075   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056.pem (1338 bytes)
	W0310 21:22:22.800075   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3056_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.800075   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516.pem (1338 bytes)
	W0310 21:22:22.801035   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3516_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.801035   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352.pem (1338 bytes)
	W0310 21:22:22.801035   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\352_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.801035   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920.pem (1338 bytes)
	W0310 21:22:22.801035   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\3920_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.802033   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052.pem (1338 bytes)
	W0310 21:22:22.802033   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4052_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.802033   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452.pem (1338 bytes)
	W0310 21:22:22.803086   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4452_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.803086   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588.pem (1338 bytes)
	W0310 21:22:22.803086   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4588_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.803086   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944.pem (1338 bytes)
	W0310 21:22:22.804029   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\4944_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.804029   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040.pem (1338 bytes)
	W0310 21:22:22.804029   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5040_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.804029   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172.pem (1338 bytes)
	W0310 21:22:22.804029   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5172_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.804029   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372.pem (1338 bytes)
	W0310 21:22:22.804029   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5372_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.804029   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396.pem (1338 bytes)
	W0310 21:22:22.804029   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5396_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.804029   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700.pem (1338 bytes)
	W0310 21:22:22.807071   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5700_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.807268   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736.pem (1338 bytes)
	W0310 21:22:22.807882   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\5736_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.808120   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368.pem (1338 bytes)
	W0310 21:22:22.808966   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6368_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.809365   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492.pem (1338 bytes)
	W0310 21:22:22.809652   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6492_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.809875   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496.pem (1338 bytes)
	W0310 21:22:22.810107   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6496_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.810107   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552.pem (1338 bytes)
	W0310 21:22:22.810107   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6552_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.810721   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692.pem (1338 bytes)
	W0310 21:22:22.810721   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6692_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.810721   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856.pem (1338 bytes)
	W0310 21:22:22.810721   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\6856_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.810721   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024.pem (1338 bytes)
	W0310 21:22:22.810721   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7024_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.811887   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160.pem (1338 bytes)
	W0310 21:22:22.811887   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7160_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.811887   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432.pem (1338 bytes)
	W0310 21:22:22.811887   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7432_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.811887   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440.pem (1338 bytes)
	W0310 21:22:22.812885   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7440_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.813023   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452.pem (1338 bytes)
	W0310 21:22:22.813307   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\7452_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.813307   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800.pem (1338 bytes)
	W0310 21:22:22.813650   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\800_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.813650   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464.pem (1338 bytes)
	W0310 21:22:22.813879   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8464_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.814119   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748.pem (1338 bytes)
	W0310 21:22:22.814358   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\8748_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.814592   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088.pem (1338 bytes)
	W0310 21:22:22.814895   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9088_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.815073   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520.pem (1338 bytes)
	W0310 21:22:22.815305   13364 certs.go:350] ignoring C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\9520_empty.pem, impossibly tiny 0 bytes
	I0310 21:22:22.815305   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca-key.pem (1679 bytes)
	I0310 21:22:22.815801   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\ca.pem (1078 bytes)
	I0310 21:22:22.816162   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\cert.pem (1123 bytes)
	I0310 21:22:22.816492   13364 certs.go:354] found cert: C:\Users\jenkins\.minikube\certs\C:\Users\jenkins\.minikube\certs\key.pem (1679 bytes)
	I0310 21:22:22.837387   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0310 21:22:23.263988   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0310 21:22:23.635324   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0310 21:22:23.818552   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\profiles\custom-weave-20210310211916-6496\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0310 21:22:24.062537   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0310 21:22:24.424135   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0310 21:22:24.640634   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0310 21:22:24.882951   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0310 21:22:25.204449   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6552.pem --> /usr/share/ca-certificates/6552.pem (1338 bytes)
	I0310 21:22:25.455567   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7432.pem --> /usr/share/ca-certificates/7432.pem (1338 bytes)
	I0310 21:22:25.706160   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5396.pem --> /usr/share/ca-certificates/5396.pem (1338 bytes)
	I0310 21:22:25.925787   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\232.pem --> /usr/share/ca-certificates/232.pem (1338 bytes)
	I0310 21:22:26.305402   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3920.pem --> /usr/share/ca-certificates/3920.pem (1338 bytes)
	I0310 21:22:26.634666   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7452.pem --> /usr/share/ca-certificates/7452.pem (1338 bytes)
	I0310 21:22:26.835996   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3516.pem --> /usr/share/ca-certificates/3516.pem (1338 bytes)
	I0310 21:22:27.050998   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8464.pem --> /usr/share/ca-certificates/8464.pem (1338 bytes)
	I0310 21:22:27.223415   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\8748.pem --> /usr/share/ca-certificates/8748.pem (1338 bytes)
	I0310 21:22:27.426543   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5040.pem --> /usr/share/ca-certificates/5040.pem (1338 bytes)
	I0310 21:22:27.654988   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\10992.pem --> /usr/share/ca-certificates/10992.pem (1338 bytes)
	I0310 21:22:27.899286   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\2512.pem --> /usr/share/ca-certificates/2512.pem (1338 bytes)
	I0310 21:22:28.100239   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7440.pem --> /usr/share/ca-certificates/7440.pem (1338 bytes)
	I0310 21:22:28.282342   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5736.pem --> /usr/share/ca-certificates/5736.pem (1338 bytes)
	I0310 21:22:28.471855   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7024.pem --> /usr/share/ca-certificates/7024.pem (1338 bytes)
	I0310 21:22:28.747259   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5700.pem --> /usr/share/ca-certificates/5700.pem (1338 bytes)
	I0310 21:22:29.089713   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1140.pem --> /usr/share/ca-certificates/1140.pem (1338 bytes)
	I0310 21:22:29.327793   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5372.pem --> /usr/share/ca-certificates/5372.pem (1338 bytes)
	I0310 21:22:29.618239   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6496.pem --> /usr/share/ca-certificates/6496.pem (1338 bytes)
	I0310 21:22:30.004114   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9088.pem --> /usr/share/ca-certificates/9088.pem (1338 bytes)
	I0310 21:22:30.346341   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\7160.pem --> /usr/share/ca-certificates/7160.pem (1338 bytes)
	I0310 21:22:30.821502   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4452.pem --> /usr/share/ca-certificates/4452.pem (1338 bytes)
	I0310 21:22:31.192475   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\9520.pem --> /usr/share/ca-certificates/9520.pem (1338 bytes)
	I0310 21:22:31.656161   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1984.pem --> /usr/share/ca-certificates/1984.pem (1338 bytes)
	I0310 21:22:32.325854   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4052.pem --> /usr/share/ca-certificates/4052.pem (1338 bytes)
	I0310 21:22:32.911695   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6368.pem --> /usr/share/ca-certificates/6368.pem (1338 bytes)
	I0310 21:22:33.337187   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6856.pem --> /usr/share/ca-certificates/6856.pem (1338 bytes)
	I0310 21:22:33.963719   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\800.pem --> /usr/share/ca-certificates/800.pem (1338 bytes)
	I0310 21:22:34.499332   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\5172.pem --> /usr/share/ca-certificates/5172.pem (1338 bytes)
	I0310 21:22:34.874789   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1476.pem --> /usr/share/ca-certificates/1476.pem (1338 bytes)
	I0310 21:22:35.422459   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\3056.pem --> /usr/share/ca-certificates/3056.pem (1338 bytes)
	I0310 21:22:36.389758   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\12056.pem --> /usr/share/ca-certificates/12056.pem (1338 bytes)
	I0310 21:22:36.933562   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6492.pem --> /usr/share/ca-certificates/6492.pem (1338 bytes)
	I0310 21:22:37.412611   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0310 21:22:38.178541   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1728.pem --> /usr/share/ca-certificates/1728.pem (1338 bytes)
	I0310 21:22:38.615336   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4944.pem --> /usr/share/ca-certificates/4944.pem (1338 bytes)
	I0310 21:22:39.168104   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\4588.pem --> /usr/share/ca-certificates/4588.pem (1338 bytes)
	I0310 21:22:39.689450   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\6692.pem --> /usr/share/ca-certificates/6692.pem (1338 bytes)
	I0310 21:22:40.165689   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\1156.pem --> /usr/share/ca-certificates/1156.pem (1338 bytes)
	I0310 21:22:40.695582   13364 ssh_runner.go:316] scp C:\Users\jenkins\.minikube\certs\352.pem --> /usr/share/ca-certificates/352.pem (1338 bytes)
	I0310 21:22:41.165600   13364 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0310 21:22:41.552859   13364 ssh_runner.go:149] Run: openssl version
	I0310 21:22:41.777105   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4588.pem && ln -fs /usr/share/ca-certificates/4588.pem /etc/ssl/certs/4588.pem"
	I0310 21:22:41.926753   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4588.pem
	I0310 21:22:41.986600   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  3 21:41 /usr/share/ca-certificates/4588.pem
	I0310 21:22:41.997922   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4588.pem
	I0310 21:22:42.063601   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4588.pem /etc/ssl/certs/51391683.0"
	I0310 21:22:42.163300   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6692.pem && ln -fs /usr/share/ca-certificates/6692.pem /etc/ssl/certs/6692.pem"
	I0310 21:22:42.384078   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6692.pem
	I0310 21:22:42.446697   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 20:42 /usr/share/ca-certificates/6692.pem
	I0310 21:22:42.466491   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6692.pem
	I0310 21:22:42.535119   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6692.pem /etc/ssl/certs/51391683.0"
	I0310 21:22:42.610161   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1156.pem && ln -fs /usr/share/ca-certificates/1156.pem /etc/ssl/certs/1156.pem"
	I0310 21:22:42.722635   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1156.pem
	I0310 21:22:42.760428   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 00:26 /usr/share/ca-certificates/1156.pem
	I0310 21:22:42.774716   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1156.pem
	I0310 21:22:42.926802   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1156.pem /etc/ssl/certs/51391683.0"
	I0310 21:22:43.085994   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/352.pem && ln -fs /usr/share/ca-certificates/352.pem /etc/ssl/certs/352.pem"
	I0310 21:22:43.269629   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/352.pem
	I0310 21:22:43.374650   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 12 14:51 /usr/share/ca-certificates/352.pem
	I0310 21:22:43.384874   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/352.pem
	I0310 21:22:43.484981   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/352.pem /etc/ssl/certs/51391683.0"
	I0310 21:22:43.781843   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7452.pem && ln -fs /usr/share/ca-certificates/7452.pem /etc/ssl/certs/7452.pem"
	I0310 21:22:43.988511   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7452.pem
	I0310 21:22:44.129041   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 20 00:41 /usr/share/ca-certificates/7452.pem
	I0310 21:22:44.144334   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7452.pem
	I0310 21:22:44.218116   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7452.pem /etc/ssl/certs/51391683.0"
	I0310 21:22:44.343619   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6552.pem && ln -fs /usr/share/ca-certificates/6552.pem /etc/ssl/certs/6552.pem"
	I0310 21:22:44.536578   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6552.pem
	I0310 21:22:44.588918   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 19 22:08 /usr/share/ca-certificates/6552.pem
	I0310 21:22:44.603238   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6552.pem
	I0310 21:22:44.696050   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6552.pem /etc/ssl/certs/51391683.0"
	I0310 21:22:44.819524   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7432.pem && ln -fs /usr/share/ca-certificates/7432.pem /etc/ssl/certs/7432.pem"
	I0310 21:22:45.093945   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7432.pem
	I0310 21:22:45.148696   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 17:58 /usr/share/ca-certificates/7432.pem
	I0310 21:22:45.157744   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7432.pem
	I0310 21:22:45.276300   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7432.pem /etc/ssl/certs/51391683.0"
	I0310 21:22:45.392769   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5396.pem && ln -fs /usr/share/ca-certificates/5396.pem /etc/ssl/certs/5396.pem"
	I0310 21:22:45.605992   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5396.pem
	I0310 21:22:45.661228   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  8 23:38 /usr/share/ca-certificates/5396.pem
	I0310 21:22:45.680881   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5396.pem
	I0310 21:22:45.810710   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5396.pem /etc/ssl/certs/51391683.0"
	I0310 21:22:45.951546   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/232.pem && ln -fs /usr/share/ca-certificates/232.pem /etc/ssl/certs/232.pem"
	I0310 21:22:46.250690   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/232.pem
	I0310 21:22:46.301037   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 28 02:13 /usr/share/ca-certificates/232.pem
	I0310 21:22:46.308977   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/232.pem
	I0310 21:22:46.392784   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/232.pem /etc/ssl/certs/51391683.0"
	I0310 21:22:46.656074   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3920.pem && ln -fs /usr/share/ca-certificates/3920.pem /etc/ssl/certs/3920.pem"
	I0310 21:22:46.760290   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3920.pem
	I0310 21:22:46.804172   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 22:06 /usr/share/ca-certificates/3920.pem
	I0310 21:22:46.806402   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3920.pem
	I0310 21:22:46.945468   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3920.pem /etc/ssl/certs/51391683.0"
	I0310 21:22:47.146473   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2512.pem && ln -fs /usr/share/ca-certificates/2512.pem /etc/ssl/certs/2512.pem"
	I0310 21:22:47.361609   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2512.pem
	I0310 21:22:47.412987   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:32 /usr/share/ca-certificates/2512.pem
	I0310 21:22:47.425300   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2512.pem
	I0310 21:22:47.507357   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2512.pem /etc/ssl/certs/51391683.0"
	I0310 21:22:47.757400   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3516.pem && ln -fs /usr/share/ca-certificates/3516.pem /etc/ssl/certs/3516.pem"
	I0310 21:22:47.892814   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3516.pem
	I0310 21:22:47.996452   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 19:10 /usr/share/ca-certificates/3516.pem
	I0310 21:22:48.009865   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3516.pem
	I0310 21:22:48.118211   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3516.pem /etc/ssl/certs/51391683.0"
	I0310 21:22:48.264529   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8464.pem && ln -fs /usr/share/ca-certificates/8464.pem /etc/ssl/certs/8464.pem"
	I0310 21:22:48.436557   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8464.pem
	I0310 21:22:48.543497   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 15 02:32 /usr/share/ca-certificates/8464.pem
	I0310 21:22:48.553910   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8464.pem
	I0310 21:22:48.626779   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8464.pem /etc/ssl/certs/51391683.0"
	I0310 21:22:48.830596   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8748.pem && ln -fs /usr/share/ca-certificates/8748.pem /etc/ssl/certs/8748.pem"
	I0310 21:22:49.030465   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/8748.pem
	I0310 21:22:49.078647   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 19:09 /usr/share/ca-certificates/8748.pem
	I0310 21:22:49.086240   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8748.pem
	I0310 21:22:49.166763   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8748.pem /etc/ssl/certs/51391683.0"
	I0310 21:22:49.318888   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5040.pem && ln -fs /usr/share/ca-certificates/5040.pem /etc/ssl/certs/5040.pem"
	I0310 21:22:49.476594   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5040.pem
	I0310 21:22:49.515252   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 08:36 /usr/share/ca-certificates/5040.pem
	I0310 21:22:49.527649   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5040.pem
	I0310 21:22:49.611517   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5040.pem /etc/ssl/certs/51391683.0"
	I0310 21:22:49.826922   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10992.pem && ln -fs /usr/share/ca-certificates/10992.pem /etc/ssl/certs/10992.pem"
	I0310 21:22:50.010711   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/10992.pem
	I0310 21:22:50.051992   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 21:44 /usr/share/ca-certificates/10992.pem
	I0310 21:22:50.063484   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10992.pem
	I0310 21:22:50.179127   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10992.pem /etc/ssl/certs/51391683.0"
	I0310 21:22:50.400879   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7440.pem && ln -fs /usr/share/ca-certificates/7440.pem /etc/ssl/certs/7440.pem"
	I0310 21:22:50.688255   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7440.pem
	I0310 21:22:50.784984   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 13 14:39 /usr/share/ca-certificates/7440.pem
	I0310 21:22:50.797007   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7440.pem
	I0310 21:22:50.913124   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7440.pem /etc/ssl/certs/51391683.0"
	I0310 21:22:51.138628   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5736.pem && ln -fs /usr/share/ca-certificates/5736.pem /etc/ssl/certs/5736.pem"
	I0310 21:22:51.528521   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5736.pem
	I0310 21:22:51.579284   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 25 23:18 /usr/share/ca-certificates/5736.pem
	I0310 21:22:51.584040   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5736.pem
	I0310 21:22:51.676314   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5736.pem /etc/ssl/certs/51391683.0"
	I0310 21:22:51.820975   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7024.pem && ln -fs /usr/share/ca-certificates/7024.pem /etc/ssl/certs/7024.pem"
	I0310 21:22:51.958989   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7024.pem
	I0310 21:22:52.039677   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 23:11 /usr/share/ca-certificates/7024.pem
	I0310 21:22:52.050090   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7024.pem
	I0310 21:22:52.119895   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7024.pem /etc/ssl/certs/51391683.0"
	I0310 21:22:52.266176   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5700.pem && ln -fs /usr/share/ca-certificates/5700.pem /etc/ssl/certs/5700.pem"
	I0310 21:22:52.418489   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5700.pem
	I0310 21:22:52.501201   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  1 19:58 /usr/share/ca-certificates/5700.pem
	I0310 21:22:52.504573   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5700.pem
	I0310 21:22:52.574585   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5700.pem /etc/ssl/certs/51391683.0"
	I0310 21:22:52.740920   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1140.pem && ln -fs /usr/share/ca-certificates/1140.pem /etc/ssl/certs/1140.pem"
	I0310 21:22:52.954776   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1140.pem
	I0310 21:22:53.001390   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 20 02:25 /usr/share/ca-certificates/1140.pem
	I0310 21:22:53.012934   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1140.pem
	I0310 21:22:53.111366   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1140.pem /etc/ssl/certs/51391683.0"
	I0310 21:22:53.236972   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5372.pem && ln -fs /usr/share/ca-certificates/5372.pem /etc/ssl/certs/5372.pem"
	I0310 21:22:53.378106   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5372.pem
	I0310 21:22:53.464106   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 23 00:40 /usr/share/ca-certificates/5372.pem
	I0310 21:22:53.473192   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5372.pem
	I0310 21:22:53.577350   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5372.pem /etc/ssl/certs/51391683.0"
	I0310 21:22:53.914941   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6496.pem && ln -fs /usr/share/ca-certificates/6496.pem /etc/ssl/certs/6496.pem"
	I0310 21:22:54.197456   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6496.pem
	I0310 21:22:54.248004   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar 10 19:16 /usr/share/ca-certificates/6496.pem
	I0310 21:22:54.250832   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6496.pem
	I0310 21:22:54.313756   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6496.pem /etc/ssl/certs/51391683.0"
	I0310 21:22:54.604905   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9088.pem && ln -fs /usr/share/ca-certificates/9088.pem /etc/ssl/certs/9088.pem"
	I0310 21:22:54.868486   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9088.pem
	I0310 21:22:54.948334   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  7 00:22 /usr/share/ca-certificates/9088.pem
	I0310 21:22:54.959678   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9088.pem
	I0310 21:22:55.057177   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9088.pem /etc/ssl/certs/51391683.0"
	I0310 21:22:55.193122   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7160.pem && ln -fs /usr/share/ca-certificates/7160.pem /etc/ssl/certs/7160.pem"
	I0310 21:22:55.436221   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/7160.pem
	I0310 21:22:55.492049   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 12 04:51 /usr/share/ca-certificates/7160.pem
	I0310 21:22:55.511433   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7160.pem
	I0310 21:22:55.645642   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7160.pem /etc/ssl/certs/51391683.0"
	I0310 21:22:55.791420   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4452.pem && ln -fs /usr/share/ca-certificates/4452.pem /etc/ssl/certs/4452.pem"
	I0310 21:22:55.937298   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4452.pem
	I0310 21:22:56.052840   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 22 23:48 /usr/share/ca-certificates/4452.pem
	I0310 21:22:56.072405   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4452.pem
	I0310 21:22:56.137091   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4452.pem /etc/ssl/certs/51391683.0"
	I0310 21:22:56.294837   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/800.pem && ln -fs /usr/share/ca-certificates/800.pem /etc/ssl/certs/800.pem"
	I0310 21:22:56.489389   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/800.pem
	I0310 21:22:56.522837   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 24 01:48 /usr/share/ca-certificates/800.pem
	I0310 21:22:56.528162   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/800.pem
	I0310 21:22:56.608312   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/800.pem /etc/ssl/certs/51391683.0"
	I0310 21:22:56.789980   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9520.pem && ln -fs /usr/share/ca-certificates/9520.pem /etc/ssl/certs/9520.pem"
	I0310 21:22:56.905970   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9520.pem
	I0310 21:22:56.987596   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Feb 19 14:54 /usr/share/ca-certificates/9520.pem
	I0310 21:22:57.004702   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9520.pem
	I0310 21:22:57.169006   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9520.pem /etc/ssl/certs/51391683.0"
	I0310 21:22:57.304936   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1984.pem && ln -fs /usr/share/ca-certificates/1984.pem /etc/ssl/certs/1984.pem"
	I0310 21:22:57.422484   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1984.pem
	I0310 21:22:57.487023   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 21:55 /usr/share/ca-certificates/1984.pem
	I0310 21:22:57.499727   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1984.pem
	I0310 21:22:57.626170   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1984.pem /etc/ssl/certs/51391683.0"
	I0310 21:22:57.969142   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4052.pem && ln -fs /usr/share/ca-certificates/4052.pem /etc/ssl/certs/4052.pem"
	I0310 21:22:58.179902   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4052.pem
	I0310 21:22:58.226961   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  4 18:40 /usr/share/ca-certificates/4052.pem
	I0310 21:22:58.230416   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4052.pem
	I0310 21:22:58.346611   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4052.pem /etc/ssl/certs/51391683.0"
	I0310 21:22:58.537128   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6368.pem && ln -fs /usr/share/ca-certificates/6368.pem /etc/ssl/certs/6368.pem"
	I0310 21:22:58.712574   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6368.pem
	I0310 21:22:58.750098   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 14 19:11 /usr/share/ca-certificates/6368.pem
	I0310 21:22:58.758607   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6368.pem
	I0310 21:22:58.881306   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6368.pem /etc/ssl/certs/51391683.0"
	I0310 21:22:58.975615   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6856.pem && ln -fs /usr/share/ca-certificates/6856.pem /etc/ssl/certs/6856.pem"
	I0310 21:22:59.235982   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6856.pem
	I0310 21:22:59.370660   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 00:21 /usr/share/ca-certificates/6856.pem
	I0310 21:22:59.387562   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6856.pem
	I0310 21:22:59.488020   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6856.pem /etc/ssl/certs/51391683.0"
	I0310 21:22:59.758887   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1728.pem && ln -fs /usr/share/ca-certificates/1728.pem /etc/ssl/certs/1728.pem"
	I0310 21:22:59.992730   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1728.pem
	I0310 21:23:00.115413   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 20:57 /usr/share/ca-certificates/1728.pem
	I0310 21:23:00.126577   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1728.pem
	I0310 21:23:00.236295   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1728.pem /etc/ssl/certs/51391683.0"
	I0310 21:23:00.311321   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4944.pem && ln -fs /usr/share/ca-certificates/4944.pem /etc/ssl/certs/4944.pem"
	I0310 21:23:00.425330   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/4944.pem
	I0310 21:23:00.491633   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  9 23:40 /usr/share/ca-certificates/4944.pem
	I0310 21:23:00.503334   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4944.pem
	I0310 21:23:00.636170   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4944.pem /etc/ssl/certs/51391683.0"
	I0310 21:23:00.811090   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5172.pem && ln -fs /usr/share/ca-certificates/5172.pem /etc/ssl/certs/5172.pem"
	I0310 21:23:00.988716   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/5172.pem
	I0310 21:23:01.078451   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan 26 21:25 /usr/share/ca-certificates/5172.pem
	I0310 21:23:01.089514   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5172.pem
	I0310 21:23:01.295502   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5172.pem /etc/ssl/certs/51391683.0"
	I0310 21:23:01.475126   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1476.pem && ln -fs /usr/share/ca-certificates/1476.pem /etc/ssl/certs/1476.pem"
	I0310 21:23:01.723637   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/1476.pem
	I0310 21:23:01.784616   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:50 /usr/share/ca-certificates/1476.pem
	I0310 21:23:01.814956   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1476.pem
	I0310 21:23:01.940157   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1476.pem /etc/ssl/certs/51391683.0"
	I0310 21:23:02.129724   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3056.pem && ln -fs /usr/share/ca-certificates/3056.pem /etc/ssl/certs/3056.pem"
	I0310 21:23:02.282340   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/3056.pem
	I0310 21:23:02.331168   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  5 23:11 /usr/share/ca-certificates/3056.pem
	I0310 21:23:02.344425   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3056.pem
	I0310 21:23:02.482957   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3056.pem /etc/ssl/certs/51391683.0"
	I0310 21:23:02.713632   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12056.pem && ln -fs /usr/share/ca-certificates/12056.pem /etc/ssl/certs/12056.pem"
	I0310 21:23:02.849401   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/12056.pem
	I0310 21:23:02.919448   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Mar  6 07:21 /usr/share/ca-certificates/12056.pem
	I0310 21:23:02.921042   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12056.pem
	I0310 21:23:03.026805   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12056.pem /etc/ssl/certs/51391683.0"
	I0310 21:23:03.176645   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6492.pem && ln -fs /usr/share/ca-certificates/6492.pem /etc/ssl/certs/6492.pem"
	I0310 21:23:03.554205   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6492.pem
	I0310 21:23:03.596336   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1338 Jan  6 01:11 /usr/share/ca-certificates/6492.pem
	I0310 21:23:03.614560   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6492.pem
	I0310 21:23:03.827987   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6492.pem /etc/ssl/certs/51391683.0"
	I0310 21:23:04.022880   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0310 21:23:04.158868   13364 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0310 21:23:04.288603   13364 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Jan  5 23:33 /usr/share/ca-certificates/minikubeCA.pem
	I0310 21:23:04.306699   13364 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0310 21:23:04.434596   13364 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
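
The repeated sequence in the log above (`openssl x509 -hash -noout` followed by `ln -fs ... /etc/ssl/certs/<hash>.0`) is the standard OpenSSL trust-store convention: each CA certificate is symlinked under its subject-name hash so OpenSSL's directory lookup can find it. A minimal sketch of the same steps, using a throwaway certificate and a scratch directory in place of `/etc/ssl/certs` (all paths here are illustrative, not taken from the run):

```shell
# Generate a throwaway self-signed CA certificate (illustrative name).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=example-ca" -keyout /tmp/ca.key -out /tmp/ca.pem 2>/dev/null

# Compute the OpenSSL subject-name hash the trust store is keyed on.
hash=$(openssl x509 -hash -noout -in /tmp/ca.pem)

# Link the certificate under <hash>.0, mirroring what the log shows
# minikube doing for each .pem in /etc/ssl/certs.
mkdir -p /tmp/certs
ln -fs /tmp/ca.pem "/tmp/certs/${hash}.0"
```

The `.0` suffix disambiguates multiple certificates whose subjects hash to the same value (`.1`, `.2`, and so on would follow), which is why the log overwrites `51391683.0` repeatedly while `minikubeCA.pem` gets its own distinct hash `b5213941.0`.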
	I0310 21:23:04.602229   13364 kubeadm.go:385] StartCluster: {Name:custom-weave-20210310211916-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:custom-weave-20210310211916-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.3 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 21:23:04.612028   13364 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0310 21:23:05.430000   13364 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0310 21:23:05.622643   13364 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0310 21:23:05.818781   13364 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	I0310 21:23:05.831812   13364 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0310 21:23:06.198354   13364 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0310 21:23:06.198674   13364 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0310 21:27:55.875430   13364 out.go:150]   - Generating certificates and keys ...
	I0310 21:27:55.888171   13364 out.go:150]   - Booting up control plane ...
	W0310 21:27:55.921945   13364 out.go:191] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [custom-weave-20210310211916-6496 localhost] and IPs [172.17.0.3 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [custom-weave-20210310211916-6496 localhost] and IPs [172.17.0.3 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0310 21:27:55.922556   13364 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0310 21:29:20.441413   13364 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force": (1m24.5191686s)
	I0310 21:29:20.455920   13364 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0310 21:29:20.625916   13364 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0310 21:29:21.493676   13364 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
	I0310 21:29:21.508041   13364 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0310 21:29:21.774904   13364 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0310 21:29:21.775204   13364 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0310 21:34:08.032415   13364 out.go:150]   - Generating certificates and keys ...
	I0310 21:34:08.037609   13364 out.go:150]   - Booting up control plane ...
	I0310 21:34:08.083051   13364 kubeadm.go:387] StartCluster complete in 11m3.4832321s
	I0310 21:34:08.091852   13364 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0310 21:34:12.623276   13364 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: (4.531141s)
	I0310 21:34:12.623583   13364 logs.go:255] 1 containers: [ce297b30a4b9]
	I0310 21:34:12.632629   13364 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0310 21:34:20.601240   13364 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: (7.9686278s)
	I0310 21:34:20.601577   13364 logs.go:255] 1 containers: [3fe9587784db]
	I0310 21:34:20.610326   13364 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0310 21:34:26.210434   13364 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: (5.5995965s)
	I0310 21:34:26.210434   13364 logs.go:255] 0 containers: []
	W0310 21:34:26.210434   13364 logs.go:257] No container was found matching "coredns"
	I0310 21:34:26.217684   13364 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0310 21:34:29.848108   13364 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: (3.6304308s)
	I0310 21:34:29.848108   13364 logs.go:255] 1 containers: [5bb71c65d273]
	I0310 21:34:29.856742   13364 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0310 21:34:33.954089   13364 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: (4.0970651s)
	I0310 21:34:33.954089   13364 logs.go:255] 0 containers: []
	W0310 21:34:33.954089   13364 logs.go:257] No container was found matching "kube-proxy"
	I0310 21:34:33.961609   13364 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0310 21:34:38.636934   13364 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}: (4.6744009s)
	I0310 21:34:38.636934   13364 logs.go:255] 0 containers: []
	W0310 21:34:38.637288   13364 logs.go:257] No container was found matching "kubernetes-dashboard"
	I0310 21:34:38.655092   13364 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0310 21:34:44.275282   13364 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: (5.6202007s)
	I0310 21:34:44.275282   13364 logs.go:255] 0 containers: []
	W0310 21:34:44.275282   13364 logs.go:257] No container was found matching "storage-provisioner"
	I0310 21:34:44.283179   13364 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0310 21:34:47.468846   13364 ssh_runner.go:189] Completed: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: (3.1856722s)
	I0310 21:34:47.468846   13364 logs.go:255] 1 containers: [49e8f0949543]
	I0310 21:34:47.468846   13364 logs.go:122] Gathering logs for etcd [3fe9587784db] ...
	I0310 21:34:47.468846   13364 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 3fe9587784db"
	I0310 21:34:49.174111   13364 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 3fe9587784db": (1.7052683s)
	I0310 21:34:49.204688   13364 logs.go:122] Gathering logs for kube-controller-manager [49e8f0949543] ...
	I0310 21:34:49.204782   13364 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 49e8f0949543"
	I0310 21:34:52.345961   13364 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 49e8f0949543": (3.1411848s)
	I0310 21:34:52.352304   13364 logs.go:122] Gathering logs for dmesg ...
	I0310 21:34:52.352594   13364 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0310 21:34:53.334996   13364 logs.go:122] Gathering logs for describe nodes ...
	I0310 21:34:53.334996   13364 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0310 21:35:23.094964   13364 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (29.7600228s)
	I0310 21:35:23.101536   13364 logs.go:122] Gathering logs for kube-scheduler [5bb71c65d273] ...
	I0310 21:35:23.102125   13364 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 5bb71c65d273"
	I0310 21:35:29.053905   13364 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 5bb71c65d273": (5.9517902s)
	I0310 21:35:29.082016   13364 logs.go:122] Gathering logs for Docker ...
	I0310 21:35:29.082016   13364 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0310 21:35:30.411112   13364 ssh_runner.go:189] Completed: /bin/bash -c "sudo journalctl -u docker -n 400": (1.329099s)
	I0310 21:35:30.425190   13364 logs.go:122] Gathering logs for container status ...
	I0310 21:35:30.425190   13364 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0310 21:35:31.947525   13364 ssh_runner.go:189] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (1.5223376s)
	I0310 21:35:31.948377   13364 logs.go:122] Gathering logs for kubelet ...
	I0310 21:35:31.948377   13364 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0310 21:35:33.266057   13364 ssh_runner.go:189] Completed: /bin/bash -c "sudo journalctl -u kubelet -n 400": (1.3176826s)
	I0310 21:35:33.344866   13364 logs.go:122] Gathering logs for kube-apiserver [ce297b30a4b9] ...
	I0310 21:35:33.344866   13364 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 ce297b30a4b9"
	I0310 21:35:39.557280   13364 ssh_runner.go:189] Completed: /bin/bash -c "docker logs --tail 400 ce297b30a4b9": (6.212425s)
	W0310 21:35:39.591937   13364 out.go:312] Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0310 21:35:39.592887   13364 out.go:191] * 
	W0310 21:35:39.592887   13364 out.go:191] X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0310 21:35:39.592887   13364 out.go:191] * 
	* 
	W0310 21:35:39.592887   13364 out.go:191] * minikube is exiting due to an error. If the above message is not useful, open an issue:
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	W0310 21:35:39.592887   13364 out.go:191]   - https://github.com/kubernetes/minikube/issues/new/choose
	  - https://github.com/kubernetes/minikube/issues/new/choose
	I0310 21:35:39.613902   13364 out.go:129] 
	W0310 21:35:39.614952   13364 out.go:191] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0310 21:35:39.615949   13364 out.go:191] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0310 21:35:39.615949   13364 out.go:191] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0310 21:35:39.620189   13364 out.go:129] 

** /stderr **
net_test.go:82: failed start: exit status 109
--- FAIL: TestNetworkPlugins/group/custom-weave/Start (983.86s)
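The kubeadm output above suggests finding the failing control-plane container with `docker ps -a | grep kube | grep -v pause`. A minimal sketch of that filter pipeline against sample lines (the container names here are hypothetical; on a real minikube node you would pipe `docker ps -a` directly):

```shell
# Stand-in lines for `docker ps -a` output on the node.
# Keep lines mentioning kube components, drop pause sandbox containers.
printf '%s\n' \
  'k8s_kube-apiserver_abc123' \
  'k8s_POD_kube-apiserver_pause' \
  'nginx_web' | grep kube | grep -v pause
# → k8s_kube-apiserver_abc123
```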


Test pass (127/176)

Order passed test Duration
3 TestDownloadOnly/v1.14.0/json-events 5.97
4 TestDownloadOnly/v1.14.0/preload-exists 0
6 TestDownloadOnly/v1.14.0/binaries 0
7 TestDownloadOnly/v1.14.0/kubectl 0
9 TestDownloadOnly/v1.20.2/json-events 4.77
10 TestDownloadOnly/v1.20.2/preload-exists 0
12 TestDownloadOnly/v1.20.2/binaries 0
13 TestDownloadOnly/v1.20.2/kubectl 0
15 TestDownloadOnly/v1.20.5-rc.0/json-events 4.74
16 TestDownloadOnly/v1.20.5-rc.0/preload-exists 0
18 TestDownloadOnly/v1.20.5-rc.0/binaries 0
19 TestDownloadOnly/v1.20.5-rc.0/kubectl 0
20 TestDownloadOnly/DeleteAll 4.79
21 TestDownloadOnly/DeleteAlwaysSucceeds 2.87
22 TestDownloadOnlyKic 44.61
27 TestAddons/parallel/Ingress 36.72
28 TestAddons/parallel/MetricsServer 68.11
29 TestAddons/parallel/HelmTiller 59.41
31 TestAddons/parallel/CSI 148.61
32 TestAddons/parallel/GCPAuth 60.94
43 TestFunctional/serial/CopySyncFile 0.01
44 TestFunctional/serial/StartWithProxy 234.77
45 TestFunctional/serial/AuditLog 0.03
46 TestFunctional/serial/SoftStart 32.73
47 TestFunctional/serial/KubeContext 0.15
48 TestFunctional/serial/KubectlGetPods 0.61
51 TestFunctional/serial/CacheCmd/cache/add_remote 10.38
52 TestFunctional/serial/CacheCmd/cache/add_local 3.81
53 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.49
54 TestFunctional/serial/CacheCmd/cache/list 0.4
55 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 2.72
56 TestFunctional/serial/CacheCmd/cache/cache_reload 21.98
57 TestFunctional/serial/CacheCmd/cache/delete 0.83
58 TestFunctional/serial/MinikubeKubectlCmd 1.73
60 TestFunctional/serial/ExtraConfig 163.87
61 TestFunctional/serial/ComponentHealth 0.27
63 TestFunctional/parallel/ConfigCmd 3.46
65 TestFunctional/parallel/DryRun 9.44
66 TestFunctional/parallel/StatusCmd 16.7
67 TestFunctional/parallel/LogsCmd 30.85
71 TestFunctional/parallel/AddonsCmd 2.03
72 TestFunctional/parallel/PersistentVolumeClaim 128.98
74 TestFunctional/parallel/SSHCmd 9.81
75 TestFunctional/parallel/MySQL 106.33
76 TestFunctional/parallel/FileSync 5.6
77 TestFunctional/parallel/CertSync 14.03
80 TestFunctional/parallel/NodeLabels 0.74
81 TestFunctional/parallel/LoadImage 22.78
83 TestFunctional/parallel/UpdateContextCmd/no_changes 2.16
84 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.03
85 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 1.94
86 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.07
88 TestFunctional/parallel/ProfileCmd/profile_not_create 5.47
89 TestFunctional/parallel/ProfileCmd/profile_list 6.21
90 TestFunctional/parallel/ProfileCmd/profile_json_output 5.77
91 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.25
96 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
100 TestJSONOutput/start/Audit 0
102 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
103 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
105 TestJSONOutput/pause/Audit 0
107 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
108 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
110 TestJSONOutput/unpause/Audit 0
112 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
113 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
115 TestJSONOutput/stop/Audit 0
117 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
118 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
119 TestErrorJSONOutput 3.17
121 TestKicCustomNetwork/create_custom_network 180.78
122 TestKicCustomNetwork/use_default_bridge_network 177.88
123 TestKicExistingNetwork 178.46
124 TestMainNoArgs 0.35
127 TestMultiNode/serial/FreshStart2Nodes 356.05
128 TestMultiNode/serial/AddNode 115.95
129 TestMultiNode/serial/ProfileList 2.85
130 TestMultiNode/serial/StopNode 14.46
131 TestMultiNode/serial/StartAfterStop 48.47
132 TestMultiNode/serial/DeleteNode 23.83
133 TestMultiNode/serial/StopMultiNode 21.02
135 TestMultiNode/serial/ValidateNameConflict 195.83
140 TestDebPackageInstall/install_amd64_debian:sid/minikube 0
141 TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver 0
143 TestDebPackageInstall/install_amd64_debian:latest/minikube 0
144 TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver 0
146 TestDebPackageInstall/install_amd64_debian:10/minikube 0
147 TestDebPackageInstall/install_amd64_debian:10/kvm2-driver 0
149 TestDebPackageInstall/install_amd64_debian:9/minikube 0
150 TestDebPackageInstall/install_amd64_debian:9/kvm2-driver 0
152 TestDebPackageInstall/install_amd64_ubuntu:latest/minikube 0
153 TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver 0
155 TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube 0
156 TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver 0
158 TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube 0
159 TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver 0
161 TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube 0
162 TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver 0
163 TestPreload 342.59
164 TestScheduledStopWindows 210.02
168 TestInsufficientStorage 39.8
204 TestStartStop/group/old-k8s-version/serial/Stop 21.82
205 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 2.12
207 TestStartStop/group/embed-certs/serial/DeployApp 199
208 TestStoppedBinaryUpgrade/MinikubeLogs 88.4
210 TestStartStop/group/embed-certs/serial/Stop 35.89
212 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 2.47
216 TestStartStop/group/no-preload/serial/Stop 34.17
218 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 2.57
222 TestNetworkPlugins/group/enable-default-cni/Start 1241.44
224 TestNetworkPlugins/group/bridge/Start 793.04
225 TestNetworkPlugins/group/kubenet/Start 715.27
226 TestNetworkPlugins/group/bridge/KubeletFlags 3.53
227 TestNetworkPlugins/group/bridge/NetCatPod 61.64
228 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 3.74
229 TestNetworkPlugins/group/enable-default-cni/NetCatPod 58.34
230 TestNetworkPlugins/group/bridge/DNS 1.36
231 TestNetworkPlugins/group/bridge/Localhost 3.75
232 TestNetworkPlugins/group/kubenet/KubeletFlags 3.94
233 TestNetworkPlugins/group/bridge/HairPin 2.6
234 TestNetworkPlugins/group/kubenet/NetCatPod 37.46
235 TestNetworkPlugins/group/enable-default-cni/DNS 1.06
236 TestNetworkPlugins/group/enable-default-cni/Localhost 0.78
237 TestNetworkPlugins/group/enable-default-cni/HairPin 0.91
238 TestNetworkPlugins/group/kubenet/DNS 1.12
239 TestNetworkPlugins/group/kubenet/Localhost 1.58
240 TestNetworkPlugins/group/kubenet/HairPin 1.08
TestDownloadOnly/v1.14.0/json-events (5.97s)

=== RUN   TestDownloadOnly/v1.14.0/json-events
aaa_download_only_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20210310190420-6496 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:68: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20210310190420-6496 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=docker --driver=docker: (5.9652609s)
--- PASS: TestDownloadOnly/v1.14.0/json-events (5.97s)

TestDownloadOnly/v1.14.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.14.0/preload-exists
--- PASS: TestDownloadOnly/v1.14.0/preload-exists (0.00s)

TestDownloadOnly/v1.14.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.14.0/binaries
--- PASS: TestDownloadOnly/v1.14.0/binaries (0.00s)

TestDownloadOnly/v1.14.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.14.0/kubectl
--- PASS: TestDownloadOnly/v1.14.0/kubectl (0.00s)

TestDownloadOnly/v1.20.2/json-events (4.77s)

=== RUN   TestDownloadOnly/v1.20.2/json-events
aaa_download_only_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20210310190420-6496 --force --alsologtostderr --kubernetes-version=v1.20.2 --container-runtime=docker --driver=docker
aaa_download_only_test.go:68: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20210310190420-6496 --force --alsologtostderr --kubernetes-version=v1.20.2 --container-runtime=docker --driver=docker: (4.7707781s)
--- PASS: TestDownloadOnly/v1.20.2/json-events (4.77s)

TestDownloadOnly/v1.20.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.2/preload-exists
--- PASS: TestDownloadOnly/v1.20.2/preload-exists (0.00s)

TestDownloadOnly/v1.20.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.2/binaries
--- PASS: TestDownloadOnly/v1.20.2/binaries (0.00s)

TestDownloadOnly/v1.20.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.2/kubectl
--- PASS: TestDownloadOnly/v1.20.2/kubectl (0.00s)

TestDownloadOnly/v1.20.5-rc.0/json-events (4.74s)

=== RUN   TestDownloadOnly/v1.20.5-rc.0/json-events
aaa_download_only_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20210310190420-6496 --force --alsologtostderr --kubernetes-version=v1.20.5-rc.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:68: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20210310190420-6496 --force --alsologtostderr --kubernetes-version=v1.20.5-rc.0 --container-runtime=docker --driver=docker: (4.7421313s)
--- PASS: TestDownloadOnly/v1.20.5-rc.0/json-events (4.74s)

TestDownloadOnly/v1.20.5-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.5-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.5-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.5-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.5-rc.0/binaries
--- PASS: TestDownloadOnly/v1.20.5-rc.0/binaries (0.00s)

TestDownloadOnly/v1.20.5-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.5-rc.0/kubectl
--- PASS: TestDownloadOnly/v1.20.5-rc.0/kubectl (0.00s)

TestDownloadOnly/DeleteAll (4.79s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:170: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:170: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (4.7893947s)
--- PASS: TestDownloadOnly/DeleteAll (4.79s)

TestDownloadOnly/DeleteAlwaysSucceeds (2.87s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-20210310190420-6496
aaa_download_only_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-20210310190420-6496: (2.8660409s)
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (2.87s)

TestDownloadOnlyKic (44.61s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-20210310190446-6496 --force --alsologtostderr --driver=docker
aaa_download_only_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-20210310190446-6496 --force --alsologtostderr --driver=docker: (37.7543801s)
helpers_test.go:171: Cleaning up "download-docker-20210310190446-6496" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-20210310190446-6496
helpers_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-docker-20210310190446-6496: (3.337036s)
--- PASS: TestDownloadOnlyKic (44.61s)

TestAddons/parallel/Ingress (36.72s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:129: (dbg) TestAddons/parallel/Ingress: waiting 12m0s for pods matching "app.kubernetes.io/name=ingress-nginx" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:335: "ingress-nginx-admission-create-ddzhv" [c22158f6-3895-44d3-97da-84db9e401d06] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:129: (dbg) TestAddons/parallel/Ingress: app.kubernetes.io/name=ingress-nginx healthy within 118.0033ms
addons_test.go:134: (dbg) Run:  kubectl --context addons-20210310190531-6496 replace --force -f testdata\nginx-ing.yaml

=== CONT  TestAddons/parallel/Ingress
addons_test.go:134: (dbg) Done: kubectl --context addons-20210310190531-6496 replace --force -f testdata\nginx-ing.yaml: (2.0779659s)
addons_test.go:139: kubectl --context addons-20210310190531-6496 replace --force -f testdata\nginx-ing.yaml: unexpected stderr: Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
(may be temporary)

=== CONT  TestAddons/parallel/Ingress
addons_test.go:148: (dbg) Run:  kubectl --context addons-20210310190531-6496 replace --force -f testdata\nginx-pod-svc.yaml

=== CONT  TestAddons/parallel/Ingress
addons_test.go:148: (dbg) Done: kubectl --context addons-20210310190531-6496 replace --force -f testdata\nginx-pod-svc.yaml: (1.1650116s)
addons_test.go:153: (dbg) TestAddons/parallel/Ingress: waiting 4m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:335: "nginx" [ce4405b7-dc1c-4613-bc1f-e7c90eb5ee59] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:335: "nginx" [ce4405b7-dc1c-4613-bc1f-e7c90eb5ee59] Running

=== CONT  TestAddons/parallel/Ingress
addons_test.go:153: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 22.0518411s
addons_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20210310190531-6496 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20210310190531-6496 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (3.9149308s)
addons_test.go:193: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20210310190531-6496 addons disable ingress --alsologtostderr -v=1

=== CONT  TestAddons/parallel/Ingress
addons_test.go:193: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20210310190531-6496 addons disable ingress --alsologtostderr -v=1: (6.9290436s)
--- PASS: TestAddons/parallel/Ingress (36.72s)

TestAddons/parallel/MetricsServer (68.11s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:287: metrics-server stabilized in 88.0216ms

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:289: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:335: "metrics-server-56c4f8c9d6-nsvnx" [09bd2649-17fd-430a-a83a-86bd466f716c] Running

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:289: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.1331785s

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:295: (dbg) Run:  kubectl --context addons-20210310190531-6496 top pods -n kube-system

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:295: (dbg) Non-zero exit: kubectl --context addons-20210310190531-6496 top pods -n kube-system: exit status 1 (861.0056ms)

** stderr ** 
	W0310 19:13:16.917625    7496 top_pod.go:265] Metrics not available for pod kube-system/coredns-74ff55c5b-gnp8m, age: 5m18.9168109s
	error: Metrics not available for pod kube-system/coredns-74ff55c5b-gnp8m, age: 5m18.9168109s

** /stderr **
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:295: (dbg) Run:  kubectl --context addons-20210310190531-6496 top pods -n kube-system
addons_test.go:295: (dbg) Non-zero exit: kubectl --context addons-20210310190531-6496 top pods -n kube-system: exit status 1 (306.0778ms)

** stderr ** 
	W0310 19:13:20.545098    8368 top_pod.go:265] Metrics not available for pod kube-system/coredns-74ff55c5b-gnp8m, age: 5m22.5450983s
	error: Metrics not available for pod kube-system/coredns-74ff55c5b-gnp8m, age: 5m22.5450983s

** /stderr **
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:295: (dbg) Run:  kubectl --context addons-20210310190531-6496 top pods -n kube-system
addons_test.go:295: (dbg) Non-zero exit: kubectl --context addons-20210310190531-6496 top pods -n kube-system: exit status 1 (357.3885ms)

** stderr ** 
	W0310 19:13:27.394101    4152 top_pod.go:265] Metrics not available for pod kube-system/coredns-74ff55c5b-gnp8m, age: 5m29.3941018s
	error: Metrics not available for pod kube-system/coredns-74ff55c5b-gnp8m, age: 5m29.3941018s

** /stderr **
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:295: (dbg) Run:  kubectl --context addons-20210310190531-6496 top pods -n kube-system
addons_test.go:295: (dbg) Non-zero exit: kubectl --context addons-20210310190531-6496 top pods -n kube-system: exit status 1 (428.7605ms)

** stderr ** 
	W0310 19:13:35.686922    8024 top_pod.go:265] Metrics not available for pod kube-system/coredns-74ff55c5b-gnp8m, age: 5m37.6869221s
	error: Metrics not available for pod kube-system/coredns-74ff55c5b-gnp8m, age: 5m37.6869221s

** /stderr **
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:295: (dbg) Run:  kubectl --context addons-20210310190531-6496 top pods -n kube-system
addons_test.go:295: (dbg) Non-zero exit: kubectl --context addons-20210310190531-6496 top pods -n kube-system: exit status 1 (700.948ms)

** stderr ** 
	W0310 19:13:45.897882    2128 top_pod.go:265] Metrics not available for pod kube-system/coredns-74ff55c5b-gnp8m, age: 5m47.8978828s
	error: Metrics not available for pod kube-system/coredns-74ff55c5b-gnp8m, age: 5m47.8978828s

** /stderr **
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:295: (dbg) Run:  kubectl --context addons-20210310190531-6496 top pods -n kube-system
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:295: (dbg) Non-zero exit: kubectl --context addons-20210310190531-6496 top pods -n kube-system: exit status 1 (373.9699ms)

** stderr ** 
	W0310 19:14:00.322064    8356 top_pod.go:265] Metrics not available for pod kube-system/coredns-74ff55c5b-gnp8m, age: 6m2.3220641s
	error: Metrics not available for pod kube-system/coredns-74ff55c5b-gnp8m, age: 6m2.3220641s

** /stderr **
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:295: (dbg) Run:  kubectl --context addons-20210310190531-6496 top pods -n kube-system
addons_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20210310190531-6496 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20210310190531-6496 addons disable metrics-server --alsologtostderr -v=1: (3.0467643s)
--- PASS: TestAddons/parallel/MetricsServer (68.11s)
TestAddons/parallel/HelmTiller (59.41s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:332: tiller-deploy stabilized in 89.003ms
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:334: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
=== CONT  TestAddons/parallel/HelmTiller
helpers_test.go:335: "tiller-deploy-7c86b7fbdf-pnrmr" [2e31381d-ea4d-4a92-aeb5-2610596240bd] Running
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:334: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.1341756s
addons_test.go:349: (dbg) Run:  kubectl --context addons-20210310190531-6496 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:349: (dbg) Done: kubectl --context addons-20210310190531-6496 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: (33.8992314s)
addons_test.go:354: kubectl --context addons-20210310190531-6496 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: 
addons_test.go:349: (dbg) Run:  kubectl --context addons-20210310190531-6496 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:349: (dbg) Done: kubectl --context addons-20210310190531-6496 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: (9.4913751s)
addons_test.go:354: kubectl --context addons-20210310190531-6496 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: 
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:349: (dbg) Run:  kubectl --context addons-20210310190531-6496 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version
addons_test.go:349: (dbg) Done: kubectl --context addons-20210310190531-6496 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: (6.2583174s)
addons_test.go:366: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20210310190531-6496 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:366: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20210310190531-6496 addons disable helm-tiller --alsologtostderr -v=1: (3.1689629s)
--- PASS: TestAddons/parallel/HelmTiller (59.41s)
TestAddons/parallel/CSI (148.61s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:447: csi-hostpath-driver pods stabilized in 168.0328ms
addons_test.go:450: (dbg) Run:  kubectl --context addons-20210310190531-6496 create -f testdata\csi-hostpath-driver\pvc.yaml
=== CONT  TestAddons/parallel/CSI
addons_test.go:450: (dbg) Done: kubectl --context addons-20210310190531-6496 create -f testdata\csi-hostpath-driver\pvc.yaml: (2.2420549s)
addons_test.go:455: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
=== CONT  TestAddons/parallel/CSI
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210310190531-6496 get pvc hpvc -o jsonpath={.status.phase} -n default
=== CONT  TestAddons/parallel/CSI
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210310190531-6496 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:460: (dbg) Run:  kubectl --context addons-20210310190531-6496 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:460: (dbg) Done: kubectl --context addons-20210310190531-6496 create -f testdata\csi-hostpath-driver\pv-pod.yaml: (1.1760956s)
addons_test.go:465: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:335: "task-pv-pod" [8a77fd4d-1f02-43f6-af2c-7f9372f9bae4] Pending
=== CONT  TestAddons/parallel/CSI
helpers_test.go:335: "task-pv-pod" [8a77fd4d-1f02-43f6-af2c-7f9372f9bae4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
=== CONT  TestAddons/parallel/CSI
helpers_test.go:335: "task-pv-pod" [8a77fd4d-1f02-43f6-af2c-7f9372f9bae4] Running
addons_test.go:465: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 1m8.075374s
addons_test.go:470: (dbg) Run:  kubectl --context addons-20210310190531-6496 create -f testdata\csi-hostpath-driver\snapshotclass.yaml
addons_test.go:476: (dbg) Run:  kubectl --context addons-20210310190531-6496 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:481: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310190531-6496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310190531-6496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310190531-6496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310190531-6496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310190531-6496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310190531-6496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310190531-6496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310190531-6496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310190531-6496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310190531-6496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310190531-6496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310190531-6496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310190531-6496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310190531-6496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310190531-6496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310190531-6496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310190531-6496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310190531-6496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310190531-6496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310190531-6496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310190531-6496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310190531-6496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310190531-6496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310190531-6496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310190531-6496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310190531-6496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310190531-6496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310190531-6496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310190531-6496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310190531-6496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310190531-6496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310190531-6496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310190531-6496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310190531-6496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310190531-6496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210310190531-6496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:486: (dbg) Run:  kubectl --context addons-20210310190531-6496 delete pod task-pv-pod
addons_test.go:486: (dbg) Done: kubectl --context addons-20210310190531-6496 delete pod task-pv-pod: (5.8546162s)
addons_test.go:492: (dbg) Run:  kubectl --context addons-20210310190531-6496 delete pvc hpvc
addons_test.go:498: (dbg) Run:  kubectl --context addons-20210310190531-6496 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:503: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210310190531-6496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210310190531-6496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:508: (dbg) Run:  kubectl --context addons-20210310190531-6496 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:335: "task-pv-pod-restore" [af6e8cfd-a831-4044-8398-56dd1dfc56ac] Pending
helpers_test.go:335: "task-pv-pod-restore" [af6e8cfd-a831-4044-8398-56dd1dfc56ac] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:335: "task-pv-pod-restore" [af6e8cfd-a831-4044-8398-56dd1dfc56ac] Running
addons_test.go:513: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 13.0388088s
addons_test.go:518: (dbg) Run:  kubectl --context addons-20210310190531-6496 delete pod task-pv-pod-restore
addons_test.go:518: (dbg) Done: kubectl --context addons-20210310190531-6496 delete pod task-pv-pod-restore: (3.7706553s)
addons_test.go:522: (dbg) Run:  kubectl --context addons-20210310190531-6496 delete pvc hpvc-restore
addons_test.go:526: (dbg) Run:  kubectl --context addons-20210310190531-6496 delete volumesnapshot new-snapshot-demo
addons_test.go:530: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20210310190531-6496 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:530: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20210310190531-6496 addons disable csi-hostpath-driver --alsologtostderr -v=1: (8.0646691s)
addons_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20210310190531-6496 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20210310190531-6496 addons disable volumesnapshots --alsologtostderr -v=1: (3.4965626s)
--- PASS: TestAddons/parallel/CSI (148.61s)
TestAddons/parallel/GCPAuth (60.94s)
=== RUN   TestAddons/parallel/GCPAuth
=== PAUSE TestAddons/parallel/GCPAuth
=== CONT  TestAddons/parallel/GCPAuth
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:544: (dbg) Run:  kubectl --context addons-20210310190531-6496 create -f testdata\busybox.yaml
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:544: (dbg) Done: kubectl --context addons-20210310190531-6496 create -f testdata\busybox.yaml: (2.5719456s)
addons_test.go:550: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:335: "busybox" [355288fa-3998-41ac-a439-b621b39e3e3f] Pending
helpers_test.go:335: "busybox" [355288fa-3998-41ac-a439-b621b39e3e3f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:335: "busybox" [355288fa-3998-41ac-a439-b621b39e3e3f] Running
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:550: (dbg) TestAddons/parallel/GCPAuth: integration-test=busybox healthy within 18.0785059s
addons_test.go:556: (dbg) Run:  kubectl --context addons-20210310190531-6496 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:556: (dbg) Done: kubectl --context addons-20210310190531-6496 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS": (1.0059606s)
addons_test.go:568: (dbg) Run:  kubectl --context addons-20210310190531-6496 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:568: (dbg) Done: kubectl --context addons-20210310190531-6496 exec busybox -- /bin/sh -c "cat /google-app-creds.json": (1.3751471s)
addons_test.go:591: (dbg) Run:  kubectl --context addons-20210310190531-6496 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:591: (dbg) Done: kubectl --context addons-20210310190531-6496 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT": (1.2660866s)
addons_test.go:602: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20210310190531-6496 addons disable gcp-auth --alsologtostderr -v=1
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:602: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20210310190531-6496 addons disable gcp-auth --alsologtostderr -v=1: (36.5851154s)
--- PASS: TestAddons/parallel/GCPAuth (60.94s)
TestFunctional/serial/CopySyncFile (0.01s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1202: local sync path: C:\Users\jenkins\.minikube\files\etc\test\nested\copy\6496\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)
TestFunctional/serial/StartWithProxy (234.77s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:284: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20210310191609-6496 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker
functional_test.go:284: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20210310191609-6496 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker: (3m54.7622179s)
--- PASS: TestFunctional/serial/StartWithProxy (234.77s)
TestFunctional/serial/AuditLog (0.03s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.03s)
TestFunctional/serial/SoftStart (32.73s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:327: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20210310191609-6496 --alsologtostderr -v=8
functional_test.go:327: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20210310191609-6496 --alsologtostderr -v=8: (32.7205755s)
functional_test.go:331: soft start took 32.7244436s for "functional-20210310191609-6496" cluster.
--- PASS: TestFunctional/serial/SoftStart (32.73s)
TestFunctional/serial/KubeContext (0.15s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:347: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.15s)
TestFunctional/serial/KubectlGetPods (0.61s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:360: (dbg) Run:  kubectl --context functional-20210310191609-6496 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.61s)
TestFunctional/serial/CacheCmd/cache/add_remote (10.38s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:641: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210310191609-6496 cache add k8s.gcr.io/pause:3.1
functional_test.go:641: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210310191609-6496 cache add k8s.gcr.io/pause:3.1: (3.3561968s)
functional_test.go:641: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210310191609-6496 cache add k8s.gcr.io/pause:3.3
functional_test.go:641: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210310191609-6496 cache add k8s.gcr.io/pause:3.3: (3.3980042s)
functional_test.go:641: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210310191609-6496 cache add k8s.gcr.io/pause:latest
functional_test.go:641: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210310191609-6496 cache add k8s.gcr.io/pause:latest: (3.6216978s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (10.38s)
TestFunctional/serial/CacheCmd/cache/add_local (3.81s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:670: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20210310191609-6496 C:\Users\jenkins\AppData\Local\Temp\functional-20210310191609-6496232354935
functional_test.go:670: (dbg) Done: docker build -t minikube-local-cache-test:functional-20210310191609-6496 C:\Users\jenkins\AppData\Local\Temp\functional-20210310191609-6496232354935: (1.300057s)
functional_test.go:675: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210310191609-6496 cache add minikube-local-cache-test:functional-20210310191609-6496
functional_test.go:675: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210310191609-6496 cache add minikube-local-cache-test:functional-20210310191609-6496: (2.4856641s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (3.81s)
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.49s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:682: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.49s)
TestFunctional/serial/CacheCmd/cache/list (0.40s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:689: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.40s)
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (2.72s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:702: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210310191609-6496 ssh sudo crictl images
functional_test.go:702: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210310191609-6496 ssh sudo crictl images: (2.717722s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (2.72s)
TestFunctional/serial/CacheCmd/cache/cache_reload (21.98s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:724: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210310191609-6496 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:724: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210310191609-6496 ssh sudo docker rmi k8s.gcr.io/pause:latest: (2.7484863s)
functional_test.go:730: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210310191609-6496 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:730: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20210310191609-6496 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (2.5367447s)
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:735: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210310191609-6496 cache reload
functional_test.go:735: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210310191609-6496 cache reload: (13.9648963s)
functional_test.go:740: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210310191609-6496 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:740: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210310191609-6496 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: (2.7254436s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (21.98s)
TestFunctional/serial/CacheCmd/cache/delete (0.83s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:749: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.1
functional_test.go:749: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.83s)
TestFunctional/serial/MinikubeKubectlCmd (1.73s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:378: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210310191609-6496 kubectl -- --context functional-20210310191609-6496 get pods
functional_test.go:378: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210310191609-6496 kubectl -- --context functional-20210310191609-6496 get pods: (1.7332828s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.73s)
TestFunctional/serial/ExtraConfig (163.87s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:410: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20210310191609-6496 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:410: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20210310191609-6496 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (2m43.8666902s)
functional_test.go:414: restart took 2m43.8676934s for "functional-20210310191609-6496" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (163.87s)
TestFunctional/serial/ComponentHealth (0.27s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:461: (dbg) Run:  kubectl --context functional-20210310191609-6496 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:475: etcd phase: Running
functional_test.go:485: etcd status: Ready
functional_test.go:475: kube-apiserver phase: Running
functional_test.go:485: kube-apiserver status: Ready
functional_test.go:475: kube-controller-manager phase: Running
functional_test.go:485: kube-controller-manager status: Ready
functional_test.go:475: kube-scheduler phase: Running
functional_test.go:485: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.27s)
TestFunctional/parallel/ConfigCmd (3.46s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:775: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210310191609-6496 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:775: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210310191609-6496 config get cpus
functional_test.go:775: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20210310191609-6496 config get cpus: exit status 14 (581.1019ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:775: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210310191609-6496 config set cpus 2
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:775: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210310191609-6496 config get cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:775: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210310191609-6496 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:775: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210310191609-6496 config get cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:775: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20210310191609-6496 config get cpus: exit status 14 (564.6781ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (3.46s)
TestFunctional/parallel/DryRun (9.44s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:613: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20210310191609-6496 --dry-run --memory 250MB --alsologtostderr --driver=docker
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:613: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20210310191609-6496 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (4.5465811s)
-- stdout --
	* [functional-20210310191609-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - MINIKUBE_LOCATION=10722
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0310 19:24:44.883201    9864 out.go:239] Setting OutFile to fd 2660 ...
	I0310 19:24:44.885192    9864 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 19:24:44.885192    9864 out.go:252] Setting ErrFile to fd 2672...
	I0310 19:24:44.885192    9864 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 19:24:44.917776    9864 out.go:246] Setting JSON to false
	I0310 19:24:44.920780    9864 start.go:108] hostinfo: {"hostname":"windows-server-1","uptime":29750,"bootTime":1615374534,"procs":116,"os":"windows","platform":"Microsoft Windows Server 2019 Datacenter","platformFamily":"Server","platformVersion":"10.0.17763 Build 17763","kernelVersion":"10.0.17763 Build 17763","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"9879231f-6171-435d-bab4-5b366cc6391b"}
	W0310 19:24:44.920780    9864 start.go:116] gopshost.Virtualization returned error: not implemented yet
	I0310 19:24:44.940603    9864 out.go:129] * [functional-20210310191609-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	I0310 19:24:44.944607    9864 out.go:129]   - MINIKUBE_LOCATION=10722
	I0310 19:24:44.946606    9864 driver.go:323] Setting default libvirt URI to qemu:///system
	I0310 19:24:45.609594    9864 docker.go:119] docker version: linux-20.10.2
	I0310 19:24:45.619310    9864 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0310 19:24:46.899025    9864 cli_runner.go:168] Completed: docker system info --format "{{json .}}": (1.2797169s)
	I0310 19:24:46.900260    9864 info.go:253] docker info: {ID:A5IN:YOAR:ZZ6F:3DAS:6BY6:XIDO:7SK4:7R4Q:IDKL:K3V5:AOE6:UJT2 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:52 SystemTime:2021-03-10 19:24:46.314177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:4.19.121-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:20973547520 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:C:\ProgramData\Docker\cli-plugins\docker-app.exe SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:C:\ProgramData\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:C:\ProgramData\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
	I0310 19:24:46.912908    9864 out.go:129] * Using the docker driver based on existing profile
	I0310 19:24:46.913432    9864 start.go:276] selected driver: docker
	I0310 19:24:46.913432    9864 start.go:718] validating driver "docker" against &{Name:functional-20210310191609-6496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:functional-20210310191609-6496 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.97 Port:8441 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0310 19:24:46.913988    9864 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0310 19:24:48.906503    9864 out.go:129] 
	W0310 19:24:48.907019    9864 out.go:191] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0310 19:24:48.909948    9864 out.go:129] 
** /stderr **
functional_test.go:624: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20210310191609-6496 --dry-run --alsologtostderr -v=1 --driver=docker
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:624: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20210310191609-6496 --dry-run --alsologtostderr -v=1 --driver=docker: (4.8884861s)
--- PASS: TestFunctional/parallel/DryRun (9.44s)
TestFunctional/parallel/StatusCmd (16.7s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:503: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210310191609-6496 status
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:503: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210310191609-6496 status: (6.6822571s)
functional_test.go:509: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210310191609-6496 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:509: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210310191609-6496 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (5.4204904s)
functional_test.go:520: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210310191609-6496 status -o json
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:520: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210310191609-6496 status -o json: (4.6011208s)
--- PASS: TestFunctional/parallel/StatusCmd (16.70s)
TestFunctional/parallel/LogsCmd (30.85s)
=== RUN   TestFunctional/parallel/LogsCmd
=== PAUSE TestFunctional/parallel/LogsCmd
=== CONT  TestFunctional/parallel/LogsCmd
functional_test.go:793: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210310191609-6496 logs
=== CONT  TestFunctional/parallel/LogsCmd
functional_test.go:793: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210310191609-6496 logs: (30.8457988s)
--- PASS: TestFunctional/parallel/LogsCmd (30.85s)
TestFunctional/parallel/AddonsCmd (2.03s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1082: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210310191609-6496 addons list
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1082: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210310191609-6496 addons list: (1.4946139s)
functional_test.go:1093: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210310191609-6496 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (2.03s)
TestFunctional/parallel/PersistentVolumeClaim (128.98s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
fn_pvc_test.go:43: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:335: "storage-provisioner" [b3c89307-430b-4b9e-bf19-ea94207564fe] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
fn_pvc_test.go:43: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.1681268s
fn_pvc_test.go:48: (dbg) Run:  kubectl --context functional-20210310191609-6496 get storageclass -o=json
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
fn_pvc_test.go:68: (dbg) Run:  kubectl --context functional-20210310191609-6496 apply -f testdata/storage-provisioner/pvc.yaml
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
fn_pvc_test.go:68: (dbg) Done: kubectl --context functional-20210310191609-6496 apply -f testdata/storage-provisioner/pvc.yaml: (1.9211579s)
fn_pvc_test.go:75: (dbg) Run:  kubectl --context functional-20210310191609-6496 get pvc myclaim -o=json
fn_pvc_test.go:75: (dbg) Run:  kubectl --context functional-20210310191609-6496 get pvc myclaim -o=json
fn_pvc_test.go:124: (dbg) Run:  kubectl --context functional-20210310191609-6496 apply -f testdata/storage-provisioner/pod.yaml
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
fn_pvc_test.go:124: (dbg) Done: kubectl --context functional-20210310191609-6496 apply -f testdata/storage-provisioner/pod.yaml: (1.1766772s)
fn_pvc_test.go:129: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:335: "sp-pod" [b61d403c-522c-49fc-9951-04d17371a179] Pending
helpers_test.go:335: "sp-pod" [b61d403c-522c-49fc-9951-04d17371a179] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:335: "sp-pod" [b61d403c-522c-49fc-9951-04d17371a179] Running
fn_pvc_test.go:129: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 1m44.1081823s
fn_pvc_test.go:99: (dbg) Run:  kubectl --context functional-20210310191609-6496 exec sp-pod -- touch /tmp/mount/foo
fn_pvc_test.go:105: (dbg) Run:  kubectl --context functional-20210310191609-6496 delete -f testdata/storage-provisioner/pod.yaml
fn_pvc_test.go:105: (dbg) Done: kubectl --context functional-20210310191609-6496 delete -f testdata/storage-provisioner/pod.yaml: (3.5382084s)
fn_pvc_test.go:124: (dbg) Run:  kubectl --context functional-20210310191609-6496 apply -f testdata/storage-provisioner/pod.yaml
fn_pvc_test.go:129: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:335: "sp-pod" [84edd5c9-8bed-47a9-a941-79812a7a92a2] Pending
helpers_test.go:335: "sp-pod" [84edd5c9-8bed-47a9-a941-79812a7a92a2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:335: "sp-pod" [84edd5c9-8bed-47a9-a941-79812a7a92a2] Running
fn_pvc_test.go:129: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.0339277s
fn_pvc_test.go:113: (dbg) Run:  kubectl --context functional-20210310191609-6496 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (128.98s)
TestFunctional/parallel/SSHCmd (9.81s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210310191609-6496 ssh "echo hello"
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210310191609-6496 ssh "echo hello": (5.0991935s)
functional_test.go:1132: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210310191609-6496 ssh "cat /etc/hostname"
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1132: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210310191609-6496 ssh "cat /etc/hostname": (4.7042802s)
--- PASS: TestFunctional/parallel/SSHCmd (9.81s)
TestFunctional/parallel/MySQL (106.33s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1154: (dbg) Run:  kubectl --context functional-20210310191609-6496 replace --force -f testdata\mysql.yaml
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1154: (dbg) Done: kubectl --context functional-20210310191609-6496 replace --force -f testdata\mysql.yaml: (3.7731746s)
functional_test.go:1159: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:335: "mysql-9bbbc5bbb-fk6dk" [52ef5685-caa1-41a7-baad-5ef0345dab0c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:335: "mysql-9bbbc5bbb-fk6dk" [52ef5685-caa1-41a7-baad-5ef0345dab0c] Running
functional_test.go:1159: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 1m16.1045962s
functional_test.go:1166: (dbg) Run:  kubectl --context functional-20210310191609-6496 exec mysql-9bbbc5bbb-fk6dk -- mysql -ppassword -e "show databases;"
functional_test.go:1166: (dbg) Non-zero exit: kubectl --context functional-20210310191609-6496 exec mysql-9bbbc5bbb-fk6dk -- mysql -ppassword -e "show databases;": exit status 1 (1.1120562s)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1166: (dbg) Run:  kubectl --context functional-20210310191609-6496 exec mysql-9bbbc5bbb-fk6dk -- mysql -ppassword -e "show databases;"
functional_test.go:1166: (dbg) Non-zero exit: kubectl --context functional-20210310191609-6496 exec mysql-9bbbc5bbb-fk6dk -- mysql -ppassword -e "show databases;": exit status 1 (799.8949ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1166: (dbg) Run:  kubectl --context functional-20210310191609-6496 exec mysql-9bbbc5bbb-fk6dk -- mysql -ppassword -e "show databases;"
functional_test.go:1166: (dbg) Non-zero exit: kubectl --context functional-20210310191609-6496 exec mysql-9bbbc5bbb-fk6dk -- mysql -ppassword -e "show databases;": exit status 1 (979.3711ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1166: (dbg) Run:  kubectl --context functional-20210310191609-6496 exec mysql-9bbbc5bbb-fk6dk -- mysql -ppassword -e "show databases;"
functional_test.go:1166: (dbg) Non-zero exit: kubectl --context functional-20210310191609-6496 exec mysql-9bbbc5bbb-fk6dk -- mysql -ppassword -e "show databases;": exit status 1 (1.1481606s)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1166: (dbg) Run:  kubectl --context functional-20210310191609-6496 exec mysql-9bbbc5bbb-fk6dk -- mysql -ppassword -e "show databases;"

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1166: (dbg) Non-zero exit: kubectl --context functional-20210310191609-6496 exec mysql-9bbbc5bbb-fk6dk -- mysql -ppassword -e "show databases;": exit status 1 (1.1664261s)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1166: (dbg) Run:  kubectl --context functional-20210310191609-6496 exec mysql-9bbbc5bbb-fk6dk -- mysql -ppassword -e "show databases;"
functional_test.go:1166: (dbg) Non-zero exit: kubectl --context functional-20210310191609-6496 exec mysql-9bbbc5bbb-fk6dk -- mysql -ppassword -e "show databases;": exit status 1 (1.7040471s)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1166: (dbg) Run:  kubectl --context functional-20210310191609-6496 exec mysql-9bbbc5bbb-fk6dk -- mysql -ppassword -e "show databases;"
functional_test.go:1166: (dbg) Done: kubectl --context functional-20210310191609-6496 exec mysql-9bbbc5bbb-fk6dk -- mysql -ppassword -e "show databases;": (1.090997s)
--- PASS: TestFunctional/parallel/MySQL (106.33s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (5.6s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1250: Checking for existence of /etc/test/nested/copy/6496/hosts within VM
functional_test.go:1251: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210310191609-6496 ssh "sudo cat /etc/test/nested/copy/6496/hosts"

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1251: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210310191609-6496 ssh "sudo cat /etc/test/nested/copy/6496/hosts": (5.5963042s)
functional_test.go:1256: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (5.60s)

TestFunctional/parallel/CertSync (14.03s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1291: Checking for existence of /etc/ssl/certs/6496.pem within VM
functional_test.go:1292: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210310191609-6496 ssh "sudo cat /etc/ssl/certs/6496.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1292: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210310191609-6496 ssh "sudo cat /etc/ssl/certs/6496.pem": (5.4896072s)
functional_test.go:1291: Checking for existence of /usr/share/ca-certificates/6496.pem within VM
functional_test.go:1292: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210310191609-6496 ssh "sudo cat /usr/share/ca-certificates/6496.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1292: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210310191609-6496 ssh "sudo cat /usr/share/ca-certificates/6496.pem": (4.7169982s)
functional_test.go:1291: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1292: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210310191609-6496 ssh "sudo cat /etc/ssl/certs/51391683.0"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1292: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210310191609-6496 ssh "sudo cat /etc/ssl/certs/51391683.0": (3.8234054s)
--- PASS: TestFunctional/parallel/CertSync (14.03s)

TestFunctional/parallel/NodeLabels (0.74s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:152: (dbg) Run:  kubectl --context functional-20210310191609-6496 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.74s)

TestFunctional/parallel/LoadImage (22.78s)

=== RUN   TestFunctional/parallel/LoadImage
=== PAUSE TestFunctional/parallel/LoadImage

=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:175: (dbg) Run:  docker pull busybox:latest

=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:175: (dbg) Done: docker pull busybox:latest: (2.5570781s)
functional_test.go:182: (dbg) Run:  docker tag busybox:latest busybox:functional-20210310191609-6496

=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:188: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210310191609-6496 image load busybox:functional-20210310191609-6496

=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:188: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210310191609-6496 image load busybox:functional-20210310191609-6496: (14.3311019s)
functional_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p functional-20210310191609-6496 -- docker image inspect busybox:functional-20210310191609-6496

=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe ssh -p functional-20210310191609-6496 -- docker image inspect busybox:functional-20210310191609-6496: (5.1070738s)
--- PASS: TestFunctional/parallel/LoadImage (22.78s)

TestFunctional/parallel/UpdateContextCmd/no_changes (2.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1385: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210310191609-6496 update-context --alsologtostderr -v=2

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1385: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210310191609-6496 update-context --alsologtostderr -v=2: (2.1554431s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.16s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel

=== CONT  TestFunctional/parallel/TunnelCmd/serial/StartTunnel
fn_tunnel_cmd_test.go:125: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-20210310191609-6496 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.03s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (1.94s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1385: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210310191609-6496 update-context --alsologtostderr -v=2

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1385: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210310191609-6496 update-context --alsologtostderr -v=2: (1.9105092s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (1.94s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (2.07s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1385: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210310191609-6496 update-context --alsologtostderr -v=2

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1385: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210310191609-6496 update-context --alsologtostderr -v=2: (2.061648s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.07s)

TestFunctional/parallel/ProfileCmd/profile_not_create (5.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:819: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:819: (dbg) Done: out/minikube-windows-amd64.exe profile lis: (1.0857925s)
functional_test.go:823: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json

=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:823: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (4.3810272s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (5.47s)

TestFunctional/parallel/ProfileCmd/profile_list (6.21s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:857: (dbg) Run:  out/minikube-windows-amd64.exe profile list

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:857: (dbg) Done: out/minikube-windows-amd64.exe profile list: (5.6074278s)
functional_test.go:862: Took "5.6074278s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:871: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:876: Took "602.3565ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (6.21s)

TestFunctional/parallel/ProfileCmd/profile_json_output (5.77s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:907: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json

=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:907: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (5.295221s)
functional_test.go:912: Took "5.295221s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:920: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:925: Took "477.0629ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (5.77s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.25s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
fn_tunnel_cmd_test.go:163: (dbg) Run:  kubectl --context functional-20210310191609-6496 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.25s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
fn_tunnel_cmd_test.go:363: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-20210310191609-6496 tunnel --alsologtostderr] ...
helpers_test.go:499: unable to kill pid 5716: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (3.17s)

=== RUN   TestErrorJSONOutput
json_output_test.go:144: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-20210310193422-6496 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:144: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-20210310193422-6496 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (391.6483ms)

-- stdout --
	{"data":{"currentstep":"0","message":"[json-output-error-20210310193422-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"24c60d5c-3f82-4f2b-a55d-8cf7762a047b","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"MINIKUBE_LOCATION=10722"},"datacontenttype":"application/json","id":"6c7a78d3-2cc1-49a4-81b3-1533508393a7","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""},"datacontenttype":"application/json","id":"7605f871-4259-4815-ae85-cb1ca0fb2539","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}

-- /stdout --
helpers_test.go:171: Cleaning up "json-output-error-20210310193422-6496" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-20210310193422-6496
helpers_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-20210310193422-6496: (2.7769358s)
--- PASS: TestErrorJSONOutput (3.17s)

TestKicCustomNetwork/create_custom_network (180.78s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-20210310193425-6496 --network=
kic_custom_network_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-20210310193425-6496 --network=: (2m49.3744971s)
kic_custom_network_test.go:99: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:171: Cleaning up "docker-network-20210310193425-6496" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-20210310193425-6496
helpers_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-20210310193425-6496: (10.8739505s)
--- PASS: TestKicCustomNetwork/create_custom_network (180.78s)

TestKicCustomNetwork/use_default_bridge_network (177.88s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-20210310193726-6496 --network=bridge
kic_custom_network_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-20210310193726-6496 --network=bridge: (2m47.2100473s)
kic_custom_network_test.go:99: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:171: Cleaning up "docker-network-20210310193726-6496" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-20210310193726-6496
helpers_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-20210310193726-6496: (10.1276176s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (177.88s)

TestKicExistingNetwork (178.46s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:99: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p existing-network-20210310194026-6496 --network=existing-network
kic_custom_network_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p existing-network-20210310194026-6496 --network=existing-network: (2m44.6143455s)
helpers_test.go:171: Cleaning up "existing-network-20210310194026-6496" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe delete -p existing-network-20210310194026-6496
helpers_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe delete -p existing-network-20210310194026-6496: (10.8335482s)
--- PASS: TestKicExistingNetwork (178.46s)

TestMainNoArgs (0.35s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.35s)

TestMultiNode/serial/FreshStart2Nodes (356.05s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:73: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20210310194323-6496 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker
multinode_test.go:73: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20210310194323-6496 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker: (5m50.8143726s)
multinode_test.go:79: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210310194323-6496 status --alsologtostderr
multinode_test.go:79: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210310194323-6496 status --alsologtostderr: (5.2360517s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (356.05s)

TestMultiNode/serial/AddNode (115.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:97: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-20210310194323-6496 -v 3 --alsologtostderr
multinode_test.go:97: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-20210310194323-6496 -v 3 --alsologtostderr: (1m49.0783256s)
multinode_test.go:103: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210310194323-6496 status --alsologtostderr
multinode_test.go:103: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210310194323-6496 status --alsologtostderr: (6.8697796s)
--- PASS: TestMultiNode/serial/AddNode (115.95s)

TestMultiNode/serial/ProfileList (2.85s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:118: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:118: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.8477243s)
--- PASS: TestMultiNode/serial/ProfileList (2.85s)

TestMultiNode/serial/StopNode (14.46s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:157: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210310194323-6496 node stop m03
multinode_test.go:157: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210310194323-6496 node stop m03: (3.9231148s)
multinode_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210310194323-6496 status
multinode_test.go:163: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20210310194323-6496 status: exit status 7 (5.2946799s)

-- stdout --
	multinode-20210310194323-6496
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	timeToStop: Nonexistent
	
	multinode-20210310194323-6496-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210310194323-6496-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:170: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210310194323-6496 status --alsologtostderr
multinode_test.go:170: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20210310194323-6496 status --alsologtostderr: exit status 7 (5.2397485s)

-- stdout --
	multinode-20210310194323-6496
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	timeToStop: Nonexistent
	
	multinode-20210310194323-6496-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210310194323-6496-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0310 19:51:27.469728    5368 out.go:239] Setting OutFile to fd 2952 ...
	I0310 19:51:27.475701    5368 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 19:51:27.475701    5368 out.go:252] Setting ErrFile to fd 2956...
	I0310 19:51:27.475701    5368 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 19:51:27.485711    5368 out.go:246] Setting JSON to false
	I0310 19:51:27.486703    5368 mustload.go:66] Loading cluster: multinode-20210310194323-6496
	I0310 19:51:27.490553    5368 status.go:241] checking status of multinode-20210310194323-6496 ...
	I0310 19:51:27.512132    5368 cli_runner.go:115] Run: docker container inspect multinode-20210310194323-6496 --format={{.State.Status}}
	I0310 19:51:28.001051    5368 status.go:317] multinode-20210310194323-6496 host status = "Running" (err=<nil>)
	I0310 19:51:28.001051    5368 host.go:66] Checking if "multinode-20210310194323-6496" exists ...
	I0310 19:51:28.011433    5368 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210310194323-6496
	I0310 19:51:28.528239    5368 host.go:66] Checking if "multinode-20210310194323-6496" exists ...
	I0310 19:51:28.541041    5368 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0310 19:51:28.547651    5368 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	I0310 19:51:29.074470    5368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55034 SSHKeyPath:C:\Users\jenkins\.minikube\machines\multinode-20210310194323-6496\id_rsa Username:docker}
	I0310 19:51:29.247897    5368 ssh_runner.go:149] Run: systemctl --version
	I0310 19:51:29.281810    5368 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0310 19:51:29.331575    5368 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20210310194323-6496
	I0310 19:51:29.834909    5368 kubeconfig.go:93] found "multinode-20210310194323-6496" server: "https://127.0.0.1:55031"
	I0310 19:51:29.834909    5368 api_server.go:146] Checking apiserver status ...
	I0310 19:51:29.848595    5368 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0310 19:51:29.928872    5368 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/2578/cgroup
	I0310 19:51:29.974858    5368 api_server.go:162] apiserver freezer: "7:freezer:/docker/86f284706e15db565cb427f12276c0b374db713559daa226eba17b53d718b32f/kubepods/burstable/pod906db1d630d3e27d87ec5bf8a9967c21/11af52e50d91921ee55c63ac657051f9330b1f34cfda62cd12b45518def1c750"
	I0310 19:51:29.985333    5368 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/86f284706e15db565cb427f12276c0b374db713559daa226eba17b53d718b32f/kubepods/burstable/pod906db1d630d3e27d87ec5bf8a9967c21/11af52e50d91921ee55c63ac657051f9330b1f34cfda62cd12b45518def1c750/freezer.state
	I0310 19:51:30.019586    5368 api_server.go:184] freezer state: "THAWED"
	I0310 19:51:30.020116    5368 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:55031/healthz ...
	I0310 19:51:30.053610    5368 api_server.go:241] https://127.0.0.1:55031/healthz returned 200:
	ok
	I0310 19:51:30.053610    5368 status.go:402] multinode-20210310194323-6496 apiserver status = Running (err=<nil>)
	I0310 19:51:30.053610    5368 status.go:243] multinode-20210310194323-6496 status: &{Name:multinode-20210310194323-6496 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop:Nonexistent}
	I0310 19:51:30.054264    5368 status.go:241] checking status of multinode-20210310194323-6496-m02 ...
	I0310 19:51:30.069451    5368 cli_runner.go:115] Run: docker container inspect multinode-20210310194323-6496-m02 --format={{.State.Status}}
	I0310 19:51:30.571345    5368 status.go:317] multinode-20210310194323-6496-m02 host status = "Running" (err=<nil>)
	I0310 19:51:30.571345    5368 host.go:66] Checking if "multinode-20210310194323-6496-m02" exists ...
	I0310 19:51:30.582261    5368 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210310194323-6496-m02
	I0310 19:51:31.111984    5368 host.go:66] Checking if "multinode-20210310194323-6496-m02" exists ...
	I0310 19:51:31.122668    5368 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0310 19:51:31.131034    5368 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210310194323-6496-m02
	I0310 19:51:31.640271    5368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55039 SSHKeyPath:C:\Users\jenkins\.minikube\machines\multinode-20210310194323-6496-m02\id_rsa Username:docker}
	I0310 19:51:31.797451    5368 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0310 19:51:31.828175    5368 status.go:243] multinode-20210310194323-6496-m02 status: &{Name:multinode-20210310194323-6496-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop:Nonexistent}
	I0310 19:51:31.828953    5368 status.go:241] checking status of multinode-20210310194323-6496-m03 ...
	I0310 19:51:31.846435    5368 cli_runner.go:115] Run: docker container inspect multinode-20210310194323-6496-m03 --format={{.State.Status}}
	I0310 19:51:32.349129    5368 status.go:317] multinode-20210310194323-6496-m03 host status = "Stopped" (err=<nil>)
	I0310 19:51:32.349129    5368 status.go:330] host is not running, skipping remaining checks
	I0310 19:51:32.349129    5368 status.go:243] multinode-20210310194323-6496-m03 status: &{Name:multinode-20210310194323-6496-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop:Nonexistent}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (14.46s)

TestMultiNode/serial/StartAfterStop (48.47s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:190: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210310194323-6496 node start m03 --alsologtostderr
multinode_test.go:200: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210310194323-6496 node start m03 --alsologtostderr: (40.7985511s)
multinode_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210310194323-6496 status
multinode_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210310194323-6496 status: (6.5507421s)
multinode_test.go:221: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (48.47s)

TestMultiNode/serial/DeleteNode (23.83s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:308: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210310194323-6496 node delete m03
multinode_test.go:308: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210310194323-6496 node delete m03: (18.2580976s)
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210310194323-6496 status --alsologtostderr
multinode_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210310194323-6496 status --alsologtostderr: (4.6029338s)
multinode_test.go:328: (dbg) Run:  docker volume ls
multinode_test.go:338: (dbg) Run:  kubectl get nodes
multinode_test.go:346: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (23.83s)

TestMultiNode/serial/StopMultiNode (21.02s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:229: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210310194323-6496 stop
multinode_test.go:229: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20210310194323-6496 stop: (18.2998115s)
multinode_test.go:235: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210310194323-6496 status
multinode_test.go:235: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20210310194323-6496 status: exit status 7 (1.3625402s)

-- stdout --
	multinode-20210310194323-6496
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	timeToStop: Nonexistent
	
	multinode-20210310194323-6496-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:242: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20210310194323-6496 status --alsologtostderr
multinode_test.go:242: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20210310194323-6496 status --alsologtostderr: exit status 7 (1.356591s)

-- stdout --
	multinode-20210310194323-6496
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	timeToStop: Nonexistent
	
	multinode-20210310194323-6496-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0310 19:53:04.676489    8404 out.go:239] Setting OutFile to fd 2968 ...
	I0310 19:53:04.678462    8404 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 19:53:04.678462    8404 out.go:252] Setting ErrFile to fd 2512...
	I0310 19:53:04.679487    8404 out.go:286] TERM=,COLORTERM=, which probably does not support color
	I0310 19:53:04.687502    8404 out.go:246] Setting JSON to false
	I0310 19:53:04.687502    8404 mustload.go:66] Loading cluster: multinode-20210310194323-6496
	I0310 19:53:04.688508    8404 status.go:241] checking status of multinode-20210310194323-6496 ...
	I0310 19:53:04.713521    8404 cli_runner.go:115] Run: docker container inspect multinode-20210310194323-6496 --format={{.State.Status}}
	I0310 19:53:05.174919    8404 status.go:317] multinode-20210310194323-6496 host status = "Stopped" (err=<nil>)
	I0310 19:53:05.175339    8404 status.go:330] host is not running, skipping remaining checks
	I0310 19:53:05.175339    8404 status.go:243] multinode-20210310194323-6496 status: &{Name:multinode-20210310194323-6496 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop:Nonexistent}
	I0310 19:53:05.175339    8404 status.go:241] checking status of multinode-20210310194323-6496-m02 ...
	I0310 19:53:05.194066    8404 cli_runner.go:115] Run: docker container inspect multinode-20210310194323-6496-m02 --format={{.State.Status}}
	I0310 19:53:05.674148    8404 status.go:317] multinode-20210310194323-6496-m02 host status = "Stopped" (err=<nil>)
	I0310 19:53:05.674148    8404 status.go:330] host is not running, skipping remaining checks
	I0310 19:53:05.674148    8404 status.go:243] multinode-20210310194323-6496-m02 status: &{Name:multinode-20210310194323-6496-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop:Nonexistent}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.02s)

TestMultiNode/serial/ValidateNameConflict (195.83s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:356: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20210310194323-6496
multinode_test.go:365: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20210310194323-6496-m02 --driver=docker
multinode_test.go:365: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-20210310194323-6496-m02 --driver=docker: exit status 14 (410.4568ms)

-- stdout --
	* [multinode-20210310194323-6496-m02] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763
	  - MINIKUBE_LOCATION=10722
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20210310194323-6496-m02' is duplicated with machine name 'multinode-20210310194323-6496-m02' in profile 'multinode-20210310194323-6496'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:373: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20210310194323-6496-m03 --driver=docker
multinode_test.go:373: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20210310194323-6496-m03 --driver=docker: (3m1.1546617s)
multinode_test.go:380: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-20210310194323-6496
multinode_test.go:380: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-20210310194323-6496: exit status 80 (2.4079283s)

-- stdout --
	* Adding node m03 to cluster multinode-20210310194323-6496
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20210310194323-6496-m03 already exists in multinode-20210310194323-6496-m03 profile
	* 
	* If the above advice does not help, please let us know: 
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
multinode_test.go:385: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-20210310194323-6496-m03
multinode_test.go:385: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-20210310194323-6496-m03: (11.4672395s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (195.83s)

TestDebPackageInstall/install_amd64_debian:sid/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian:sid/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:sid/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver
--- PASS: TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver (0.00s)

TestDebPackageInstall/install_amd64_debian:latest/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian:latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:latest/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver
--- PASS: TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver (0.00s)

TestDebPackageInstall/install_amd64_debian:10/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian:10/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:10/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:10/kvm2-driver (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian:10/kvm2-driver
--- PASS: TestDebPackageInstall/install_amd64_debian:10/kvm2-driver (0.00s)

TestDebPackageInstall/install_amd64_debian:9/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian:9/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:9/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:9/kvm2-driver (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian:9/kvm2-driver
--- PASS: TestDebPackageInstall/install_amd64_debian:9/kvm2-driver (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:latest/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:latest/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver (0.00s)

TestPreload (342.59s)
=== RUN   TestPreload
preload_test.go:47: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-20210310200323-6496 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.17.0
preload_test.go:47: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-20210310200323-6496 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.17.0: (3m26.4303235s)
preload_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p test-preload-20210310200323-6496 -- docker pull busybox
preload_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe ssh -p test-preload-20210310200323-6496 -- docker pull busybox: (4.6057239s)
preload_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-20210310200323-6496 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --kubernetes-version=v1.17.3
preload_test.go:70: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-20210310200323-6496 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --kubernetes-version=v1.17.3: (1m56.7798579s)
preload_test.go:79: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p test-preload-20210310200323-6496 -- docker images
preload_test.go:79: (dbg) Done: out/minikube-windows-amd64.exe ssh -p test-preload-20210310200323-6496 -- docker images: (3.0406329s)
helpers_test.go:171: Cleaning up "test-preload-20210310200323-6496" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-20210310200323-6496
helpers_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-20210310200323-6496: (11.7269252s)
--- PASS: TestPreload (342.59s)

TestScheduledStopWindows (210.02s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:124: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-20210310200905-6496 --memory=1900 --driver=docker
scheduled_stop_test.go:124: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-20210310200905-6496 --memory=1900 --driver=docker: (2m45.8574s)
scheduled_stop_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-20210310200905-6496 --schedule 5m
scheduled_stop_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-20210310200905-6496 --schedule 5m: (2.4131578s)
scheduled_stop_test.go:187: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-20210310200905-6496 -n scheduled-stop-20210310200905-6496
scheduled_stop_test.go:187: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-20210310200905-6496 -n scheduled-stop-20210310200905-6496: (2.8750116s)
scheduled_stop_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-20210310200905-6496 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-20210310200905-6496 -- sudo systemctl show minikube-scheduled-stop --no-page: (2.6331028s)
scheduled_stop_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-20210310200905-6496 --schedule 5s
scheduled_stop_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-20210310200905-6496 --schedule 5s: (2.9271137s)
scheduled_stop_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-20210310200905-6496 -n scheduled-stop-20210310200905-6496
scheduled_stop_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-20210310200905-6496 -n scheduled-stop-20210310200905-6496: exit status 3 (4.4549063s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0310 20:12:12.023205    9488 status.go:363] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: EOF
	E0310 20:12:12.026643    9488 status.go:235] status error: NewSession: new client: new client: ssh: handshake failed: EOF

** /stderr **
scheduled_stop_test.go:172: status error: exit status 3 (may be ok)
scheduled_stop_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-20210310200905-6496 -n scheduled-stop-20210310200905-6496
scheduled_stop_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-20210310200905-6496 -n scheduled-stop-20210310200905-6496: exit status 3 (4.2643528s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0310 20:12:17.253802    3368 status.go:363] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: EOF
	E0310 20:12:17.260207    3368 status.go:235] status error: NewSession: new client: new client: ssh: handshake failed: EOF

** /stderr **
scheduled_stop_test.go:172: status error: exit status 3 (may be ok)
scheduled_stop_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-20210310200905-6496 -n scheduled-stop-20210310200905-6496
scheduled_stop_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-20210310200905-6496 -n scheduled-stop-20210310200905-6496: exit status 3 (4.2592797s)

-- stdout --
	Error

                                                
** stderr ** 
	E0310 20:12:22.696040    3808 status.go:363] failed to get storage capacity of /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "scheduled-stop-20210310200905-6496": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-20210310200905-6496: exit status 1
	stdout:
	
	
	stderr:
	Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
	E0310 20:12:22.700283    3808 status.go:235] status error: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "scheduled-stop-20210310200905-6496": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-20210310200905-6496: exit status 1
	stdout:
	
	
	stderr:
	Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil

** /stderr **
scheduled_stop_test.go:172: status error: exit status 3 (may be ok)
scheduled_stop_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-20210310200905-6496 -n scheduled-stop-20210310200905-6496
scheduled_stop_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-20210310200905-6496 -n scheduled-stop-20210310200905-6496: exit status 7 (877.4488ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:172: status error: exit status 7 (may be ok)
scheduled_stop_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-20210310200905-6496 -n scheduled-stop-20210310200905-6496
scheduled_stop_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-20210310200905-6496 -n scheduled-stop-20210310200905-6496: exit status 7 (892.1948ms)

-- stdout --
	Nonexistent

-- /stdout --
scheduled_stop_test.go:172: status error: exit status 7 (may be ok)
helpers_test.go:171: Cleaning up "scheduled-stop-20210310200905-6496" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-20210310200905-6496
helpers_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-20210310200905-6496: (9.6227743s)
--- PASS: TestScheduledStopWindows (210.02s)

TestInsufficientStorage (39.8s)

=== RUN   TestInsufficientStorage
status_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-20210310201557-6496 --memory=1900 --output=json --wait=true --driver=docker
status_test.go:49: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-20210310201557-6496 --memory=1900 --output=json --wait=true --driver=docker: exit status 26 (26.3878724s)

-- stdout --
	{"data":{"currentstep":"0","message":"[insufficient-storage-20210310201557-6496] minikube v1.18.1 on Microsoft Windows Server 2019 Datacenter 10.0.17763 Build 17763","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"1bfe60d1-62b9-4367-af6f-267025d5c393","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"MINIKUBE_LOCATION=10722"},"datacontenttype":"application/json","id":"cd095cd4-6156-4016-bdd9-5092cdd39654","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"},"datacontenttype":"application/json","id":"67d6d109-6de0-46e2-8feb-c67be03b0fcc","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"},"datacontenttype":"application/json","id":"71474fc0-69ef-46cf-9f22-89d161aaab0a","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20210310201557-6496 in cluster insufficient-storage-20210310201557-6496","name":"Starting Node","totalsteps":"19"},"datacontenttype":"application/json","id":"3ca7a1c4-70ea-4f17-92a2-142dde189ad7","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=1900MB) ...","name":"Creating Container","totalsteps":"19"},"datacontenttype":"application/json","id":"d907198c-0f68-4992-9a70-00d454add30b","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity)","name":"RSRC_DOCKER_STORAGE","url":""},"datacontenttype":"application/json","id":"16ee6a74-0b45-42b1-8712-c71ac8151609","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}

-- /stdout --
status_test.go:75: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-20210310201557-6496 --output=json --layout=cluster
status_test.go:75: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-20210310201557-6496 --output=json --layout=cluster: exit status 7 (2.5971565s)

-- stdout --
	{"Name":"insufficient-storage-20210310201557-6496","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=1900MB) ...","BinaryVersion":"v1.18.1","TimeToStop":"Nonexistent","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210310201557-6496","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0310 20:16:26.702625    7856 status.go:396] kubeconfig endpoint: extract IP: "insufficient-storage-20210310201557-6496" does not appear in C:\Users\jenkins/.kube/config

** /stderr **
status_test.go:75: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-20210310201557-6496 --output=json --layout=cluster
status_test.go:75: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-20210310201557-6496 --output=json --layout=cluster: exit status 7 (2.5949836s)

-- stdout --
	{"Name":"insufficient-storage-20210310201557-6496","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.18.1","TimeToStop":"Nonexistent","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210310201557-6496","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0310 20:16:29.290502    7716 status.go:396] kubeconfig endpoint: extract IP: "insufficient-storage-20210310201557-6496" does not appear in C:\Users\jenkins/.kube/config
	E0310 20:16:29.349037    7716 status.go:540] unable to read event log: stat: CreateFile C:\Users\jenkins\.minikube\profiles\insufficient-storage-20210310201557-6496\events.json: The system cannot find the file specified.

** /stderr **
helpers_test.go:171: Cleaning up "insufficient-storage-20210310201557-6496" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-20210310201557-6496
helpers_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-20210310201557-6496: (8.2124152s)
--- PASS: TestInsufficientStorage (39.80s)
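Editor's note: the `minikube status --output=json --layout=cluster` payload captured above is plain JSON and can be consumed programmatically when scripting against runs like this one. A minimal sketch, using the exact payload from the second status call above (field names are as emitted by minikube v1.18.1; this is an illustration, not part of the test run):

```python
import json

# Cluster-layout status payload, copied verbatim from the run above.
payload = '{"Name":"insufficient-storage-20210310201557-6496","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.18.1","TimeToStop":"Nonexistent","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210310201557-6496","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'

status = json.loads(payload)
print(status["StatusName"])  # InsufficientStorage
# Walk per-node component health as reported in the payload.
for node in status["Nodes"]:
    for name, component in node["Components"].items():
        print(node["Name"], name, component["StatusName"])
```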

TestStartStop/group/old-k8s-version/serial/Stop (21.82s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:170: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-20210310204459-6496 --alsologtostderr -v=3
start_stop_delete_test.go:170: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-20210310204459-6496 --alsologtostderr -v=3: (21.815339s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (21.82s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (2.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20210310204459-6496 -n old-k8s-version-20210310204459-6496
start_stop_delete_test.go:180: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20210310204459-6496 -n old-k8s-version-20210310204459-6496: exit status 7 (1.0778692s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:180: status error: exit status 7 (may be ok)
start_stop_delete_test.go:187: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-20210310204459-6496
start_stop_delete_test.go:187: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-20210310204459-6496: (1.042471s)
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (2.12s)

TestStartStop/group/embed-certs/serial/DeployApp (199s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:164: (dbg) Run:  kubectl --context embed-certs-20210310205017-6496 create -f testdata\busybox.yaml

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:164: (dbg) Done: kubectl --context embed-certs-20210310205017-6496 create -f testdata\busybox.yaml: (8.1330268s)
start_stop_delete_test.go:164: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
helpers_test.go:335: "busybox" [6db164f1-ae24-4e17-af81-56bd01054888] Pending

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
helpers_test.go:335: "busybox" [6db164f1-ae24-4e17-af81-56bd01054888] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
helpers_test.go:335: "busybox" [6db164f1-ae24-4e17-af81-56bd01054888] Running

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:164: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 2m49.9866995s
start_stop_delete_test.go:164: (dbg) Run:  kubectl --context embed-certs-20210310205017-6496 exec busybox -- /bin/sh -c "ulimit -n"

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:164: (dbg) Done: kubectl --context embed-certs-20210310205017-6496 exec busybox -- /bin/sh -c "ulimit -n": (20.8572232s)
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (199.00s)

TestStoppedBinaryUpgrade/MinikubeLogs (88.4s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:202: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-20210310201637-6496
version_upgrade_test.go:202: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-20210310201637-6496: (1m28.395842s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (88.40s)

TestStartStop/group/embed-certs/serial/Stop (35.89s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:170: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-20210310205017-6496 --alsologtostderr -v=3

=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:170: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-20210310205017-6496 --alsologtostderr -v=3: (35.8901345s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (35.89s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (2.47s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20210310205017-6496 -n embed-certs-20210310205017-6496
start_stop_delete_test.go:180: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20210310205017-6496 -n embed-certs-20210310205017-6496: exit status 7 (1.2286944s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:180: status error: exit status 7 (may be ok)
start_stop_delete_test.go:187: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-20210310205017-6496
start_stop_delete_test.go:187: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-20210310205017-6496: (1.2372218s)
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (2.47s)

TestStartStop/group/no-preload/serial/Stop (34.17s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:170: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-20210310204947-6496 --alsologtostderr -v=3

=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:170: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-20210310204947-6496 --alsologtostderr -v=3: (34.1665584s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (34.17s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (2.57s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20210310204947-6496 -n no-preload-20210310204947-6496
start_stop_delete_test.go:180: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20210310204947-6496 -n no-preload-20210310204947-6496: exit status 7 (1.2013803s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:180: status error: exit status 7 (may be ok)
start_stop_delete_test.go:187: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-20210310204947-6496

=== CONT  TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:187: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-20210310204947-6496: (1.3627979s)
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (2.57s)

TestNetworkPlugins/group/enable-default-cni/Start (1241.44s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-20210310212126-6496 --memory=1800 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p enable-default-cni-20210310212126-6496 --memory=1800 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker: (20m41.437539s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (1241.44s)

TestNetworkPlugins/group/bridge/Start (793.04s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-20210310212817-6496 --memory=1800 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p bridge-20210310212817-6496 --memory=1800 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker: (13m13.0411441s)
--- PASS: TestNetworkPlugins/group/bridge/Start (793.04s)

TestNetworkPlugins/group/kubenet/Start (715.27s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-20210310213042-6496 --memory=1800 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker

=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p kubenet-20210310213042-6496 --memory=1800 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker: (11m55.2680872s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (715.27s)

TestNetworkPlugins/group/bridge/KubeletFlags (3.53s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p bridge-20210310212817-6496 "pgrep -a kubelet"
net_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe ssh -p bridge-20210310212817-6496 "pgrep -a kubelet": (3.5310918s)
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (3.53s)

TestNetworkPlugins/group/bridge/NetCatPod (61.64s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:110: (dbg) Run:  kubectl --context bridge-20210310212817-6496 replace --force -f testdata\netcat-deployment.yaml
net_test.go:110: (dbg) Done: kubectl --context bridge-20210310212817-6496 replace --force -f testdata\netcat-deployment.yaml: (1.9781653s)
net_test.go:124: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:335: "netcat-66fbc655d5-gdgnx" [da4cad38-1b4f-4e64-943f-5c7cdc0864de] Pending
helpers_test.go:335: "netcat-66fbc655d5-gdgnx" [da4cad38-1b4f-4e64-943f-5c7cdc0864de] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
helpers_test.go:335: "netcat-66fbc655d5-gdgnx" [da4cad38-1b4f-4e64-943f-5c7cdc0864de] Running
net_test.go:124: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 59.0840866s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (61.64s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (3.74s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p enable-default-cni-20210310212126-6496 "pgrep -a kubelet"
net_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe ssh -p enable-default-cni-20210310212126-6496 "pgrep -a kubelet": (3.7396951s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (3.74s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (58.34s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:110: (dbg) Run:  kubectl --context enable-default-cni-20210310212126-6496 replace --force -f testdata\netcat-deployment.yaml
net_test.go:110: (dbg) Done: kubectl --context enable-default-cni-20210310212126-6496 replace --force -f testdata\netcat-deployment.yaml: (1.7368408s)
net_test.go:124: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:335: "netcat-66fbc655d5-pm8nv" [29b5afae-384d-4155-a4b3-6568f311a2e3] Pending
helpers_test.go:335: "netcat-66fbc655d5-pm8nv" [29b5afae-384d-4155-a4b3-6568f311a2e3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:335: "netcat-66fbc655d5-pm8nv" [29b5afae-384d-4155-a4b3-6568f311a2e3] Running
net_test.go:124: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 56.0648704s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (58.34s)

TestNetworkPlugins/group/bridge/DNS (1.36s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:141: (dbg) Run:  kubectl --context bridge-20210310212817-6496 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:141: (dbg) Done: kubectl --context bridge-20210310212817-6496 exec deployment/netcat -- nslookup kubernetes.default: (1.3485848s)
--- PASS: TestNetworkPlugins/group/bridge/DNS (1.36s)

TestNetworkPlugins/group/bridge/Localhost (3.75s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:160: (dbg) Run:  kubectl --context bridge-20210310212817-6496 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"

=== CONT  TestNetworkPlugins/group/bridge/Localhost
net_test.go:160: (dbg) Done: kubectl --context bridge-20210310212817-6496 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080": (3.7428311s)
--- PASS: TestNetworkPlugins/group/bridge/Localhost (3.75s)

TestNetworkPlugins/group/kubenet/KubeletFlags (3.94s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kubenet-20210310213042-6496 "pgrep -a kubelet"

=== CONT  TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kubenet-20210310213042-6496 "pgrep -a kubelet": (3.9376004s)
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (3.94s)

TestNetworkPlugins/group/bridge/HairPin (2.6s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:173: (dbg) Run:  kubectl --context bridge-20210310212817-6496 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

=== CONT  TestNetworkPlugins/group/bridge/HairPin
net_test.go:173: (dbg) Done: kubectl --context bridge-20210310212817-6496 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": (2.5881333s)
--- PASS: TestNetworkPlugins/group/bridge/HairPin (2.60s)

TestNetworkPlugins/group/kubenet/NetCatPod (37.46s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:110: (dbg) Run:  kubectl --context kubenet-20210310213042-6496 replace --force -f testdata\netcat-deployment.yaml

=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:110: (dbg) Done: kubectl --context kubenet-20210310213042-6496 replace --force -f testdata\netcat-deployment.yaml: (2.2493811s)
net_test.go:124: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:335: "netcat-66fbc655d5-5fb96" [500732d4-3373-40ab-91b2-6c55467c3d8b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
helpers_test.go:335: "netcat-66fbc655d5-5fb96" [500732d4-3373-40ab-91b2-6c55467c3d8b] Running
net_test.go:124: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 34.2058387s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (37.46s)

TestNetworkPlugins/group/enable-default-cni/DNS (1.06s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:141: (dbg) Run:  kubectl --context enable-default-cni-20210310212126-6496 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:141: (dbg) Done: kubectl --context enable-default-cni-20210310212126-6496 exec deployment/netcat -- nslookup kubernetes.default: (1.054169s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (1.06s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.78s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:160: (dbg) Run:  kubectl --context enable-default-cni-20210310212126-6496 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.78s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.91s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:173: (dbg) Run:  kubectl --context enable-default-cni-20210310212126-6496 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.91s)

TestNetworkPlugins/group/kubenet/DNS (1.12s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:141: (dbg) Run:  kubectl --context kubenet-20210310213042-6496 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:141: (dbg) Done: kubectl --context kubenet-20210310213042-6496 exec deployment/netcat -- nslookup kubernetes.default: (1.1094507s)
--- PASS: TestNetworkPlugins/group/kubenet/DNS (1.12s)

TestNetworkPlugins/group/kubenet/Localhost (1.58s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:160: (dbg) Run:  kubectl --context kubenet-20210310213042-6496 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"

=== CONT  TestNetworkPlugins/group/kubenet/Localhost
net_test.go:160: (dbg) Done: kubectl --context kubenet-20210310213042-6496 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080": (1.3413249s)
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (1.58s)

TestNetworkPlugins/group/kubenet/HairPin (1.08s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin

=== CONT  TestNetworkPlugins/group/kubenet/HairPin
net_test.go:173: (dbg) Run:  kubectl --context kubenet-20210310213042-6496 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

=== CONT  TestNetworkPlugins/group/kubenet/HairPin
net_test.go:173: (dbg) Done: kubectl --context kubenet-20210310213042-6496 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": (1.0739083s)
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (1.08s)

Test skip (19/176)

TestDownloadOnly/v1.14.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.14.0/cached-images
aaa_download_only_test.go:116: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.14.0/cached-images (0.00s)

TestDownloadOnly/v1.20.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.2/cached-images
aaa_download_only_test.go:116: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.2/cached-images (0.00s)

TestDownloadOnly/v1.20.5-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.5-rc.0/cached-images
aaa_download_only_test.go:116: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.5-rc.0/cached-images (0.00s)

TestAddons/parallel/Registry (42.08s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:211: registry stabilized in 90.0253ms

=== CONT  TestAddons/parallel/Registry
addons_test.go:213: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:335: "registry-2gkgh" [df9ae87f-1229-4518-b2f2-061949a1e257] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:213: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.1659001s
addons_test.go:216: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:335: "registry-proxy-5qn2p" [a260c4fc-f64b-4116-a145-e727540dad03] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:216: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0948928s
addons_test.go:221: (dbg) Run:  kubectl --context addons-20210310190531-6496 delete po -l run=registry-test --now
addons_test.go:226: (dbg) Run:  kubectl --context addons-20210310190531-6496 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:226: (dbg) Done: kubectl --context addons-20210310190531-6496 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (31.1296052s)
addons_test.go:236: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (42.08s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:373: Skipping olm test till this timeout issue is solved https://github.com/operator-framework/operator-lifecycle-manager/issues/1534#issuecomment-632342257
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:42: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:186: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.03s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:551: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url -p functional-20210310191609-6496 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:562: output didn't produce a URL
functional_test.go:556: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url -p functional-20210310191609-6496 --alsologtostderr -v=1] ...
helpers_test.go:481: unable to find parent, assuming dead: process does not exist
--- SKIP: TestFunctional/parallel/DashboardCmd (300.03s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
fn_mount_cmd_test.go:55: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/ServiceCmd (101.42s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:974: (dbg) Run:  kubectl --context functional-20210310191609-6496 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:980: (dbg) Run:  kubectl --context functional-20210310191609-6496 expose deployment hello-node --type=NodePort --port=8080

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:980: (dbg) Done: kubectl --context functional-20210310191609-6496 expose deployment hello-node --type=NodePort --port=8080: (1.5500149s)
functional_test.go:985: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:335: "hello-node-6cbfcd7cbc-9qfsw" [95e18738-e4d4-4b7f-acef-874cc6b8a09c] Pending

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:335: "hello-node-6cbfcd7cbc-9qfsw" [95e18738-e4d4-4b7f-acef-874cc6b8a09c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:335: "hello-node-6cbfcd7cbc-9qfsw" [95e18738-e4d4-4b7f-acef-874cc6b8a09c] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:985: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 1m35.0432565s
functional_test.go:989: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20210310191609-6496 service list

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:989: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20210310191609-6496 service list: (4.2677273s)
functional_test.go:998: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmd (101.42s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
fn_tunnel_cmd_test.go:187: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
fn_tunnel_cmd_test.go:95: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
fn_tunnel_cmd_test.go:95: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
fn_tunnel_cmd_test.go:95: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:33: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:78: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestNetworkPlugins/group/flannel (0s)

=== RUN   TestNetworkPlugins/group/flannel

=== CONT  TestNetworkPlugins/group/flannel
net_test.go:66: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
--- SKIP: TestNetworkPlugins/group/flannel (0.00s)

TestStartStop/group/disable-driver-mounts (5.88s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:89: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:171: Cleaning up "disable-driver-mounts-20210310205156-6496" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-20210310205156-6496

=== CONT  TestStartStop/group/disable-driver-mounts
helpers_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe delete -p disable-driver-mounts-20210310205156-6496: (5.8822651s)
--- SKIP: TestStartStop/group/disable-driver-mounts (5.88s)